Cybersecurity and Data Privacy Outlook and Review: 2015

February 17, 2015

Concerns about cybersecurity and data privacy have exploded into the public consciousness in recent years, accompanied by a host of new and rapidly developing legal issues.  From data breaches potentially affecting millions of consumers, to increasingly active policing of cybersecurity by the FTC and other U.S. regulators, to the protection of “the right to be forgotten” in the European Union, the headlines have been filled with cybersecurity and data privacy news and legal developments–and there is no end in sight.

In this annual edition of Gibson Dunn’s Cybersecurity and Data Privacy Outlook and Review, the firm’s Information Technology and Data Privacy group describes key data privacy and security events from 2014 and sets forth anticipated trends for the near future.  The topics covered are: (i) civil litigation; (ii) regulatory and policy developments; (iii) legislative developments; (iv) criminal enforcement; and (v) select international developments in the European Union and the Asia-Pacific region.


Table of Contents

I.       Class Action and Civil Litigation Developments

A.     Article III Standing

1.     Statutory Rights of Action As Substitute for Harm

2.     Theories of Harm in the Data Breach Context

3.     Resource Consumption and Overpayment as Theories of Harm

4.     Requirement of Certainly Impending Harm

B.     Substantive Trends in Data Privacy Class Actions

1.     Data Breach Litigation

2.     Email Scanning Litigation

3.     VPPA Litigation

4.     ECPA Litigation and the “Contents of Communications”

5.     California’s Song-Beverly Credit Card Act and Point-of-Service Data Collection

6.     TCPA Litigation

II.     Regulatory and Policy Developments

A.     FTC Enforcement Trends

1.     Cybersecurity, Data Breaches, and Legal Challenges to the FTC’s Authority

2.     The U.S.-EU Safe Harbor

3.     High-Profile FTC Consent Decrees

B.     The FTC’s Revised COPPA Rule

C.     FCC Guidance and Amendments to the TCPA

D.     The NIST Cybersecurity Framework

III.    Legislative Developments

A.     Proposed Federal Data Breach Notification and Cybersecurity Legislation

1.     Legislation Arising From Prominent Retailer Data Breaches

2.     Cybersecurity Legislative Efforts

3.     Health Exchange Security and Transparency Act

4.     The Law Enforcement Access to Data Stored Abroad Act

5.     Protecting Student Privacy Act

6.     Do Not Track Kids Act

7.     The Edward Snowden Affair and NSA Surveillance

B.     Recently Enacted State Privacy Laws

1.     Data Breach Notification

2.     Credit Card Monitoring After Data Breach

3.     Social Media Access

4.     Drone Regulation

5.     California’s “Do Not Track” Law

6.     California’s “Digital Eraser” Law

7.     California’s Privacy for Student Records Laws

C.     Legislative Outlook

IV.    Criminal Enforcement

A.     Fourth Amendment Developments

1.     United States v. Rigmaiden

2.     Cell Phones and Warrantless Searches

B.     Identity Theft and Carding Crimes

1.     United States v. Lazar (E.D. Va.)

2.     United States v. Vega (E.D.N.Y.)

C.     Money Laundering

1.     United States v. Dotcom (E.D. Va.)

2.     United States v. Faiella (S.D.N.Y.)

3.     United States v. Liberty Reserve S.A. (S.D.N.Y.)

D.     Economic Espionage Act

1.     United States v. Aleynikov (2d Cir.) and United States v. Agrawal (2d Cir.)

2.     United States v. Liew (N.D. Cal.)

3.     United States v. Wang Dong (W.D. Pa.)

4.     United States v. Leroux (D. Del.)

E.      Computer Fraud and Abuse Act

1.     United States v. Nosal (N.D. Cal.)

2.     Hacktivism

F.      The Year Ahead

V.     International Developments

A.     European Union

1.     Developments at the European Union Level

2.     France

3.     Germany

4.     United Kingdom

5.     Other European Nations

B.     Asia-Pacific Region

1.     India

2.     China and Hong Kong

3.     Japan

4.     South Korea

5.     Malaysia

6.     Singapore

C.     Other International Developments of Note


I.   Class Action and Civil Litigation Developments

The pace of litigation related to the alleged unauthorized collection, use, or disclosure of consumer information has continued to increase.  In the past year, a flurry of decisions at the district and circuit court levels has grappled with plaintiffs’ standing, pleading requirements, and the enforceability of arbitration clauses and class action waivers, in addition to substantive data privacy law.

      A.   Article III Standing

As the plaintiffs’ bar continues to adapt and bring claims predicated on novel theories of harm, litigants continue to contest Article III standing challenges in data privacy cases.  As Magistrate Judge Grewal observed in In re Google, Inc. Privacy Policy Litigation:

[D]espite generating little or no discussion in most other cases, the issue of injury-in-fact has become standard fare in cases involving data privacy.  In fact, the court is hard-pressed to find even one recent data privacy case, at least in this district, in which injury-in-fact has not been challenged.  Second, in this district’s recent case law on data privacy claims, injury-in-fact has proven to be a significant barrier to entry.  And so even though injury-in-fact may not generally be Mount Everest, as then-Judge Alito observed, in data privacy cases in the Northern District of California, the doctrine might still reasonably be described as Kilimanjaro.

No. 12-CV-01382, 2013 WL 6248499, at *4 (N.D. Cal. Dec. 3, 2013) (finding that allegations of loss of personal identifying information were insufficient to establish injury-in-fact, but certain alleged economic and statutory injuries were sufficient to support Article III standing); see also In re Google, Inc. Privacy Policy Litig., No. 12-CV-01382, 2014 WL 3707508, at *4 (N.D. Cal. July 21, 2014) (reviewing second amended complaint and dismissing claims premised on allegations of conjectural heightened security risk from data disclosure, but holding other alleged economic theories sufficient to support Article III standing).  Judge Grewal’s statement that “injury-in-fact has proven to be a significant barrier to entry” to data privacy plaintiffs largely continues to hold true, even in the face of recent decisions showing an increased tolerance for claims predicated on theories of future harm or statutes requiring no showing of actual harm.

            1.   Statutory Rights of Action As Substitute for Harm

Where plaintiffs might not otherwise be able to satisfy Article III standing requirements–in particular the element of actual injury–they have seen increased success in predicating privacy claims on statutory rights of action, which some courts have found do not require actual injury.  See Robins v. Spokeo, Inc., 742 F.3d 409, 414 (9th Cir. 2014), petition for cert. filed, No. 13-1339; Edwards v. First Am. Corp., 610 F.3d 514, 517 (9th Cir. 2010) (the injury required by Article III “may exist solely by virtue of ‘statutes creating legal rights, the invasion of which creates standing’”); In re Google, Inc. Privacy Policy Litig., 2013 WL 6248499, at *8-9.  In Robins v. Spokeo, the Ninth Circuit held that the plaintiff could adequately plead Article III standing, despite lack of actual harm, by alleging a claim for a willful violation of the Fair Credit Reporting Act (“FCRA”) (15 U.S.C. § 1681).  742 F.3d at 414.  In so holding, the Ninth Circuit followed its earlier decision in Edwards v. First American Corporation.  Thus, the Ninth Circuit, at least, has given plaintiffs in putative data privacy class actions a stronger foothold upon which to satisfy the Article III standing requirement and seek enforcement of federal or state statutes concerning data privacy rights.

The federal statutes most frequently utilized by data privacy plaintiffs to allege violations of statutorily imposed duties, and thus standing in the absence of injury, are the Wiretap Act (18 U.S.C. §§ 2510, et seq.) and the Stored Communications Act (“SCA”) (18 U.S.C. §§ 2701, et seq.).  See, e.g., Perkins v. LinkedIn Corp., No. 13-CV-04303-LHK, 2014 WL 2751053 (N.D. Cal. June 12, 2014); see also In re iPhone Application Litig., 844 F. Supp. 2d 1040 (N.D. Cal. 2012); In re Facebook Privacy Litig., 791 F. Supp. 2d 705 (N.D. Cal. 2011).  In suits against electronic platforms that offer video content, plaintiffs also increasingly have alleged violations of the Video Privacy Protection Act (“VPPA”) (18 U.S.C. § 2710).  See, e.g., Sterk v. Redbox Automated Retail, LLC, 770 F.3d 618 (7th Cir. 2014); In re Nickelodeon Consumer Privacy Litig., No. CIV.A. 12-07829, 2014 WL 3012873 (D.N.J. July 2, 2014).

State statutes also may provide a path to Article III standing.  See, e.g., In re Google, Inc. Privacy Policy Litigation, 2013 WL 6248499, at *9.  The court in In re Google, Inc. Privacy Policy Litigation found that the plaintiff could satisfy standing obligations pursuant to a state law, California Civil Code § 3344, which prohibits the commercial use of another’s name or likeness.  Id. (“Where a plaintiff alleges an unauthorized commercial use of a person’s name or likeness, courts generally presume that [injury] has been established for a Section 3344 claim.”) (internal quotation marks omitted).  By contrast, however, in Mendoza v. Microsoft Inc., the court granted Microsoft’s motion to dismiss on standing grounds where the plaintiffs offered little more than “broad conclusory statements and formulaic recitations of the VPPA and [California Customer Records Act] statutes . . . without a single fact to support their allegation that Microsoft allegedly retained and disclosed personally identifiable information.”  No. C-14-316, 2014 WL 4540213, at *3 (W.D. Wash. Sept. 11, 2014).

Practitioners continue to await guidance from the U.S. Supreme Court in this area, and action by the Court may be imminent.  In 2012, privacy practitioners anxiously awaited the Supreme Court’s anticipated ruling in First American Financial Corp. v. Edwards, a decision many hoped would resolve the issue of whether an alleged statutory violation alone is sufficient to create Article III standing where the plaintiff fails to allege any actual harm.  In June 2012, however, the Supreme Court dismissed the petition for certiorari as having been improvidently granted, leaving intact the Ninth Circuit’s decision that the standing requirement had been satisfied.  First Am. Fin. Corp. v. Edwards, 132 S. Ct. 2536 (2012).  Then in March 2014, after the Eighth Circuit found that a plaintiff had standing under the “informational injury” provision of the Electronic Fund Transfer Act (“EFTA”), the Supreme Court denied certiorari, again delaying resolution of the issue.  Charvat v. Mutual First Fed. Credit Union, 725 F.3d 819 (8th Cir. 2013), cert. denied, 134 S. Ct. 1515 (2014).

As of this writing, a petition for certiorari in Robins v. Spokeo is pending.  Robins v. Spokeo, Inc., 742 F.3d 409, 414 (9th Cir. 2014), petition for cert. filed, 82 U.S.L.W. 3689 (U.S. May 1, 2014) (No. 13-1339).  Petitioner Spokeo pointed out a deep circuit split, pitting the Ninth Circuit and several other circuit courts against the Second and Fourth Circuits.  See Kendall v. Employees Ret. Plan of Avon Prods., 561 F.3d 112, 121 (2d Cir. 2009) (holding allegations of breached statutory duties in the ERISA context do not “in and of themselves constitute[] an injury-in-fact sufficient for constitutional standing”); David v. Alphin, 704 F.3d 327, 338 (4th Cir. 2013) (holding that a theory of standing based on deprivation of statutory rights without injury-in-fact impermissibly “conflates statutory standing with constitutional standing”).  Prominent technology companies have jointly filed an amicus brief in support of Spokeo’s petition.  On October 6, 2014, the Court called for the views of the U.S. Solicitor General, perhaps signaling the Court’s interest in resolving the circuit split.

Until the Supreme Court speaks, federal courts remain divided on whether the mere assertion that a statutory right has been violated is sufficient to confer Article III standing at the pleadings stage.

            2.   Theories of Harm in the Data Breach Context

While standing based on statutory rights of action alone remains a hotly debated but unsettled issue, plaintiffs also continue to rely on more concrete, albeit attenuated, theories of harm.  The closely watched Target breach litigation raised the issue of whether plaintiffs suffered harm in connection with a data breach targeting a national retail chain.  In this multidistrict litigation, a Minnesota district court found that plaintiffs satisfied the standing requirements, at least at the pleading stage, by alleging that they suffered “unlawful charges, restricted or blocked access to bank accounts, inability to pay other bills, and late payment charges or new card fees.”  In re Target Corp. Customer Data Sec. Breach Litig., No. MDL 14-2522, 2014 WL 7192478, at *2 (D. Minn. Dec. 18, 2014).  Target had argued that the plaintiffs did not allege injury because they failed to “allege that their expenses were unreimbursed or say whether they or their bank closed their accounts.”  Id.  But the court found that those arguments “set a too-high standard for Plaintiffs to meet” and that “Plaintiffs’ allegations plausibly allege that they suffered injuries that are ‘fairly traceable’ to Target’s conduct.”  Id. (citations omitted).

            3.   Resource Consumption and Overpayment as Theories of Harm

Plaintiffs have also continued the recent trend of alleging theories of harm (1) to their electronic devices in the form of unexpected “resource consumption,” and (2) in the form of “overpayment” (i.e., by asserting that a plaintiff would not have purchased the good or service at issue or would have paid less for it had the “true facts” been disclosed to him or her).  For example, both the first and second amended complaints in In re Google, Inc. Privacy Policy Litigation included allegations regarding battery and bandwidth usage and overpayment, which the court found adequate at the pleadings stage to establish cognizable injury for Article III standing purposes.  2013 WL 6248499, at *6-7; 2014 WL 3707508, at *6-7.

No matter the theory of harm offered by plaintiffs, given courts’ continuing uncertainty regarding speculative damages in the data privacy context, defendants should not lose sight of the standing issue (which goes to the court’s subject matter jurisdiction) if the complaint survives a standing challenge based on the pleadings.  As the U.S. Supreme Court held in Lujan v. Defenders of Wildlife, 504 U.S. 555, 561 (1992), a plaintiff bears the burden of proving standing under Article III “with the manner and degree of evidence required at the successive stages of the litigation.”  At the pleadings stage, “general factual allegations of injury resulting from the defendant’s conduct may suffice,” but “[i]n response to a summary judgment motion, . . . the plaintiff can no longer rest on such ‘mere allegations,’ but must ‘set forth’ by affidavit or other evidence ‘specific facts’ to support standing.”  Id. at 561; see also In re Target Corp. Customer Data Sec. Breach Litig., 2014 WL 7192478, at *2 (“[if] discovery fail[s] to bear out Plaintiffs’ allegations, Target may move for summary judgment on the issue [of standing]”); In re Google, Inc. Privacy Policy Litig., 2014 WL 3707508, at *7 (noting challenges to “causal nexus between [the] alleged conduct and the Plaintiffs’ alleged injury [] require[] a heavily and inherently fact-bound inquiry that the court may not reach at this stage in the litigation”).  Accordingly, a defendant in a data privacy case may wish to consider, particularly in developing its discovery strategy, whether standing may be challenged at a later stage of litigation, such as the summary judgment stage.

            4.   Requirement of Certainly Impending Harm

Courts in data breach cases are now grappling with how to apply the holding of Clapper v. Amnesty International, a key 2013 Supreme Court decision focusing on the issue of Article III standing.  133 S. Ct. 1138 (2013).  In Clapper, human rights organizations and media groups challenged the constitutionality of an amendment to the Foreign Intelligence Surveillance Act that made it easier for the government to obtain wiretaps on intelligence targets outside of the United States.  The plaintiffs, all U.S. citizens, alleged that they had standing because their work included telephone and email communications with people who were likely foreign targets of surveillance and such communications could be intercepted in the future.  The plaintiffs also alleged that they had suffered injury by undertaking costly steps to protect their communications from surveillance.

The Supreme Court held that the allegations of potential interception of attorney-client privileged communications were too speculative to sustain a claim, determining that “a highly attenuated chain of possibilities[] does not satisfy the requirement that threatened injury must be certainly impending” and that plaintiffs cannot manufacture standing “merely by inflicting harm on themselves based on their fears of hypothetical future harm.”  Id. at 1148.

Based on Clapper, several lower courts have since held that an increased risk of future harm is not sufficient to establish standing because such harm typically is not imminent.  See, e.g., Remijas v. Neiman Marcus Grp., LLC, No. 14-C-1735, 2014 WL 4627893, at *4 (N.D. Ill. Sept. 16, 2014) (“[T]he complaint does not adequately allege that the risk of identity theft is sufficiently imminent to confer standing.”); In re Sci. Applications Int’l Corp. Backup Tape Data Theft Litig., No. MDL-2360, 2014 WL 1858458 (D.D.C. May 9, 2014) (finding that “[t]he degree by which the risk of harm has increased is irrelevant–instead, the question is whether the harm is certainly impending”); Strautins v. Trustwave Holdings, Inc., No. 12-CV-09115, 2014 WL 960816 (N.D. Ill. Mar. 12, 2014) (“To the extent that [plaintiff’s claims] are premised on the mere possibility that her [] [personal information] was stolen and compromised, and a concomitant increase in the risk that she will become a victim of identity theft, Strautins’ claim is too speculative to confer Article III standing.”).

Other district courts, however, have taken a narrower view of Clapper.  In one recent data breach case, a district court found that Clapper did not set forth a new Article III framework; rather, it “simply reiterated an already well-established framework for assessing whether a plaintiff had sufficiently alleged an ‘injury-in-fact’ for purposes of establishing Article III standing.”  In re Sony Gaming Networks & Customer Data Sec. Breach Litig., 996 F. Supp. 2d 942, 961 (S.D. Cal. 2014) (finding allegations that personal information was collected and wrongfully disclosed via a breach and subject to a “credible threat” of impending harm sufficient to establish Article III standing at the pleading stage); see also In re Adobe Sys., Inc. Privacy Litig., No. 13-CV-05226-LHK, 2014 WL 4379916, at *7 (N.D. Cal. Sept. 4, 2014) (holding that the threat of future harm was sufficient to satisfy Article III standing requirements and noting that “the Court is reluctant to conclude that Clapper represents the sea change that Adobe suggests”).  We anticipate that the import of Clapper will continue to be vigorously litigated, but for now, it remains a potentially powerful shield for defendants combatting nonspecific allegations of indeterminate harm.

      B.   Substantive Trends in Data Privacy Class Actions

            1.   Data Breach Litigation

The pace, scope, and sophistication of data breaches and cyberattacks continued to increase in 2014, placing businesses’ data security practices under heightened scrutiny from consumers, private litigants, and regulators.  Breaches can expose the data of millions of individual consumers, resulting in potentially massive liability.  As a result, companies may wish to consider such exposure and consult with experienced counsel when making decisions about data security measures, developing a data breach response plan before an incident occurs, and taking responsive action at the first sign of a potential breach.  Although an early and informed response may not altogether prevent a wave of putative class action suits, it makes it easier for a company to mount an effective defense.

While we have yet to see a data breach class action successfully reach a jury verdict, in 2014 plaintiffs survived motions to dismiss in a number of key cases.  This section examines data breach class action suits in the following postures: (a) those that have been dismissed due to lack of standing; (b) those that have survived motions to dismiss despite alleging only an increased risk of future harm; (c) those that have survived motions to dismiss by alleging more than a risk of future harm; and (d) those that have just recently been filed.

                    a.   Cases Dismissed for Lack of Standing

Despite the proliferation of data breach class actions, plaintiffs still face significant obstacles in getting their claims into court.  The greatest roadblock–as discussed above–continues to be establishing standing under Article III of the U.S. Constitution, and most suits fail at this stage.  In data breach cases, standing is a significant issue when personal information has been exposed or stolen but there is no evidence that it has been misused.  In these cases, plaintiffs seek to establish standing based on a fear of potential future harm, such as identity theft or fraud.

Several defendants have successfully filed motions to dismiss for lack of standing by relying on the 2013 Supreme Court case, Clapper v. Amnesty International, discussed in greater detail above.

Recent examples of class action data breach lawsuits dismissed for lack of standing illustrate the difficult standard plaintiffs must meet to demonstrate actual injury.  When Nationwide, P.F. Chang’s China Bistro, and Neiman Marcus each reported massive consumer data breaches, several of their customers filed putative class actions, but failed to move beyond the motion to dismiss stage.  In all three actions, the courts found a lack of standing on the basis that an increased risk of identity theft or costs associated with mitigating that risk did not sufficiently demonstrate a redressable injury.  Galaria v. Nationwide Mut. Ins. Co., 998 F. Supp. 2d 646, 654 (S.D. Ohio 2014) (holding that “an increased risk of identity theft . . . is not itself an injury-in-fact because Named Plaintiffs did not allege . . . that such harm is ‘certainly impending’”); Lewert v. P.F. Chang’s China Bistro, No. 14-cv-4787, 2014 U.S. Dist. LEXIS 171142, at *8-9 (N.D. Ill. Dec. 10, 2014) (holding that speculation of future harm–such as potential identity theft–does not constitute actual injury and any unauthorized charges and bank fees would have been reimbursed by banks) (notice of appeal pending before Seventh Circuit); Remijas v. Neiman Marcus Group, LLC, No. 14 C 1735, 2014 U.S. Dist. LEXIS 129574, at *9 (N.D. Ill. Sept. 16, 2014) (holding that while increased risk of fraudulent charges was sufficiently imminent under Clapper because 9,200 stolen cards had already been misused, plaintiffs would not suffer any concrete harm given banks’ reimbursement policies).

                    b.   Cases Where an Increased Risk of Harm Was Sufficient to Confer Standing

While most cases to date have failed when plaintiffs cannot allege that their information has actually been misused, two district courts, both within the Ninth Circuit, found standing in 2014 under exactly those circumstances.

First, in In re Sony Gaming Networks & Customer Data Security Breach Litigation, hackers obtained data for as many as 31 million Sony users through the PlayStation network, including credit and debit card information.  In response to Sony’s first motion to dismiss, the district court cited Krottner v. Starbucks, 628 F.3d 1139 (9th Cir. 2010), and held that plaintiffs had shown standing based on an increased risk of future harm.  In re Sony Gaming Networks & Customer Data Sec. Breach Litig., 903 F. Supp. 2d 942, 958 (S.D. Cal. 2012).  Sony then asked the court to reevaluate its opinion in light of the Supreme Court’s holding in Clapper, but the court once again found that plaintiffs had standing, holding that neither Krottner nor Clapper requires plaintiffs to allege that information was misused by a third party.  In re Sony Gaming Networks & Customer Data Sec. Breach Litig., 996 F. Supp. 2d 942 (S.D. Cal. 2014).  The court further held that Clapper had not set forth a new Article III framework overruling Krottner’s standard that injury be “real and immediate.”  Id. at 961.  The court left eight of the fifty-three claims intact, dismissing the others.  In July 2014, Sony agreed to a $15 million preliminary settlement, which the court will review in a final fairness hearing in May 2015.

Second, Adobe Systems was hit with several putative class actions following a 2013 attack on its network that compromised the private information of approximately 38 million customers.  Several of these cases were consolidated in the U.S. District Court for the Northern District of California, and plaintiffs filed a consolidated class action complaint in April 2014.  The court, in response to Adobe’s motion to dismiss for lack of standing, found that “the threatened harm alleged here is sufficiently concrete and imminent to satisfy Clapper” because plaintiffs’ personal information (including names, usernames, passwords, phone numbers, addresses, and credit card numbers) had allegedly been stolen during the breach, and had in some instances already surfaced on the Internet.  In re Adobe Systems Inc. Privacy Litig., No. 13-CV-05226-LHK, 2014 U.S. Dist. LEXIS 124126, at *27 (N.D. Cal. Sept. 4, 2014).  Accordingly, the court held that “there is no need to speculate as to whether Plaintiffs’ information has been stolen . . . [or] whether the hackers intend to misuse the personal information . . . or whether they will be able to do so.”  Id. at *28.  Finally, since the court found that the threatened harm was certainly impending, it held that costs for credit-monitoring services were also an injury that conferred standing.

                    c.   Cases Alleging More Than an Increased Risk of Harm

While plaintiffs have been mostly unsuccessful at establishing standing based on increased risk of future misuse of their personal information, they have more effectively defeated motions to dismiss when their alleged injuries have extended beyond risk of future harm.  In 2012, hackers infiltrated LinkedIn’s computer systems and posted the passwords of approximately 6.5 million users on the Internet.  Within days, plaintiffs filed suit, alleging breach of contract and violations of both the fraud and unfair business practices prongs of California’s Unfair Competition Law (“UCL”).  The court dismissed the named plaintiff’s initial complaint for lack of standing because she had only alleged an increased risk of future harm without alleging actual misuse of her information.  In her second amended complaint, the plaintiff alleged that she was among a group of individuals who had paid for LinkedIn’s premium subscription in reliance on LinkedIn’s Privacy Policy, which had stated that LinkedIn had adequate security procedures.  Accordingly, she asserted that LinkedIn’s failure to adhere to industry standards and its Privacy Policy had caused the breach that revealed her password.  The plaintiff’s allegation that she had acted in reliance upon LinkedIn’s misrepresentation in its Privacy Policy, and would not have purchased a premium subscription otherwise, proved sufficient to confer standing under both Article III and California’s UCL.  In re LinkedIn User Privacy Litig., No. 5:12-CV-03088-EJD, 2014 U.S. Dist. LEXIS 42696, at *11 (N.D. Cal. March 28, 2014).  The judge dismissed most of the claims, but allowed the plaintiffs to proceed with the fraud claim under the UCL.  LinkedIn has since agreed to pay $1.25 million to settle this suit, and the court is scheduled to review the parties’ proposed settlement this month.

In December 2013, Target experienced a massive data breach that compromised credit card information for around 40 million customers and personal information for about 70 million customers.  The company was subsequently named in over fifty class actions, both on behalf of consumers and on behalf of issuer banks, which were later consolidated in the U.S. District Court for the District of Minnesota.  Upon Target’s motion to dismiss the consumer complaint, the court disagreed with Target’s argument that plaintiffs had not sufficiently demonstrated an injury based on unauthorized credit/debit card charges because there was no indication that these charges had gone unreimbursed.  In re Target Corp. Customer Data Sec. Breach Litig., No. 14-md-2522 PAM/JJK, 2014 WL 7192478 (D. Minn. Dec. 18, 2014).  The court held that this argument “set a too-high standard for Plaintiffs to meet at the motion-to-dismiss stage,” and that it was sufficient for plaintiffs to allege that they had suffered injuries that were “fairly traceable” to Target’s conduct.  Id. at *2.  With respect to the issuer banks’ class complaint, the court likewise denied Target’s motion to dismiss.  In re Target Corp. Customer Data Sec. Breach Litig., No. 14-md-2522 PAM, 2014 U.S. Dist. LEXIS 167802 (D. Minn. Dec. 2, 2014).  Notably, standing was not a concern in this instance, since the plaintiff issuer banks had borne the financial losses arising from fraudulent charges on their customers’ payment cards.  Moreover, the court found that Target owed a duty of care to the issuer banks with regard to its data security practices, and that the breach was foreseeable because Target had deliberately disabled one of the security features that could have prevented the harm.  Id. at *9.  The claims brought on behalf of consumers and banks will now move forward to the class certification stage.

                    d.   Recently Filed Complaints

There are several additional data breach class actions currently pending in courts across the country.  For example, plaintiffs filed a class action complaint against eBay in July 2014, stemming from a cyberattack in which up to 233 million consumers’ personal data allegedly was compromised due to eBay’s lack of sufficient data encryption.  See Collin Green v. eBay Inc., No. 2:14-cv-01688-SM-KWR (E.D. La. July 23, 2014).  eBay has filed a motion to dismiss based on lack of standing under Clapper, which is currently pending before the court.  Several other class actions are currently at the filing stage; it remains to be seen how the decisions in these cases will further shape the nature of the burden that plaintiffs and defendants face to prevail in data breach lawsuits.  See, e.g., Shane K. Enslin, et al. v. The Coca-Cola Co., et al., No. 2:14-cv-06476-JHS (E.D. Pa. Nov. 12, 2014) (putative class action based on theft of 55 computers containing personal information of 74,000 current and former Coca-Cola employees); Barbara Irwin v. Jimmy John’s, No. 2:14-cv-02275-HAB-DGB (C.D. Ill. Nov. 6, 2014) (putative class action based on credit card fraud resulting from data breach at over 200 Jimmy John’s locations and theft of thousands of consumers’ personal information); In re The Home Depot, Inc., Customer Data Sec. Breach Litig., No. 14-md-02583 (N.D. Ga. Dec. 11, 2014) (putative class action based on data breach exposing up to 56 million credit and debit card numbers); Corona, et al. v. Sony Pictures Entertainment, Inc., No. 2:14-cv-9600 (C.D. Cal. Dec. 15, 2014) (action based on data breach that exposed internal emails and the Social Security numbers, employee files, and medical information of over 47,000 current and former employees, allegedly due to inadequate encryption and password protection).

            2.   Email Scanning Litigation

In the last few years, plaintiffs have filed several class action lawsuits against major players in Silicon Valley alleging that scanning user emails for use in targeting advertising violates various state and federal laws.  As is often the case in privacy class actions, the initial proposed classes in some of these suits include all or many users of the services, and therefore the scope of these cases, at least at the outset, is potentially massive.  What is more surprising is that these lawsuits allege privacy violations based on what many consider to be standard industry practices.  Companies operating any sort of electronic communications service should consider the issues raised by these suits, particularly with respect to the permissible collection and use of such communications and the kinds of disclosures that may satisfy consent to such collection and use.

In the first of several suits, collectively known as the In re Google Gmail Litigation, plaintiffs sued Google alleging improper scanning of user emails without consent.  See Dunbar v. Google, Inc., No. 10-cv-194 (E.D. Tex. Nov. 17, 2010).  By May 2013, the Dunbar action and six other actions involving substantially similar allegations against Google were centralized into a multidistrict action before U.S. District Judge Lucy H. Koh of the Northern District of California.  Plaintiffs in the seven actions together filed a consolidated complaint in May 2013, asserting violations of the federal Electronic Communications Privacy Act (“ECPA”) (18 U.S.C. §§ 2510, et seq.), the California Invasion of Privacy Act (“CIPA”) (Cal. Penal Code §§ 631 and 632), and various state laws.  In re Google Gmail Litig., No. 13-md-02430, Dkt. No. 38.  Broadly stated, each plaintiff alleged that Google mined the content of private Gmail messages without users’ permission, for the purpose of targeting advertising, resulting in financial gain for the company.

Google moved to dismiss the consolidated complaint shortly thereafter, asserting, among other things, that scanning emails fell within ECPA’s exemption for activities taking place in the “ordinary course of its business,” and that, in any event, plaintiffs consented to scanning of their emails by agreeing to Google’s terms of service and privacy policies.  Judge Koh denied this motion to dismiss in September 2013.  She held that plaintiffs plausibly alleged that Google’s scanning of emails is not in its ordinary course of business because it is contrary to Google’s stated practices and is not instrumental to Google’s ability to transmit emails.  Judge Koh also held that the plaintiffs neither expressly nor impliedly consented to the scanning of their emails by accepting Google’s terms of service and privacy policies, since those policies merely disclosed the possibility, not the certainty, that Google scans emails, and did not disclose scanning for the specific purposes alleged by plaintiffs.

Judge Koh also denied Google’s motion to dismiss the CIPA § 631 claim, holding that CIPA does apply to email communications and that the public utility exception did not apply.  She did, however, grant Google’s motion to dismiss plaintiffs’ CIPA § 632 claim, holding that Internet-based communications cannot be “confidential” under CIPA.  Finally, Judge Koh granted Google’s motion to dismiss some of plaintiffs’ other state-law claims, but declined to dismiss those that derived from the ECPA claims.  Google then sought interlocutory review of the court’s order denying its motion to dismiss, requesting clarification of the “ordinary course of business” and “consent” exceptions to ECPA, but Judge Koh likewise denied this motion.

In March 2014, Judge Koh also denied plaintiffs’ motion for class certification, holding that individual issues regarding whether members of the various classes consented to the alleged interceptions would predominate over common issues.  The plaintiffs sought permission to appeal the decision under Federal Rule of Civil Procedure 23(f), but the Ninth Circuit denied the request.  The parties then stipulated to dismissal of all claims with prejudice.

In October 2013, shortly after Judge Koh’s decision denying Google’s motion to dismiss, six separate class action complaints were filed against Yahoo! alleging similar theories, each accusing the company of scanning emails for purposes of targeted advertising and user profiling in violation of plaintiffs’ privacy rights.  In January 2014, two plaintiffs stipulated to dismissal of their claims, and Judge Koh consolidated the remaining four cases.  See Holland et al. v. Yahoo! Inc., No. 13-cv-04980, Dkt. No. 27 (Jan. 22, 2014).  Plaintiffs filed a consolidated class action complaint in February 2014, and the following month, Yahoo! filed a motion to dismiss.

In August 2014, Judge Koh issued an opinion, without oral argument, granting in part and denying in part Yahoo!’s motion.  The court granted Yahoo!’s motion to dismiss the ECPA claim, finding that Yahoo!’s terms of service established express consent under ECPA, since they explicitly disclosed Yahoo!’s practice of scanning emails in order to target advertising and create user profiles.  The court also granted Yahoo!’s motion to dismiss plaintiffs’ claim under the SCA alleging that Yahoo! accessed stored communications, since electronic service providers have immunity from such claims.  The court also dismissed plaintiffs’ claim under the California Constitution, which requires that plaintiffs plead specific content in which they allege a privacy interest.  However, the court denied Yahoo!’s motion to dismiss plaintiffs’ CIPA § 631 claim and their claim under the SCA alleging that Yahoo! disclosed emails without authorization.  The plaintiffs did not file an amended complaint, and the parties are conducting discovery.  Plaintiffs have indicated that they will seek to certify only a Rule 23(b)(1) and/or (b)(2) class (not a (b)(3) “damages” class), perhaps in an effort to avoid the predominance issues that doomed the Gmail case.

            3.   VPPA Litigation

Plaintiffs have continued to bring putative privacy class action claims under previously infrequently litigated statutes like the Video Privacy Protection Act (“VPPA”), 18 U.S.C. § 2710.  The VPPA creates significant monetary exposure via a minimum $2,500 per-person liquidated damages provision for “video tape service providers” that knowingly disclose “personally identifiable information concerning any consumer,” subject to certain exceptions.  A plaintiff asserting a VPPA violation typically argues that the website publisher has violated the statute by disclosing the plaintiff’s video viewing information in connection with a device identifier to third-party analytics companies or advertising networks.

A California federal magistrate judge ruled in 2012 that online digital content distributor Hulu was a “video tape service provider” within the meaning of the Act, even though Hulu does not distribute physical video tapes.  In re Hulu Privacy Litig., No. 11-cv-3764 LB, 2012 WL 3282960, at *6 (N.D. Cal. Aug. 10, 2012) (Beeler, Mag. J.) (analyzing the legislative history of the statute and the ordinary meaning of “audio visual materials”).  Hulu subsequently moved for summary judgment on the basis that the plaintiffs had no evidence of actual injury, arguing that such injury is required by the statute.  On December 20, 2013, the court issued an order solely addressing the question of whether the VPPA requires plaintiffs to show actual injury separate from a statutory violation.  In re Hulu Privacy Litig., 2013 WL 6773794 (N.D. Cal. Dec. 20, 2013).  In a decision that adds to the split of authority on this issue, the court rejected Hulu’s argument that the word “aggrieved” in the statute requires an additional injury, concluding that the VPPA “requires only injury in the form of a wrongful disclosure.”  Id. at *4.  The court refused to credit Hulu’s reliance on Sterk v. Best Buy Stores, L.P., No. 11-cv-1894, 2012 WL 5197901 (N.D. Ill. Oct. 17, 2012), for the proposition that actual injury is a prerequisite to recovering any damages under the VPPA.  Id. at *8.  The court instead concluded that actual injury is not required by the statute, in part because “the Ninth Circuit recognizes that a plaintiff satisfies Article III’s injury-in-fact requirement by alleging a violation of a statutorily-created right.”  Id. at *8 (citing Edwards v. First Am. Corp., 610 F.3d 514, 515-16 (9th Cir. 2010)).

Hulu brought a second motion for summary judgment in 2014, arguing that the company’s sharing of anonymized video viewing data with third parties did not constitute a “knowing” disclosure of personally identifiable information, as required by the VPPA.  In April 2014, the court granted the motion as to information Hulu shared with metrics company ComScore, but denied it as to information shared with a social networking company.  In re Hulu Privacy Litig., No. 11-cv-3764 LB, 2014 WL 1724344 (N.D. Cal. Apr. 28, 2014) (observing that “[t]he statute does not require an actual name” and denying defendant summary judgment as to disclosures to third party of the user’s alleged identity, even though no “actual” name was transmitted).  The inquiry was fact-dependent, and the court held that the record contained fact issues concerning Hulu’s knowledge of what information was being transmitted.  The court held that, in appropriate circumstances, disclosing a user ID (rather than an actual name) along with video viewing information could constitute a violation of the VPPA.  The most recent development in the Hulu case is the court’s denial of the plaintiffs’ motion for class certification–without prejudice–on June 17, 2014.  The court held that, on the record before it, the plaintiffs had not proposed an ascertainable class.  Hulu currently has another motion for summary judgment pending, which is scheduled for hearing on February 26, 2015.

A recent unpublished federal decision in New Jersey relied on the Hulu court’s analysis with regard to the scope of information covered by the VPPA.  In re Nickelodeon Consumer Privacy Litig., No. 12-cv-7829, 2014 U.S. Dist. LEXIS 91286, at *39 (D.N.J. July 2, 2014).  Agreeing that the statute is triggered by disclosure of something “akin” to a name, the Nickelodeon court found that information disclosed to Google by Viacom did not rise to that level.  Specifically, the Nickelodeon plaintiffs had alleged that Viacom collected their gender, age range, and video materials requested and disclosed that information to Google for purposes of targeted advertising.  The court found that such information “does not link an identified person to a specific video choice” and, therefore, did not qualify as personally identifiable information within the meaning of the statute.  Accordingly, the court dismissed the claim.  Id. at *40, *46-47.[1]

Three other recent decisions have narrowed the field regarding what types of disclosure actually constitute “personally identifiable information” under the VPPA.  In Ellis v. Cartoon Network Inc., a plaintiff downloaded an app onto his Android device to watch cartoon video clips, after which the app allegedly transmitted his video-watching history and “Android ID” to a data analytics company without the plaintiff’s consent.  2014 U.S. Dist. LEXIS 143078 (N.D. Ga. Oct. 8, 2014).  The court dismissed the plaintiff’s VPPA claim, finding that an Android ID did not identify a particular person, and thus there was no violation of the VPPA.  Id. at *8-9.  Similarly, in Eichenberger v. ESPN, the court held that the information allegedly disclosed to a third party (the plaintiff’s Roku device serial number and viewing records) did not fall within the VPPA’s definition of personally identifiable information (“PII”).  No. 14-cv-0463 (W.D. Wash. Nov. 24, 2014).  The court further noted that while ESPN could be found liable for disclosing both “a unique identifier and a correlated look-up table” by which an individual could be identified as a particular person who watched particular videos, the plaintiff had not sufficiently supported his theory that Adobe, the third-party analytics company at issue, already had such a “look-up table.”  Finally, in Locklear v. Dow Jones & Co., the court dismissed the plaintiff’s claim that Dow Jones had distributed PII of consumers who used its Wall Street Journal Channel on Roku TV boxes to third parties, in violation of the VPPA.  No. 14-744 (N.D. Ga. Jan. 23, 2015).  The court rejected the plaintiff’s claims that third-party analytics providers could identify her based on Dow Jones’s disclosure of her Roku serial number and the video titles she watched.  In particular, the court deemed fatal the plaintiff’s admission that the third party had to incorporate information from ‘other sources’ in order to link her serial number to her; it concluded that the Roku serial number, without more, did not identify a particular person and did not constitute PII under the VPPA, and thus that no violation could be found.

Still another key aspect of the recent VPPA decisions is whether particular plaintiffs fall within the VPPA’s definition of “consumers.”  The VPPA defines “consumer” as a “renter, purchaser or subscriber of goods or services from a video tape service provider.”  18 U.S.C. § 2710(a)(1).  Defendants have contended in recent VPPA cases that plaintiffs cannot be subscribers, and therefore are not consumers, simply by visiting a website.  While courts seem to accept that visiting a website alone is insufficient, the threshold for qualifying as a subscriber is low.  For example, the Hulu court determined that the plain language of the statute did not require that plaintiffs pay for a company’s services to be considered subscribers.  In re Hulu Privacy Litig., 2012 WL 3282960 at *8 (“If Congress wanted to limit the word ‘subscriber’ to ‘paid subscriber,’ it would have done so.”).  It was sufficient that plaintiffs alleged that “they signed up for a Hulu account, became registered users, received a Hulu ID, established Hulu profiles, and used Hulu’s video streaming services.”  Id. at *7.  Likewise, in Ellis v. Cartoon Network, Inc., the court approved Judge Beeler’s analysis in Hulu and held that the plaintiff qualified as a subscriber, and accordingly as a consumer, because “[h]e downloaded the CN App and used it to watch video clips.  His Android ID and viewing history were transmitted to [the data analytics company].”  2014 U.S. Dist. LEXIS 143078, at *5-6.

The courts have also recently analyzed the reach of the VPPA’s “ordinary course of business” exemption.  The VPPA provides this exemption for disclosures made for “debt collection activities, order fulfillment, request processing, and transfer of ownership.”  18 U.S.C. § 2710(a)(2).  For instance, in Sterk v. Redbox, a district court granted summary judgment to Redbox, holding that its disclosure of consumer information to an outside party that provided customer support services was part of its ordinary course of business under the VPPA.  No. 11-1729, 2013 WL 4451223, at *5-6 (N.D. Ill. Aug. 16, 2013).  On appeal, the Seventh Circuit affirmed and held that Redbox’s actions fell within the VPPA’s exception for disclosures in the ordinary course of business–more precisely, disclosures incident to “request processing.”  Sterk v. Redbox Automated Retail, LLC, No. 13-3037, 2014 WL 5369416, at *2-3 (7th Cir. Oct. 23, 2014).

Finally, plaintiffs have filed a series of lawsuits in the past year claiming that various online streaming media providers–such as CNN, The Wall Street Journal, and Disney–violated the VPPA.  As of this writing, there have been no substantive orders in these cases.  See, e.g., Perry v. CNN, No. 14-1194 (N.D. Ill.); Robinson v. Disney, No. 14-cv-04146 (S.D.N.Y.); Austin-Spearman v. AMC, No. 14-cv-06840 (S.D.N.Y.).

            4.   ECPA Litigation and the “Contents of Communications”

Over the past year, several federal courts have weighed in on the scope of the ECPA, providing further color to the statute’s definition of the “contents of communications.”

Most notably, on May 8, 2014, the U.S. Court of Appeals for the Ninth Circuit affirmed a district court’s dismissal of two putative class actions against Facebook and social gaming company Zynga in consolidated cases for alleged violations of the SCA, the federal Wiretap Act, and the ECPA.  In re Zynga Privacy Litig., 750 F.3d 1098 (9th Cir. 2014).  In Zynga, when a user clicked on an advertisement or the Zynga game icon on Facebook, the user’s web browser sent an HTTP request for the requested online resource containing a “referer header,” which included the user’s Facebook ID and the address of the Facebook page the user was viewing at the time.  According to the plaintiffs, Zynga’s collection and transmission of this information to third-party advertisers violated the ECPA.  The Ninth Circuit rejected that argument, holding that neither Facebook nor Zynga disclosed the “contents” of a communication, as required by the ECPA, in disclosing this referer header information to third-party advertisers.

In so holding, the Ninth Circuit reviewed the plain meaning and history of ECPA and concluded that it distinguishes between disclosure of customer “record information,” such as name, address, and subscriber identity, which is permitted under the law, and disclosure of the “contents of communications,” or the “intended message conveyed by the communication,” which is not.  Zynga, 2014 WL 1814029 at *6-7.  The Ninth Circuit disagreed with the plaintiffs’ argument that a Facebook ID and/or information about the webpage a user was viewing constituted the “contents of communications” because such information could lead advertisers to learn other information about users.  Instead, the court concluded that the “referrer header information at issue here includes only basic identification and address information, not a search term or similar communication made by the user.”

Other federal courts have looked to Zynga for guidance in determining whether information constitutes the “contents of communications” under the ECPA.  For example, in July 2014, a New Jersey federal court dismissed six consolidated MDL class actions alleging that Viacom’s and Google’s practice of installing cookies on personal computers that were used by children to access three Nickelodeon websites violated several federal and state laws, including the Wiretap Act.  In re Nickelodeon Consumer Privacy Litig., MDL No. 2443, 2014 WL 3012873 (D.N.J. July 2, 2014) (see supra for discussion of VPPA claim in Nickelodeon case).  In dismissing the Wiretap Act claim, the court held in part that the cookies that were allegedly intercepted did not constitute the “contents of communications.”  Id. at *14.  Citing Zynga, the court found that “contents” are defined as “information the user intended to communicate, such as the spoken words of a telephone call.”  Id.  Because personal information that is “automatically generated by the communication,” such as an IP address or a URL, has “less in common with ‘the spoken words of a telephone call’” than with the telephone number dialed to initiate the call, the cookies allegedly intercepted were “more akin to ‘identification and address information.’”  Id. at *15 (quoting In re Zynga Privacy Litig., 750 F.3d 1098 (9th Cir. 2014)).

Additionally, in August 2014, Google won dismissal of a putative class action complaint alleging that Google violated ECPA, among other laws, by sending users’ contact information to developers when they used Google Wallet to make purchases.  Svenson v. Google Inc., No. 13-CV-04080-BLF, 2014 WL 3962820 (N.D. Cal. Aug. 12, 2014).  In dismissing the ECPA claim, the court noted that it did not “read Zynga so narrowly to mean that only automatically generated data may constitute record information,” finding that the information at issue in the case–namely, the user’s name, email address, Google account name, home city and state, zip code, and in some instances, telephone number–is “the type of information that the Ninth Circuit recognized as record information in Zynga.”  Id. at *9.

            5.   California’s Song-Beverly Credit Card Act and Point-of-Service Data Collection

Since the California Supreme Court’s landmark 2013 decision in the Krescent case, 56 Cal. 4th 128 (2013), courts have continued to weigh in on the scope of California’s Song-Beverly Credit Card Act of 1971 (“Song-Beverly”), Cal. Civ. Code §§ 1747, et seq., which prohibits merchants from requesting or requiring a customer’s personal identification information as a condition of accepting a credit card payment.

The court in Krescent held that Song-Beverly “does not apply to online purchases in which the product is downloaded electronically.”  56 Cal. 4th at 133.  Krescent was a significant win for online retailers because–limited statutory exceptions notwithstanding, see Cal. Civ. Code § 1747.08(c)(3)(A)-(C)–the prohibitory language of Song-Beverly sweeps broadly, and those found in violation face potentially ruinous liability: a civil penalty of up to $250 for the first violation and up to $1,000 for each subsequent violation.  Id. § 1747.08(e).  The court in Krescent declined to address Song-Beverly’s applicability to online transactions in general; the holding is expressly limited to purchases of electronically downloadable products.  See Krescent, 56 Cal. 4th at 143.  That said, the court based its decision heavily on what it identified as the California legislature’s primary intent when drafting the statute: to protect consumer privacy and prevent fraud.  Id. at 139-41.

While Krescent’s holding is fairly narrow, the court’s concerns and reasoning about credit card fraud are hardly unique to electronically downloadable products.  Indeed, since Krescent was decided, California courts have tended to place fraud prevention practices beyond Song-Beverly’s reach.  See, e.g., Flores v. Chevron U.S.A. Inc., 217 Cal. App. 4th 337, 340 (2013) (granting summary judgment because requiring California customers to enter ZIP codes in pay-at-the-pump gas station transactions in locations with a high risk of fraud constituted a “special purpose” under § 1747.08(c)(4) of the Act).  Moreover, just a few months after Krescent, a California federal district court turned to the question that the California Supreme Court left open.  In Ambers v. Buy.com, Inc., No. 13-cv-0196, 2013 WL 1944430 (C.D. Cal. Apr. 30, 2013), the court held that Song-Beverly does not apply to the online sales of shipped goods because a shipping address–the piece of additional information which the plaintiff conceded the retailer was permitted to collect–was not “equivalent to the ‘brick and mortar’ retailer’s ability to ask for a photo identification card or another ‘reasonable form of positive identification’ as ‘a condition to accepting the credit card’ under Section 1747.08(d).”  Id. at *7.

Applying Krescent, another California federal court held that email addresses constitute “personal identification information” under Song-Beverly, prohibiting offline retailers from collecting email addresses in connection with the completion of credit card transactions.  Capp v. Nordstrom, Inc., No. 13-cv-660 MCE AC, 2013 WL 5739102 (E.D. Cal. Oct. 22, 2013).  In Capp, the court rejected the defendant’s argument that the California legislature could not have intended to include email addresses as “personal identification information” because the passage of Song-Beverly predated the use of email and e-receipts in consumer transactions.  Id. at *7-8.  The court concluded that the basis for the court’s ruling in Krescent was the unavailability of safeguards against fraud in online transactions–not the unforeseeable nature of online transaction technology generally.  Id.

Interestingly, the Ninth Circuit recently affirmed the dismissal of a putative class action alleging that Redbox Automated Retail LLC collects customers’ ZIP codes at Redbox kiosks in violation of the Song-Beverly Act, but it rejected the district court’s theory that Redbox was not liable because the California legislature could not have intended the statute to apply to automated kiosks due to the potential for fraud in kiosk transactions.  Sinibaldi v. Redbox Automated Retail, LLC, 754 F.3d 703, 705 (9th Cir. 2014).  Instead, the court held that Redbox uses credit card information to secure potential future payments, conduct that falls within a statutory exception to Song-Beverly for transactions where the credit card is being used as a deposit to secure payment “in the event of default, loss, damage or similar occurrence” (Cal. Civ. Code § 1747.08(c)(1)).  Id. at 707.  It remains to be seen whether this novel holding will apply beyond the very narrow subset of businesses that engage in similar rental-type transactions.

California’s legislature has considered action in response to Krescent, Ambers, and the other cases described above.  The California Senate in January 2014 passed Senate Bill 383, which would expand Song-Beverly to apply to online transactions for downloadable goods, but the bill is stalled in committee and is “unlikely to move forward this year,” according to a representative in the office of the bill’s sponsor.[2]

Certainty about Song-Beverly’s reach will come only when binding decisions are issued.  But such decisions may be especially elusive given the increasing tendency to settle these cases, as recent six-figure settlements by entities such as Kohl’s Corp. and Ann Taylor Inc. demonstrate.  Whittenburg v. Kohl’s Corp., No. 3:11-cv-02320 (N.D. Cal.); Foos v. Ann Inc., No. 3:11-cv-02794 (S.D. Cal.).

            6.   TCPA Litigation

In the past two years, the number of lawsuits alleging violations of the Telephone Consumer Protection Act (“TCPA”), 47 U.S.C. § 227 et seq., has exploded.  The likely draw for plaintiffs is the TCPA’s authorization of $500 to $1,500 per violation in statutory damages, which can be aggregated in class claims.  This increased pursuit of TCPA claims has led to several large settlements, including a 2014 settlement in which Capital One Financial Corp. and three collection agencies agreed to collectively settle a putative class action suit for $75.5 million–the largest settlement to date under the TCPA.[3]  As companies continue to be targets for class action suits alleging TCPA violations, courts’ varying interpretations of the statute are particularly important.
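
To see why aggregation is the draw, consider the arithmetic on a hypothetical campaign.  The short Python sketch below is purely illustrative; the campaign size is an assumption, not a figure from any actual case:

    # Hypothetical TCPA exposure: $500 per violation in statutory
    # damages, up to $1,500 per violation if willful or knowing.
    calls = 100_000  # assumed number of autodialed calls or texts

    print(f"Baseline exposure: ${calls * 500:,}")    # $50,000,000
    print(f"Willful exposure:  ${calls * 1_500:,}")  # $150,000,000

Even a relatively modest campaign thus generates class-wide exposure in the tens of millions of dollars, which helps explain settlements on the scale of Capital One’s.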

In recent years, courts and the Federal Communications Commission (“FCC”) have expanded the scope of liability under the TCPA.  In May 2013, the FCC issued a declaratory ruling that sellers using third-party telemarketers can be vicariously liable for third-party violations of the TCPA under principles of agency.  See Joint Petition Filed by DISH Network, LLC, for Declaratory Ruling Concerning the Telephone Consumer Protection Act (TCPA) Rules, Declaratory Ruling, FCC 13-54, 2013 WL 1934349 (May 9, 2013).  The Ninth Circuit expanded upon the FCC’s ruling in Gomez v. Campbell-Ewald Co., 768 F.3d 871 (9th Cir. 2014), holding that third parties, and not just merchants, can be vicariously liable for violations of the TCPA.  See also Thomas v. Taco Bell Corp., 2014 U.S. App. LEXIS 12547 (9th Cir. July 2, 2014).  Companies should also be aware of the potential for direct liability even when messages are distributed by third parties.  In Palm Beach Golf Center-Boca, Inc. v. John G. Sarris, D.D.S., P.A., the Eleventh Circuit found a genuine dispute as to whether a company could be directly liable for a fax sent on its behalf but distributed by a third party.  2014 U.S. App. LEXIS 20870 (11th Cir. 2014).  The court reasoned that the TCPA provides for direct liability for an entity on whose behalf goods or services were promoted by unsolicited fax advertisements, even where the fax itself was sent by a third party.  Id. at *17.

Consent has been another area of focus for TCPA litigation.  Effective October 2013, telemarketers must obtain express written consent before placing artificial or prerecorded telemarketing calls to a residential phone line or wireless number, sending text messages, or calling a wireless number using an automatic telephone dialing system.  See In re Rules and Regulations Implementing the Telephone Consumer Protection Act of 1991, CG Docket No. 02-278, Report and Order, FCC 12-21, ¶ 4 (February 15, 2012).  The Eleventh Circuit has held that a district court did not have the authority to reject FCC rulings.  See Mais v. Gulf Coast Collection Bureau, Inc., 768 F.3d 1110 (11th Cir. 2014).  Specifically, the court held that the FCC’s ruling–that autodialed and prerecorded message calls to wireless numbers provided by the called party to a creditor in connection with an existing debt are permissible as calls made with the “prior express consent” of the called party–continues to control.  Id. at 1118.  Though express consent can be obtained through intermediaries, companies relying on intermediaries should confirm that prior express written consent was in fact obtained, as they can still be liable under the TCPA if it was not.  See In the Matter of Groupme, Inc./Skype Commc’ns S.A.R.L Petition for Expedited Declaratory Ruling Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 29 F.C.C. Rcd. 3442 (March 27, 2014).  Express consent may become a powerful tool in defeating TCPA claims.

Courts have also continued to debate whether lack of consent is an element of TCPA claims or an affirmative defense–and consequently, who has the burden of proving that customers have or have not consented to receive certain calls, texts, or faxes.  In 2012, the Ninth Circuit suggested in dicta that lack of consent is an element of a TCPA claim.  See Meyer v. Portfolio Recovery Assocs., LLC, 707 F.3d 1036 (9th Cir. 2012).  Some courts have relied on this to hold that plaintiffs have the burden of proving non-consent.  See, e.g., Stemple v. QC Holdings, Inc., 2014 WL 4409817, at *6-7 (S.D. Cal. Sept. 5, 2014); Sepehry-Fard v. MB Fin. Servs., 2014 WL 2191994, at *2 (N.D. Cal. May 23, 2014).  Others have stated that “prior express consent is not an element of a TCPA plaintiff’s prima facie case, but rather is an affirmative defense for which the defendant bears the proof.”  Sailola v. Mun. Servs. Bureau, 2014 WL 3389395, at *7 (D. Haw. July 9, 2014); see also Heinrichs v. Wells Fargo Bank, N.A., 2014 U.S. Dist. LEXIS 29910 (N.D. Cal. 2014) (distinguishing Meyer on the grounds that Meyer “did not decide whether lack of consent must be affirmatively pled to survive a Rule 12(b)(6) motion . . .”).  Additionally, a number of courts still consider lack of consent an affirmative defense and thus place the burden of establishing it on the defendant.  See Mais, 768 F.3d at 1126 (remanding case with instructions to enter summary judgment in favor of defendant’s “affirmative defense” of prior express consent); see also Crawford v. Target Corp., 2014 U.S. Dist. LEXIS 159203, *7 n.3 (N.D. Tex. Nov. 10, 2014) (“The Court is unpersuaded by Defendant’s argument that lack of consent is an element of the claim that plaintiff must assert.”); Paldo Sign & Display Co. v. Wagener Equities, Inc., 2014 U.S. Dist. LEXIS 123111, *21-22 (N.D. Ill. 2014).  Companies should remain informed as courts continue to grapple with these issues.  A requirement that plaintiffs prove lack of consent could substantially decrease the likelihood of TCPA class actions and, therefore, companies’ potential exposure to TCPA liability.

Another trend in TCPA case law has been the general consensus that customers have a right to revoke consent to be contacted by autodialing systems.  The Eleventh and Eighth Circuits have followed Gager v. Dell Financial Services, 727 F.3d 265 (3d Cir. 2013), in which the Third Circuit recognized a right of revocation for consumers who no longer want to be contacted by autodialing systems.  Osorio v. State Farm Bank, F.S.B., 746 F.3d 1242, 1255 (11th Cir. 2014); Brenner v. Am. Educ. Servs., 575 F. App’x 703 (8th Cir. 2014).  Companies should be sure to recognize and honor customers’ revocation of consent to be contacted.

Finally, in 2014, courts grappled with the interpretation of “capacity” for automatic telephone dialing systems (“ATDS”)–defined as equipment with the capacity (a) to store or produce telephone numbers to be called, using a random or sequential number generator, and (b) to dial such numbers.  47 U.S.C. § 227(a)(1)(A)-(B).  Most courts have held that a device is an ATDS only if it has the present capacity to generate random or sequential phone numbers, not merely the potential capacity to do so.  See Hunt v. 21st Mortg. Corp., 2013 U.S. Dist. LEXIS 132574 (N.D. Ala. Sept. 17, 2013); Gragg v. Orange Cab Co., 995 F. Supp. 2d 1189 (W.D. Wash. Feb. 7, 2014); Dominguez v. Yahoo!, Inc., 8 F. Supp. 3d 637 (E.D. Pa. 2014).  However, some courts have suggested that the potential capacity to generate numbers may be relevant to the ATDS inquiry.  See Sherman v. Yahoo! Inc., 997 F. Supp. 2d 1129 (S.D. Cal. 2014).  Companies should also be aware of possible liability for devices that can store and dial numbers, as at least one court has found that a predictive dialer constitutes an ATDS regardless of whether the system is capable of random or sequential number generation.  See Davis v. Diversified Consultants, Inc., 2014 U.S. Dist. LEXIS 87867 (D. Mass. June 27, 2014).

II.   Regulatory and Policy Developments

      A.   FTC Enforcement Trends

            1.   Cybersecurity, Data Breaches, and Legal Challenges to the FTC’s Authority

Having pursued more than 50 data security cases since 2000–and with almost half of those cases brought since 2010–the FTC has positioned itself as the de facto federal data-security regulator (despite the continuing lack of a clear congressional directive to fulfill this role).  In the past year, the FTC continued its aggressive pursuit of consent agreements related to cybersecurity, data breaches, and other Internet- and mobile-related practices.  These consent agreements and settlements are detailed in Section II.A.3.

Over the past three years, two companies have decided to test the FTC’s authority in this area in closely watched cases.  In 2014, a New Jersey federal court issued the first opinion by any court on whether the FTC has the authority to regulate in the data-security arena pursuant to Section 5 of the FTC Act.  In June 2012, the FTC filed suit against Wyndham Worldwide Corporation, a global hospitality company, alleging that (1) the breach of its franchisees’ computer systems, giving intruders access to Wyndham customers’ personal and financial information, constituted unfair business practices, and (2) Wyndham made deceptive representations to consumers that it employed reasonable and appropriate security measures.

Wyndham moved to dismiss the complaint, raising challenges to the FTC’s authority on two grounds.  First, Wyndham argued that Congress’s passage of various laws that touch on data security (including the Gramm-Leach-Bliley Act and the Children’s Online Privacy Protection Act (“COPPA”)) has effectively limited the FTC’s authority to regulate data security issues.  The court rejected this challenge, holding instead that “the FTC’s unfairness authority over data security can coexist with the existing data-security regulatory scheme.”  Second, Wyndham asserted that the FTC had failed to promulgate sufficiently clear regulations in violation of the due process clause.  The court rejected this challenge as well, finding that the test established under Section 5(n) of the FTC Act, as well as the host of publicly available prior FTC complaints and consent orders, collectively provide actors with sufficient notice of what constitutes noncompliant activity.  The court’s order is currently being challenged in an interlocutory appeal before the Third Circuit, and a decision is expected in 2015.

Another company joined the fight with a more narrowly tailored challenge to the FTC’s data-security authority in November 2013.  LabMD, a cancer-screening medical laboratory, moved to dismiss an administrative complaint that the FTC filed against it in August 2013 alleging that it lacked appropriate data security and unfairly exposed the private health and personal data of more than 9,000 consumers.  LabMD argued that the “plain language [of Section 5 of the FTC Act] does not authorize patient-information data-security regulation,” and that only the U.S. Department of Health and Human Services (“HHS”) is empowered to regulate patient-information data-security practices within the healthcare sector.  The Commission–which has the authority to resolve such motions filed in connection with administrative proceedings–disagreed, finding instead that Congress had delegated it “broad authority . . . to determine what practices were unfair, rather than enumerating the particular practices to which [the term ‘unfair’ in Section 5] was intended to apply.”

LabMD further argued that even if the FTC shares joint regulatory authority with the HHS over the healthcare sector, the FTC’s failure to publish data-security regulations, guidance, or standards explaining what is forbidden or required by Section 5 nevertheless deprives LabMD and similarly situated entities of “constitutionally required fair notice.”  The Commission likewise rejected this argument, stating that “such complex questions relating to data-security practices in an online environment are particularly well-suited to case-by-case development in administrative adjudications or enforcement proceedings.”  Nevertheless, the FTC’s administrative action against LabMD was delayed in June 2014, after a letter from a Republican-led House investigative committee surfaced claiming that crucial information in the FTC’s investigation provided by Tiversa, Inc.–a cybersecurity firm and a key player in the agency’s case–was incomplete and inaccurate.  The parties are currently awaiting the testimony of Rick Wallace, a former Tiversa employee, who was granted immunity by the Attorney General in November 2014.

In addition to raising this aggressive defense in an administrative context, LabMD has also pursued a parallel strategy in federal court: in May 2014, the U.S. District Court for the Northern District of Georgia denied a motion for a preliminary injunction filed by LabMD seeking to stay the FTC action.  The court found that it lacked jurisdiction to enjoin the ongoing proceedings of a federal agency.  LabMD appealed this decision to the Eleventh Circuit, arguing that the FTC’s actions are already subject to judicial review because LabMD’s constitutional claims need not wait until the agency takes a final action, and because the Commission’s denial of LabMD’s motion to dismiss solidified the FTC’s position that its authority extends to the regulation of medical data privacy.  In a decision issued on January 20, 2015, the Eleventh Circuit rejected these arguments, ruling that federal courts do not have jurisdiction to hear LabMD’s claim until the administrative proceeding concludes.  The court reasoned that “[b]ecause we hold that the FTC’s Order denying LabMD’s motion to dismiss was not a ‘final agency action,’ as is required of claims made under the [Administrative Procedure Act],” the district court properly dismissed LabMD’s claims.

Although we will continue to watch these cases with interest, one thing can be said with certainty: these legal challenges to the FTC’s regulatory power over data-security matters do not appear to have inhibited the FTC’s vigor in bringing enforcement actions in this realm.  In 2014, the FTC brought eight additional data security-related enforcement actions–all of which have resulted in consent orders.

            2.   The U.S.-EU Safe Harbor

On January 21, 2014, the FTC announced that it had settled with twelve U.S. companies over noncompliance with international privacy frameworks.  Two other companies were added to this list in February and May 2014.  After a public comment period, the FTC approved final settlement orders on June 25, 2014.

The companies had represented that they abided by the U.S.-EU Safe Harbor framework (and, in three cases, also the U.S.-Swiss Safe Harbor framework) by displaying certification signage or statements in their privacy policies.  The FTC alleged that in reality, the companies did not comply with these data protection frameworks.

The U.S.-EU Safe Harbor enables U.S. companies to transfer consumer data from the European Union (“EU”) to the United States in compliance with EU law.  To participate, a company must comply with the principles required to meet the EU’s adequacy standard: notice, choice, onward transfer, security, data integrity, access, and enforcement.  After opting in, a company must recertify every twelve months.  It can either perform a self-assessment to verify that it complies with the principles or hire a third party to conduct this assessment.  In this series of cases, the FTC focused on companies that allegedly allowed their self-certification to lapse while still asserting through website statements and privacy policies that their certifications were current.
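
Because the enforcement theory in these cases turned on lapsed self-certifications, even a simple internal control can help.  The Python sketch below is a minimal illustration under stated assumptions: the function name, the dates, and the use of a 365-day interval to approximate the twelve-month cycle are our own, not anything prescribed by the framework:

    from datetime import date, timedelta

    # Assumed twelve-month recertification cycle; the lapse in these
    # FTC cases was continuing to claim certification after the cycle
    # had run without recertifying.
    RECERT_INTERVAL = timedelta(days=365)

    def certification_lapsed(last_certified: date, today: date) -> bool:
        return today - last_certified > RECERT_INTERVAL

    if certification_lapsed(date(2013, 5, 1), date(2014, 6, 25)):
        print("Update or remove Safe Harbor claims in the privacy policy.")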

The fourteen companies that settled with the FTC represent a cross-section of industries, including retail, laboratory science, data brokering, debt collection, information security, online gaming, and professional sports (including three NFL teams–the Atlanta Falcons, Denver Broncos, and Tennessee Titans).  Under the settlements, the companies agreed to cease misrepresenting the extent of their participation in privacy or data security programs sponsored by the government or any other self-regulatory or standard-setting organization.  This wave of consent decrees may be just the start of an increased focus on the Safe Harbor and the self-certification process, driven at least in part by heightened European scrutiny of U.S. data transfer practices following Edward Snowden’s surveillance revelations.

The FTC has also directed attention to third-party privacy certifications.  On November 17, 2014, the FTC announced a settlement with True Ultimate Standards Everywhere, Inc. (“TRUSTe”).  TRUSTe is a leading provider of privacy certifications for online businesses.  TRUSTe provides certification seals indicating that an online business complies with privacy standards such as the U.S.-EU Safe Harbor Framework, COPPA, and TRUSTe-specific programs.  The FTC’s complaint alleged that TRUSTe represented that it conducted annual recertification of businesses displaying its privacy seals but in fact failed to conduct these recertification examinations in over 1,000 instances.  The complaint also alleged that TRUSTe–which converted from a non-profit to a for-profit entity in 2008–failed to require businesses to update website and privacy policy language that referred to TRUSTe as a non-profit entity.  Under the consent order, TRUSTe will be required to refrain from misrepresenting its certification process or timeline, as well as its corporate status.  TRUSTe will also be required to pay $200,000 and to provide increased reporting and records to the FTC in relation to its COPPA certification activities.

            3.   High-Profile FTC Consent Decrees

                    a.   Consent decrees regarding faulty data security practices

Much of the FTC’s work in the data security arena involves policing companies’ adherence to advertised security policies and practices via consent decrees and settlements.  For example, in March 2014, Fandango and Credit Karma settled with the FTC over charges that the companies’ apps had placed consumers’ personal data at risk, in contravention of the companies’ security promises, by disabling SSL certificate validation.[4]  According to the FTC, this left the apps open to interception of data by third parties, particularly when users were connected to a public Wi-Fi network.[5]  These settlements require Fandango and Credit Karma to establish comprehensive security programs and consent to biennial privacy audits for the next twenty years.  The Fandango and Credit Karma settlements are indicative of the settlement conditions the FTC routinely seeks (and obtains) in data security consent decrees.  Indeed, several other settlements in the past year include nearly identical terms.  For example, recent settlements with Accretive Health,[6] Genelink,[7] and GMR Transcription Services, Inc.[8] all include requirements that the companies adopt comprehensive information security programs and undergo biennial monitoring for the next twenty years.
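
For readers who want to see what “disabling SSL certificate validation” means in practice, the Python sketch below contrasts a client configured to accept any certificate with one that validates the certificate chain and hostname.  The apps at issue were mobile apps, so this is an illustration of the concept, not the companies’ actual code:

    import ssl
    import urllib.request

    # Dangerous: accepts any certificate, so an attacker on the same
    # public Wi-Fi network can intercept the "secure" connection.
    unsafe_ctx = ssl.create_default_context()
    unsafe_ctx.check_hostname = False
    unsafe_ctx.verify_mode = ssl.CERT_NONE

    # Safe default: validates the certificate chain and the hostname.
    safe_ctx = ssl.create_default_context()

    urllib.request.urlopen("https://example.com", context=safe_ctx)

A client built with the unsafe context will complete a TLS handshake with a man-in-the-middle presenting a self-signed certificate, which is precisely the interception risk the FTC described.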

A recent high-profile decision by the FTC not to sue Verizon, meanwhile, offers some insights into steps that companies can take to minimize the likelihood of this type of intrusive and far-reaching consent decree.  The FTC was investigating Verizon’s use of an outdated encryption method as the default security setting on Internet routers that Verizon shipped to customers.[9]  The practice allegedly made Verizon customers vulnerable to hackers.[10]  After investigation, however, the FTC declined to bring a complaint, citing factors including “Verizon’s overall data security practices related to its routers, along with efforts by Verizon to mitigate the risk to its customers’ information.”[11]  In addition to having relatively robust data security policies, Verizon aggressively responded to the router issue by resetting all new routers with a more robust security setting and implementing an outreach campaign to all customers who were using the outdated security standard.[12]  Notably, the FTC’s letter emphasized that “data security is an ongoing process” and that “what constitutes reasonable security changes over time as new risks emerge and new tools become available to address them.”[13]  Though the full import of the FTC’s decision not to bring an action against Verizon has yet to be determined, the letter at least affirms that the FTC will consider a company’s overall data security practices and responsiveness in light of a quickly evolving threat landscape.

                    b.   Consent decrees regarding deceptive practices in collection of PII

The FTC also continued its crackdown on deceptive practices related to the use of PII throughout the past year, particularly by technology companies operating web and mobile applications.  Snapchat, the popular communications app, settled charges that it misled consumers about exactly how much PII it was collecting, as well as about users’ ability to store and share messages that Snapchat claimed were only temporary and would disappear.[14]  In addition, Snapchat’s failure to secure certain PII resulted in the release of nearly five million user names and phone numbers in a serious data breach.  As part of the settlement, Snapchat is subject to ongoing privacy monitoring for the next twenty years.  The FTC made clear that it focused on Snapchat in part because the company’s business model was built on promises of privacy.  According to the FTC, “If a company markets privacy and security as key selling points in pitching its service to consumers, it is critical that it keep those promises….  Any company that makes misrepresentations to consumers about its privacy and security practices risks FTC action.”[15]

In another app settlement, the FTC settled with mobile app developer Goldenshores Technologies, LLC (“Goldenshores”) over allegations that its popular “Brightest Flashlight Free” app collected far more personal information than disclosed.[16]  In fact, the app collected precise geolocation information, along with persistent device identifiers, and shared that information with third parties, including advertising networks.  Notably, the app began collecting and sending information to third parties even before the user had accepted the deficient terms in the end user license agreement.  The settlement requires Goldenshores to provide a “just-in-time” disclosure that fully informs consumers when, how, and why their geolocation information is being collected, used, and shared, and to obtain affirmative opt-in consent from consumers prior to collection.
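
The “just-in-time” pattern the order describes is straightforward to express in code.  The sketch below is a schematic Python illustration under stated assumptions: the User class and its methods are hypothetical stand-ins, not any real mobile SDK:

    class User:
        """Hypothetical stand-in for a mobile user session."""
        def __init__(self, consents: bool):
            self.consents = consents

        def prompt_opt_in(self, disclosure: str) -> bool:
            print(disclosure)
            return self.consents  # simulates the user's Allow/Deny choice

        def read_location(self):
            return (34.05, -118.25)  # placeholder coordinates

    def request_geolocation(user: User):
        # Just-in-time disclosure: explain when, how, and why location
        # data is collected, used, and shared -- before collecting it.
        disclosure = ("This app collects your precise location and shares "
                      "it with third-party ad networks. Allow?")
        if not user.prompt_opt_in(disclosure):  # affirmative opt-in required
            return None                         # no silent collection
        return user.read_location()

    print(request_geolocation(User(consents=False)))  # -> None

The key design point, and the one Goldenshores allegedly violated, is that no collection occurs on any code path before the disclosure is shown and consent is affirmatively given.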

Finally, in a parallel set of actions against both the entity and its principal, the FTC entered into a proposed consent order with PaymentsMD, LLC (“PaymentsMD”) and Michael Hughes (former CEO, sole employee, and partial owner of PaymentsMD).[17]  PaymentsMD obtained consumer authorization to collect sensitive health information for one purpose–to track medical bills–but in fact used that authorization to collect other sensitive health information, including treatment information, from various third parties.  PaymentsMD then used that information to create a comprehensive “Patient Health Report” for each consumer.  The FTC has proposed enjoining Hughes and PaymentsMD from continuing this activity and requiring increased disclosures to consumers regarding exactly what information will be collected and how it will be used.

                    c.   Consent decrees regarding app purchases by children

In the past year, the FTC also reached high-profile consent agreements with technology companies over accusations that the companies unfairly charged consumers for in-app purchases made within applications downloaded from mobile application stores.[18]  The FTC alleged that these companies violated Section 5 of the FTC Act by failing adequately to notify parent account holders that entering a password to install an application or to approve an in-app purchase would open a window of fifteen minutes or more during which a user could make subsequent in-app purchases without further authorization.[19]  This led to instances in which children made purchases within applications without parental approval.  As part of the settlement agreements, the companies must provide refunds to users who incurred such unauthorized or accidental charges.  Furthermore, the companies must obtain express consent from customers before billing them for in-app purchases.
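
The mechanism at issue is easy to see in a sketch.  The Python below is a schematic reconstruction of the alleged behavior, with an assumed fifteen-minute window; it is not drawn from any company’s actual billing code:

    import time

    AUTH_WINDOW_SECONDS = 15 * 60  # the alleged minimum window
    last_auth_time = None

    def record_password_entry() -> None:
        # One successful password entry opens the window.
        global last_auth_time
        last_auth_time = time.monotonic()

    def purchase_allowed_without_reauth() -> bool:
        # The Section 5 theory: parents were not told that purchases
        # made inside this window required no re-authorization.
        return (last_auth_time is not None and
                time.monotonic() - last_auth_time < AUTH_WINDOW_SECONDS)

    record_password_entry()
    print(purchase_allowed_without_reauth())  # True for the next 15 minutes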

                    d.   Settlements over mobile cramming

The FTC also reached settlement agreements with several online marketing and advertising companies over allegations that they engaged in a pattern of unfair and deceptive advertising by sending unwanted text messages to millions of consumers.  The FTC alleged that these companies sent text messages to consumers with offers for supposedly free merchandise as part of a scheme to collect and sell consumer information, cram unwanted charges onto their mobile bills, and drive them to paid subscriptions for affiliate services.  As part of the agreements, the accused companies agreed to pay over $9.2 million in damages and to stop engaging in similar unlawful and deceptive business practices in the future.  In related settlements, the FTC reached agreements with two telecommunications providers over allegations that the companies unlawfully charged their customers for unwanted third-party mobile services.  The FTC noted that the companies did not take steps to fix the issue despite a large number of customer complaints about unauthorized third-party charges, and instead buried the charges deep within customers’ phone bills.  In addition to paying fines to the FTC and state attorneys general, the companies agreed to provide refunds to their customers for the unauthorized charges.

      B.   The FTC’s Revised COPPA Rule

In recent years the FTC has maintained an aggressive focus on children’s privacy–perhaps most notably by revising the COPPA Rule in 2013 to reflect changes in technology.  The COPPA Rule was originally mandated under the Children’s Online Privacy Protection Act of 1998, and it requires operators of websites or online services that are directed at children under 13, or that have actual knowledge that they are collecting personal information from children under 13, to notify parents and obtain their verifiable consent before collecting, using, or disclosing such information.  The COPPA Rule also requires operators who fall within these parameters to take steps to protect and secure any personal information that they collect from children under 13.  After more than two years of FTC review, and following approval by the Commission in December 2012, a revised version of the COPPA Rule went into effect on July 1, 2013.  The amendments give parents greater control over the online collection of their children’s personal information.

Under this revised COPPA Rule, the term “website or online services” is now broadly defined to include: standard websites; mobile apps that send or receive information online; Internet-enabled gaming platforms; plug-ins; advertising networks; Internet-enabled location-based services; and voice-over Internet protocol services.

The term “personal information” now includes: full name; home or other physical address including street name and city or town; online contact information like an email address or other identifier that permits someone to contact a person directly–for example, an IM identifier, VoIP identifier, or video chat identifier; screen name or user name where it functions as online contact information; telephone number; Social Security number; a persistent identifier that can be used to recognize a user over time and across different sites, including a cookie number, an IP address, a processor or device serial number, or a unique device identifier; a photo, video, or audio file containing a child’s image or voice; geolocation information sufficient to identify a street name and city or town; or other information about the child or parent that is collected from the child and is combined with one of these identifiers.

Additionally, operators are required to post a “privacy policy” that clearly and comprehensively describes how personal information is collected from children under 13, including by any affiliated collectors (for example, via website plug-ins or ad networks of which the operator is a member).  This closes a loophole that existed under the previous iteration of the COPPA Rule.  The privacy policy must include a list of all operators collecting this information as well as a description of parental rights, and operators must implement a situationally reasonable “verification” method for obtaining affirmative parental consent.  The COPPA Rule Amendments added several new methods that operators may use to obtain parental consent, including: electronic scans of signed parental consent forms; video-conferencing; use of government-issued identification; and alternative payment systems, such as debit cards and electronic payment systems (provided that they meet certain criteria).  In December 2013, the FTC approved knowledge-based identification as an additional verifiable parental consent method, provided that the process uses dynamic, multiple-choice questions whose answers would be difficult for a child to guess.

Once an operator collects information from children under 13, the revised COPPA Rule imposes heightened ongoing duties to adopt reasonable procedures for data retention and security–including limitations on when, and to whom, this information can subsequently be released.

The FTC has also conferred “safe harbor” status on seven designated organizations, empowering them to create comprehensive self-compliance programs for their own members.  Companies that voluntarily become members of one of these participating organizations are generally subject to intra-organizational review and disciplinary procedures, in lieu of formal FTC investigation and law enforcement.  The COPPA Rule safe harbor programs currently recognized by the FTC include: iKeepSafe; kidSAFE; Aristotle International, Inc.; Children’s Advertising Review Unit of the Council of Better Business Bureaus; ESRB Privacy Certified; PRIVO; and TRUSTe.

The FTC initially suspended enforcement of these 2013 revisions to allow companies time to develop and deploy conforming policies–but this grace period ended in September 2014, when online review site Yelp, Inc., and mobile app developer TinyCo, Inc., separately agreed to settle charges that they improperly collected children’s information in violation of the COPPA Rule.[20]  Under the terms of these respective settlements, Yelp agreed to pay a $450,000 civil penalty, TinyCo agreed to pay a $300,000 penalty, and both companies agreed to submit compliance reports to the FTC in 2015 outlining revamped internal COPPA Rule compliance programs.  Most recently, on December 17, 2014, the FTC sent a letter to BabyBus, a China-based developer of mobile applications directed to children, warning that the company may be in violation of the revised COPPA Rule because it appears to collect precise geolocation information about its users without obtaining parental consent beforehand.

The 2013 revisions to the COPPA Rule–and the FTC’s aggressive enforcement of these provisions in late 2014–suggest that this is likely to be an area of continuing FTC focus for the foreseeable future.  Accordingly, businesses should take reasonable precautions to ensure that their data collection and storage policies are fully in compliance with the revised COPPA Rule.

      C.   FCC Guidance and Amendments to the TCPA

In October 2013, a report and order by the FCC modifying the implementation rules and regulations of the TCPA went into effect.  See Rules and Regulations Implementing the Telephone Consumer Protection Act of 1991, CG Docket No. 02-278, Report and Order, 27 FCC Rcd. 1830 (2012) (hereinafter the “FCC Guidance”).  The modifications include requiring prior express written consent for telemarketing calls to wireless numbers and residential lines and eliminating the established business relationship exemption for telemarketing calls to residential lines.  Id. at 1831, ¶ 2.[21]  The FCC stated that the changes were made to offer greater protections to consumers in the privacy arena and to maximize consistency with the analogous rules of the FTC.  Id.  Over the past year, these rules have led to an increase in TCPA litigation.  See Section I.B.6.

Along with the increase in TCPA litigation, a related development is the increasing number of entities petitioning the FCC for rulings interpreting various provisions of the TCPA.  There are currently over 20 petitions pending before the FCC asking the Commission to clarify the applicability of the TCPA to issues such as: (1) the definition of the called party as the intended recipient of a call;[22] (2) the delivery of voicemails directly to users;[23] (3) the revocation of prior express consent for non-telemarketing calls;[24] (4) the definition of an automatic telephone dialing system;[25] (5) vicarious liability for individuals who aid telemarketers;[26] (6) liability for calls to reassigned cell phone numbers;[27] (7) liability for social network text-messaging systems;[28] (8) liability for automatic text messages generated in response to user requests;[29] (9) the requirement of prior express consent for notifications to users affected by data breaches and suspicious transactions;[30] and (10) the implementation of call blocking technology.[31]  These open petitions underscore the wide variety of unresolved TCPA issues that impact TCPA litigation today.

The FCC closed out only a few of these petitions during the past year.  In two rulings issued on March 27, 2014, the Commission interpreted provisions of the TCPA that prohibit auto-calling or auto-texting cell phones without the recipients’ prior express consent.  See Order, In re GroupMe, Inc./Skype Communications S.A.R.L Petition for Expedited Declaratory Ruling, 59 Communications Reg. (P&F) 1554 (F.C.C. Mar. 27, 2014); see also In the Matter of Cargo Airline Assn. Pet. for Expedited Declaratory Ruling, 59 Communications Reg. (P&F) 1509 (F.C.C. Mar. 27, 2014).  In the GroupMe ruling, the FCC found that text-based social networks may send administrative text messages confirming consumers’ interest in joining text message groups, without violating the TCPA.  The Commission found that the consumers must provide express consent to participate in the groups but that the consent may be conveyed to the text-based social network by an intermediary.  In the Cargo Airline ruling, the FCC granted an exemption under the TCPA to allow package delivery services to provide automatic delivery notification alert calls and texts to cell phones of recipients of packages, even without their prior express consent.  However, this exemption was granted only under narrow conditions: the sender of the package must indicate that the recipient consents; the delivery notifications must be purely informational; the recipient of the call/text must not be charged; and the recipient must be able to easily opt out of future messages.  Finally, in an October 2014 ruling addressing issues raised by 24 pending FCC petitions, the Commission decided that the TCPA required “opt-out” language on all fax advertisements, even those sent with the prior express consent of the recipient.  In the Matter of Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 61 Communications Reg. (P&F) 671 (F.C.C. Oct. 30, 2014).  The ruling also granted a retroactive waiver to the petitioners and other similarly situated parties since the requirement was previously ambiguous.

      D.   The NIST Cybersecurity Framework

On February 12, 2014, the National Institute of Standards and Technology (“NIST”) released its Cybersecurity Framework (the “Framework”).[32]  The Framework is NIST’s response to President Obama’s direction, set forth in Executive Order 13636, Improving Critical Infrastructure Cybersecurity, to develop a voluntary cybersecurity framework for reducing cybersecurity risk to critical infrastructure.[33]  The Framework is intended to provide a “prioritized, flexible, repeatable, performance-based, and cost-effective approach”[34] to assist organizations in the critical infrastructure sectors in managing cybersecurity risk.  NIST developed the Framework based on input from various constituencies regarding existing standards, guidelines, and best practices for managing cybersecurity threats.  The process involved more than 3,000 critical infrastructure owners and operators, industry leaders, government partners, and other stakeholders.  The final Framework was released with the guidance that it is to be a “living” document shaped by user feedback and experiences.[35]

The Framework, which is essentially a voluntary cybersecurity risk management tool, is intended to encourage private and public sector organizations to develop more effective approaches to managing cybersecurity threats.  The voluntary Framework is specifically intended to serve as a resource for organizations in the sixteen critical infrastructure sectors identified by the Administration.  The Framework broadly defines “critical infrastructure” to include both organizations traditionally associated with national security, such as those in the defense industrial base, and organizations that one may not automatically associate with national security concerns, such as food- and agriculture-related enterprises, commercial facilities (including sports arenas, shopping malls, and apartment buildings), and certain manufacturing enterprises.

The Framework seeks to provide a common language and mechanism for organizations to achieve five main objectives: (1) describe their current cybersecurity posture; (2) describe their target state for cybersecurity; (3) identify and prioritize opportunities for improvement within the context of risk management; (4) assess progress toward the target state; and (5) foster communication among internal and external stakeholders.[36]  The Framework itself comprises three parts: the Framework Core, the Framework Profile, and the Framework Implementation Tiers.  The Core consists of five Functions–Identify, Protect, Detect, Respond, and Recover–that provide a high-level strategic categorization of cybersecurity risks.[37]  These Functions are, in turn, broken into categories and subcategories, and matched with existing domestic and international standards, guidelines, and best practices.[38]  The Framework Profile is designed to align industry standards and best practices with the specific business requirements, resources, and risk tolerance of an organization.[39]  Organizations can use the Profile to develop a roadmap to reduce cybersecurity risks and to conduct self-assessments.  The final part, the Implementation Tiers, categorizes an organization’s cybersecurity practices into one of four levels based on the organization’s current risk management practices, threat environment, legal and regulatory requirements, business/mission objectives, and organizational constraints.[40]  This categorization allows organizations to assess their cybersecurity practices, which can range from informal, reactive implementations to flexible, risk-informed approaches.
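
To make this structure concrete, the sketch below shows one way an organization might represent a slice of the Framework in code.  This is a Python illustration only: the Function and category names come from the Framework Core (version 1.0), but the data layout, the selection of categories, and the tier assignments are our assumptions, not part of the Framework:

    # A slice of the Framework Core: Functions broken into categories,
    # identified here by their Framework identifiers.
    framework_core = {
        "Identify": ["Asset Management (ID.AM)", "Risk Assessment (ID.RA)"],
        "Protect":  ["Access Control (PR.AC)", "Data Security (PR.DS)"],
        "Detect":   ["Anomalies and Events (DE.AE)",
                     "Security Continuous Monitoring (DE.CM)"],
        "Respond":  ["Response Planning (RS.RP)", "Mitigation (RS.MI)"],
        "Recover":  ["Recovery Planning (RC.RP)", "Improvements (RC.IM)"],
    }

    # A Profile records current and target states per category; the gap
    # between them becomes the organization's improvement roadmap.
    profile = {
        "Data Security (PR.DS)": {"current": "Tier 1 (Partial)",
                                  "target":  "Tier 3 (Repeatable)"},
    }

    for category, states in profile.items():
        print(f"{category}: {states['current']} -> {states['target']}")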

In conjunction with the release of the Cybersecurity Framework in February 2014, NIST also published a Cybersecurity Framework Roadmap that detailed high-priority areas for development, alignment, and collaboration, with the intent to address these areas in future versions of the Cybersecurity Framework.[41]  These areas include the development of better identity and authentication technologies, automated indicator sharing, conformity assessments, data analytics, the cybersecurity workforce, supply chain risk management, and technical privacy standards.[42]  Pursuant to the Roadmap, NIST continues to serve as a convener and coordinator, assisting organizations in private industry and the public sector in understanding, using, and improving the Framework.

Throughout 2014, NIST continued its engagement with, and sought input from, stakeholders in government, industry, and academia.  NIST focused specifically on the topic of privacy engineering, which “focuses on providing guidance to information system users, owners, developers and designers that handle personal information.”[43]  Despite the significance of privacy today, the field has yet to fully develop models, technical standards, and best practices for the protection of individuals’ privacy and civil liberties.  NIST held two privacy engineering workshops, one in April and a second in September, to address this gap and to consider draft privacy engineering definitions and concepts.[44]

NIST has also sought to increase awareness of the Framework and encourage organizations to use the Framework as a tool to manage cybersecurity risks.  For instance, NIST issued a formal Request for Information in August to solicit feedback on the level of awareness of the Framework and the Roadmap and initial experiences with the Framework from critical infrastructure organizations as well as government organizations and other stakeholders, including consumers and solution providers.[45]  And in October 2014, NIST hosted a workshop to gather input from critical infrastructure stakeholders about their awareness of and initial experiences with the Framework.[46]  These engagement efforts are intended to inform NIST’s planning and decision-making relating to the Framework, including both future versions of the Framework as well as the development of tools and resources to enable more effective use of the Framework.  In addition, the RFI responses are intended to inform the Department of Homeland Security’s Critical Infrastructure Cyber Community C3 Voluntary Program, which was established as a public-private partnership to increase awareness and use of the Framework.[47]

In 2015, NIST will continue to focus on increasing awareness of the Framework and on facilitating its use through the development of information and training materials.[48]  NIST does not intend to revise the Framework itself in 2015, although it will continue to focus on the areas identified in the Roadmap.[49]  NIST plans to develop publicly available reference materials that will help organizations understand how to better use the Framework and how to integrate the Framework’s cybersecurity risk management approach into an organization’s broader risk-management program.[50]  Finally, NIST expects to continue to hold workshops, webinars, and similar meetings with stakeholders on the Framework.

III.   Legislative Developments

In the United States, legislative debates in the past two years have focused on disclosures of the NSA’s surveillance programs, data breach notification laws and cybersecurity, digital privacy, and other issues.  At the federal level, there has been much debate but little progress on passage of legislation in these areas.  In the days leading up to the State of the Union address on January 20, 2015, the White House announced a new legislative proposal outlining significant cybersecurity and data privacy initiatives intended to reboot the administration’s stalled efforts to pass cybersecurity legislation over the last few years.  Meanwhile, several states have moved to fill the void left by perceived Congressional inaction.

      A.   Proposed Federal Data Breach Notification and Cybersecurity Legislation

            1.   Legislation Arising From Prominent Retailer Data Breaches

The many attacks on computer systems of major companies over the past year (discussed in detail in Section I.B.1 above) inspired a wave of legislation aimed at preventing such massive data breaches.  One prominent piece of proposed legislation is the Personal Data Privacy and Security Act of 2014, S. 1897, sponsored by Senator Patrick Leahy (D-VT) and cosponsored by five Democratic senators, which was introduced in the Senate on January 8, 2014.  An identical version of this bill was sponsored in the House by Rep. Carol Shea-Porter (D-NH) and introduced on February 4, 2014 (H.R. 3990).  This proposed legislation would create a federal standard for notifying customers of a data breach and impose additional restrictions on the storage of customer data, including requiring the implementation of a comprehensive data privacy and security program.  Specifically, the bill would require businesses to comply with FTC guidelines for the protection of sensitive personally identifiable information and implement comprehensive personal data privacy and security programs.  In addition, businesses would be required to: (1) identify reasonably foreseeable vulnerabilities that could result in unauthorized access, disclosure, use, or alteration of sensitive information; (2) assess the likelihood of and potential damage from unauthorized access to, or disclosure, use, or alteration of sensitive information; (3) assess the sufficiency of their policies, technologies, and safeguards to minimize risks from unauthorized access, disclosure, use, or alteration of sensitive information; (4) assess the vulnerability of sensitive information during destruction and disposal of such information; (5) design their personal data privacy and security programs to control risks; (6) adopt measures commensurate with the sensitivity of the data as well as the size, complexity, and scope of activities of the entities that control access to systems and facilities containing sensitive information; (7) establish procedures for minimizing the amount of sensitive information maintained; and (8) take steps to ensure appropriate employee training and regular testing of key controls, systems, and procedures of the entity’s personal data privacy and security program.  Senator Leahy’s bill defines “personally identifiable information” broadly; the definition includes “any information, or compilation of information, in electronic or digital form that is a means of identification.”  It would exempt from its provisions, however, certain financial and health-care institutions already subject to the data security requirements of the Gramm-Leach-Bliley Act or HIPAA.  The Senate bill has been in the Committee on the Judiciary since January 2014.  In February 2014, the House bill was referred for consideration to the Committees on the Judiciary, Energy and Commerce, Financial Services, Oversight and Government Reform, and the Budget.

Another bill introduced the same week, the Data Security Act of 2014, S. 1927, sponsored by Senator Tom Carper (D-DE) and cosponsored by Senator Roy Blunt (R-MO), would provide “clarity and certainty to all parties involved” by setting up a coherent set of national standards to replace the “patchwork” of 49 separate data security laws across U.S. states and territories, according to the bill’s sponsors.  The Data Security Act’s definition of personal information requiring protection is narrower than the definition in Senator Leahy’s bill: it explicitly excludes “publicly available information that is lawfully made available to the general public” and omits, for example, biometric data.  The bill would require notification of affected individuals only in the event of a breach that discloses information “reasonably likely to be misused in a manner causing substantial harm or inconvenience” (S. 1927, § 3(c)), while Senator Leahy’s bill requires notification when there is a “reasonable basis to conclude” that access to the information “is for an unauthorized purpose” (S. 1897, § 3(10)(A)).  The Data Security Act has been under consideration by the Committee on Banking, Housing, and Urban Affairs’ Subcommittee on National Security and International Trade and Finance since February 2014.

Additionally, the Data Security and Breach Notification Act of 2014, S. 1976, sponsored by Senator John D. Rockefeller IV (D-WV) with three cosponsors, would–like Senator Leahy’s bill–give the FTC authority to set security standards for companies that hold consumers’ personal and financial information, and would also obligate companies to notify affected customers “following the discovery of a breach of security” of their data system.  The bill defines “breach of security” broadly: it is a compromise in data security that results in “unauthorized access to or acquisitions of personal information.” Like Senator Carper’s bill, Senator Rockefeller’s bill defines “personal information” more narrowly than Senator Leahy’s bill; such information includes any “non-truncated social security number,” credit card/account number with the access code or password “that is required for an individual to withdraw funds, or engage in a financial transaction,” or an individual’s full name in combination with another piece of specific identifying information, such as a driver’s license number, unique account identifier, or biometric data.  See S. 1976 § 6(9)(a).  No action has been taken on this proposed legislation since its referral to the Committee on Commerce, Science, and Transportation on January 30, 2014.

            2.   Cybersecurity Legislative Efforts

Following President Obama’s call for comprehensive cybersecurity legislation in his 2013 State of the Union address, members of Congress proposed several bills in that area, but it is unclear whether any legislation will soon pass.

Most notably, the Cyber Intelligence Sharing and Protection Act (“CISPA”), H.R. 624, introduced by Rep. Mike Rogers (R-MI), would create procedures for private entities to share cyber threat information with the Director of National Intelligence.  The bill was approved by the House and is pending in the Senate Select Committee on Intelligence.

In November 2013, Department of Homeland Security (“DHS”) Acting Undersecretary for National Protection and Programs Suzanne Spaulding called for legislation to exempt certain critical infrastructure operators (including banks and power grids) from liability for providing information about cyberattacks to the Department.  No movement on such specific legislation has yet occurred.  However, there has been a recent flurry of related legislation.  Congress passed two related statutes pertaining to cybersecurity and federal agencies as attachments to the Border Patrol Agent Pay Reform Act, S. 1691.  The first, the DHS Cybersecurity Workforce Recruitment and Retention Act of 2014, authorizes DHS to establish cybersecurity positions in the agency as positions in the “excepted service” not subject to the regular federal pay scale, and sets forth DHS’s authority to make appointments, fix pay rates, and provide incentives and allowances for such positions.  The second, the Homeland Security Cybersecurity Workforce Assessment Act, further requires federal agencies to identify and code cybersecurity workforce positions within the agency, directs each agency head to submit a report identifying critical needs in the agency’s cybersecurity workforce, and requires the Office of Management and Budget (“OMB”) to provide guidance to agencies on identifying cybersecurity workforce needs.  The President signed both bills into law on December 18, 2014.

Relatedly, the Cybersecurity Workforce Assessment Act, introduced as H.R. 2952 by Rep. Patrick Meehan (R-PA) on August 1, 2013, and signed into law by President Obama on December 18, 2014, directs DHS to develop a comprehensive strategic plan to enhance the readiness, capacity, training, recruitment, and retention of the DHS cybersecurity workforce, and to report to Congress about the progress of certain critical infrastructure security technologies.  The statute also requires DHS to develop a plan for a Cybersecurity Fellowship Program offering a tuition payment plan for students pursuing undergraduate and doctoral degrees who agree to work for DHS for an agreed-upon period of time.

The National Cybersecurity and Critical Infrastructure Protection Act, H.R. 3696, introduced by Rep. Michael T. McCaul (R-TX) with three cosponsors, would require the Secretary of Homeland Security to conduct and share the results of certain cybersecurity activities.  It also would establish a federal civilian information sharing interface to share cyberthreat information among public and private entities and critical infrastructure owners and operators.  The bill was approved by the House Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Security Technologies in January 2014, and was to be reported in February by the full Homeland Security Committee.  While there has been no further action since February, President Obama recently signed into law similar legislation, the National Cybersecurity Protection Act of 2014 (introduced as S. 2519 by Sen. Thomas Carper (D-DE) on June 24, 2014).  This statute codifies DHS’ National Cybersecurity and Communications Integration Center (“NCCIC”) as a “federal civilian interface” to provide both federal and nonfederal entities “shared situational awareness” to address cybersecurity risks, coordinate the sharing of cybersecurity information, conduct and share analysis, and provide technical assistance and recommendations on network security.  Notably, the statute makes clear that nothing in the Act shall be construed as providing new regulatory authority.

The Cybersecurity Enhancement Act of 2014 (introduced as S. 1353 by Sen. John Rockefeller (D-WV) on July 24, 2013), signed into law by President Obama on December 18, 2014, codifies NIST’s process for developing industry-driven, consensus-based, voluntary cybersecurity standards for critical infrastructure.  Also without conferring any new regulatory authority, it directs and authorizes the federal government to support research, raise public awareness of cyber risks, and improve the nation’s cybersecurity workforce.  Finally, Congress recently passed two more general statutes that address cybersecurity on a more administrative level.  First, the Consolidated and Further Continuing Appropriations Act, H.R. 83, was signed into law on December 16, 2014, as Public Law No. 113-235.  The relevant provision prohibits the Departments of Commerce and Justice, the National Aeronautics and Space Administration, and the National Science Foundation from acquiring high-impact or moderate-impact information systems without first assessing the risk of cyberespionage or sabotage associated with the acquisition of such systems from any country posing a cyber threat, including China.  The legislation further directs the Securities and Exchange Commission to submit a report to Congress on its efforts to modernize disclosure requirements, including an update on cybersecurity.

The Federal Information Security Modernization Act of 2014, signed by President Obama on December 18, 2014, codifies DHS’ role in administering the implementation of information security policies and practices in civilian federal information systems, while retaining OMB’s role in overseeing the security of federal government information systems generally.  It further describes the information security responsibilities of various federal agencies, including eliminating the requirement that such agencies file annual checklists showing the steps taken to secure their systems.  Instead, the statute requires agencies to continuously diagnose and mitigate cyber threats and vulnerabilities.  Overall, the statute increases DHS’ role in overseeing the cybersecurity efforts of federal agencies.

            3.   Health Exchange Security and Transparency Act

On January 10, 2014, the House of Representatives passed the Health Exchange Security and Transparency Act of 2014 (“H.R. 3811”) by a 291-122 vote.  This bill would require the Department of Health and Human Services to notify consumers participating in health insurance marketplaces (also known as insurance exchanges) of any breach of their personal information within two days of discovering a breach.  The one-sentence bill, introduced by Representative Joe Pitts (R-PA) and 75 cosponsors on January 7, 2014, would apply to “any system maintained” by a federal or state-run insurance exchange.

Dozens of House Democrats sided with Republicans in support of H.R. 3811, likely in response to the epidemic of nationwide cybersecurity breaches and the well-publicized issues surrounding the rollout of the HealthCare.gov website.  After passage by the House, the bill was referred to the Senate Committee on Health, Education, Labor, and Pensions.  To date there has been no action in the Senate.  The White House issued a statement opposing H.R. 3811, stating that the measure “would impose an administratively burdensome reporting requirement that is less effective than existing industry standards and those already in place for federal agencies that possess such information.”[51]

            4.   The Law Enforcement Access to Data Stored Abroad Act

On September 18, 2014, Senators Orrin Hatch (R-UT), Chris Coons (D-DE), and Dean Heller (R-NV) introduced bipartisan legislation in the Senate that would amend ECPA to address conflicts of law and safeguard Americans’ electronic data stored abroad.  ECPA, discussed in detail above in Section I.B.4, seeks to balance individuals’ rights to privacy in their electronic communications against the legitimate needs of law enforcement to access records stored by service providers, authorizing governmental entities to obtain certain categories of data from providers using warrants and subpoenas.  However, ECPA does not extend this power extraterritorially and therefore does not permit courts to issue warrants for law enforcement to seize covered data that service providers store abroad.  The Law Enforcement Access to Data Stored Abroad (“LEADS”) Act, S. 2871, would amend ECPA to explicitly require a search warrant (and authorize the issuance of such extraterritorial warrants) for law enforcement to obtain the contents of electronic communications stored overseas that belong to a “U.S. person”–defined as a U.S. citizen, permanent resident, or company incorporated in the U.S.  To address a concern of service providers, the LEADS Act also would require the court to modify or vacate the warrant if compliance would require the service provider to violate the laws of the country in which the electronic data is stored.  To address users’ data privacy interests, the bill would require notifying the user of the warrant, the law enforcement inquiry, and any user data disclosed pursuant to the warrant, although notice may be delayed for up to 10 business days.  The bill is currently pending in the Senate Judiciary Committee.

            5.   Protecting Student Privacy Act

On July 30, 2014, Senator Edward Markey (D-MA) introduced the Protecting Student Privacy Act of 2014.  The bill currently has three cosponsors in the Senate.  This proposed legislation would require all state educational agencies or institutions receiving federal funding to implement information security policies that: “(i) protect personally identifiable information from education records maintained by the educational agency or institution; and (ii) require each outside party to whom personally identifiable information from education records is disclosed to have information security policies and procedures that include a comprehensive security program designed to protect the personally identifiable information from education records.”  S. 2690, § 2(2).  The bill was introduced amid increased concern over how schools use the sensitive student data they collect, and it seeks to amend the Family Educational Rights and Privacy Act of 1974 to address this concern.

The bill would further safeguard student data by requiring each educational agency or institution receiving federal funds to ensure that any third party with access to student data holds the data in a manner that gives parents the right to access the information and to challenge, correct, or delete inaccurate information; maintains a policy that promotes data minimization; and maintains a policy requiring that personally identifiable information be destroyed when no longer needed for the specified purpose of its collection.  The bill has been in the Senate Committee on Health, Education, Labor, and Pensions since July 30, 2014.

            6.   Do Not Track Kids Act

The Do Not Track Kids Act of 2013 (H.R. 3481; S. 1700), sponsored in the House of Representatives by Rep. Joe Barton (R-TX) and 46 cosponsors, and sponsored in the Senate by Senator Edward Markey (D-MA) and four cosponsors, was introduced in response to what its sponsors describe as children spending increasing amounts of time online, especially on mobile devices, and at ever younger ages.  The bill addresses the collection, use, and disclosure of the personal information of children and minors, following a failed attempt to enact a similar law in 2011.  It would update COPPA, discussed previously in the context of FTC enforcement in Section II.B., which requires operators of commercial websites and online services directed to children under the age of 13 to abide by various privacy safeguards as they collect, use, or disclose personal information collected from children.

The Do Not Track Kids Act would impose age-based restrictions beyond those in the current COPPA law by prohibiting Internet companies from collecting personal and location information from anyone 13 to 15 years old without the user’s consent, while also requiring consent of the parent or teen prior to sending targeted advertising to the teen.  The bill also would create an “eraser button” by requiring companies to permit users to eliminate publicly available personal information content when technologically feasible, and empower the FTC to promulgate rules requiring operators to implement appropriate “eraser button” mechanisms.  The “eraser button” provision is similar to legislation recently enacted in California, which allows minors under 18 to request that companies delete specified information that the requestor has previously posted online.  (We discuss this law in Section III.B.6 below.)  The Do Not Track Kids Act also would prohibit companies from collecting personal information from minors without adopting a “Digital Marketing Bill of Rights for Teens” that is consistent with the Fair Information Practices Principles established by the bill.  Companies would be required to explain the types of personal information collected and how that information is used and disclosed, and to disclose any personal information collection policies.

The House and Senate versions of the bill are substantially identical.  The Senate bill is in the Committee on Commerce, Science and Transportation, and the House bill is in the Energy and Commerce Committee’s Subcommittee on Communications and Technology; both have remained in committee since November 2013.

            7.   The Edward Snowden Affair and NSA Surveillance

                    a.   Background

In 2013, Edward Snowden’s leaks regarding the U.S. National Security Agency (“NSA”) “PRISM” program revealed that the government collects massive amounts of telephone and Internet data about foreigners and Americans.  Snowden’s revelations have transformed the landscape of the national and international discussion about privacy and national security.

Mr. Snowden’s leaks led to revelations that the NSA collects, retains, and can search a large trove of data from domestic and foreign communications, acting under authority granted to it under Section 215 of the USA PATRIOT Act.  Such surveillance includes bulk collection of telephonic metadata, including phone numbers called, the time a call was made, and the duration of a given call.  NSA analysts may search a database of such information based on a reasonable, articulable suspicion that the telephone number is connected to terrorism.  PRISM was first authorized during the Administration of President George W. Bush in the Protect America Act of 2007 and the FISA Amendments Act of 2008.  PRISM’s data collection practices also have been approved by the Foreign Intelligence Surveillance Court (“FISC”).  Yet the extent of the government’s surveillance was unknown to the general public until Snowden’s disclosures.

Mr. Snowden’s leaks also revealed, among other things, that the NSA’s interception of foreign targets’ communications pursuant to Section 702 of the Foreign Intelligence Surveillance Act (“FISA”) also resulted in the collection of the communications of American citizens, despite legal protections against domestic surveillance.

                    b.   Significant Disclosures in 2014

Snowden’s initial revelations were published in a series of articles in the British newspaper The Guardian in the summer and fall of 2013.  Since then, further disclosures and the release of certain court documents have shed additional light on U.S. and international government surveillance programs.

There were several significant disclosures about the different types of NSA surveillance and monitoring programs that currently exist or are in development:

  • Journalists for The Intercept described an NSA computer program called TURBINE, which allows the NSA to infect, on a mass scale and in automated fashion, computers and phone networks around the world with spyware.  The spyware allows the NSA to break into targeted computers and siphon data from Internet and phone networks located abroad.
  • It was also revealed that the NSA intercepts routers, servers, and other networking equipment before it is exported outside the United States in order to implant surveillance tools into the systems.[52]
  • In an interview, Snowden discussed the MonsterMind program, a cyber-warfare program under development by the NSA, intended to discover known or suspected cyberattacks from abroad, and automatically fire back.[53]
  • It was revealed that the NSA harvests millions of faces from Internet images for use in a facial recognition database.[54]

There were additional disclosures about the scope of the NSA’s surveillance program, and the extent to which the government monitors individuals who are not suspected terrorists and organizations not traditionally affiliated with terrorists.  For example, Mr. Snowden informed the Council of Europe that the United States has monitored the confidential communications of the leaders of a number of civil and non-governmental organizations, including Amnesty International and Human Rights Watch.  And in March, Director of National Intelligence James Clapper admitted that U.S. intelligence agencies had searched the contents of emails and other electronic communications of U.S. citizens without warrants.  Clapper asserted that FISA, which prohibits the government from targeting Americans, nonetheless authorized the collection of Americans’ data because the data was obtained in the course of targeting foreign suspects.[55]  Additionally, The Washington Post, relying on information provided by Snowden, reported that 90% of those placed under surveillance in the U.S. are not intended targets.

There were also disclosures about the extent to which foreign governments cooperate with the NSA.  For example, it was revealed that the NSA’s Australian counterpart spied on communications between the U.S. law firm Mayer Brown and its client, the government of Indonesia, and offered to provide the acquired information to the NSA.  Mayer Brown was representing Indonesia in a trade dispute with the U.S. government, and the surveillance may have swept in information protected by the attorney-client privilege.  Additionally, a German newspaper revealed that Germany’s secret service shared at least 5% of the Internet data it had collected about German citizens with the NSA.[56]

In September, The Washington Post published a story that shed light on the genesis of the PRISM program.  The Post reported that in 2008, the government had threatened to fine Yahoo!, Inc. $250,000 per day if it did not comply, while its appeal was pending, with a FISC order granting the government access to Yahoo! emails and email metadata.[57]  Yahoo! had originally contested the government’s demand for user data, arguing that it violated the Fourth Amendment’s prohibition against unreasonable searches and seizures, but was unsuccessful.  Yahoo! appealed the decision, and the government threatened the fine if Yahoo! did not begin handing over data while the case was on appeal.  Yahoo! complied, and ultimately lost the appeal.  The details of the government order came to light in September 2014, when 1,500 pages in the FISC case were unsealed after Yahoo! won a long battle to declassify the documents.  The unsealed records revealed that this FISC ruling became the key decision in the development of PRISM, helping government officials convince several companies to comply with government demands for data.[58]

                    c.   Proposed Reform Legislation

In late 2013 and early 2014, several members of Congress introduced bills aimed at reforming PRISM.  Of the proposed bills, the one that has come closest to passing is the Uniting and Strengthening America by Fulfilling Rights and Ending Eavesdropping, Dragnet-collection, and Online Monitoring Act (the “USA FREEDOM Act”).  The bill was introduced on October 29, 2013 by Rep. Sensenbrenner (R-WI) and Sen. Leahy (D-VT).

A version of the bill passed the U.S. House of Representatives on May 22, 2014.  The House version did not ban bulk government collection of data, but rather allowed such collection if approved by a FISC order based on reasonable, articulable suspicion of wrongdoing.  The bill also would have renewed the USA PATRIOT Act through the end of 2017.  Because the bill permitted bulk collection of Americans’ data, it was criticized by many civil libertarians and technology companies.

In July 2014, Senator Leahy introduced a new version of the bill in the Senate.  The Senate version would require the government to limit the scope of its bulk data collection–for example, it specifies that the government may not gather in bulk data relating to a particular phone or Internet company or to a broad geographic region.  Further, the Senate bill would have left the phone and Internet data of Americans in the hands of the service providers, not the government.  The government could obtain records of calls made and received by individual Americans who were the target of a terrorist communication only after demonstrating a reasonable, articulable suspicion that the conversation involved a terrorist.  The Senate version of the USA FREEDOM Act had wide support from the technology industry, many privacy advocacy groups, Democrats, some Republicans, the White House, and the intelligence community.  In November 2014, however, the bill fell two votes short of the 60 needed to overcome a Republican filibuster.[59]

Although the Obama administration supported the USA FREEDOM Act, in December 2014 the administration announced that it would renew the PRISM program.  The government sought a 90-day reauthorization of the existing program, as modified by changes directed by President Obama in January 2014.  Those changes require the NSA to obtain a court order before searching its database of metadata and phone and Internet data, and limit searches to phone numbers two “hops,” or connections, away from a target (instead of the previous rule of three hops).[60]
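
To make the “hops” concept concrete, below is a minimal Python sketch of contact chaining under a two-hop limit.  The call graph, numbers, and function name are entirely hypothetical illustrations, not a depiction of any actual NSA system: starting from a target, a two-hop query reaches the target’s direct contacts and those contacts’ contacts, but nothing further out.

    # Conceptual sketch of contact chaining under a two-"hop" limit.
    # The call graph below is hypothetical.
    from collections import deque

    call_graph = {
        "target": ["A", "B"],  # hop 1: numbers in direct contact with the target
        "A": ["C"],            # hop 2: contacts of the target's contacts
        "B": ["D"],
        "C": ["E"],            # hop 3: out of reach under a two-hop rule
    }

    def reachable_within(graph, start, max_hops):
        """Collect every number reachable from start in at most max_hops."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            number, hops = queue.popleft()
            if hops == max_hops:
                continue  # do not chain past the hop limit
            for contact in graph.get(number, []):
                if contact not in seen:
                    seen.add(contact)
                    queue.append((contact, hops + 1))
        return seen - {start}

    print(sorted(reachable_within(call_graph, "target", 2)))  # ['A', 'B', 'C', 'D']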

                    d.   Technology Sector Response

The technology industry faced significant criticism in 2013 and 2014 for what many characterized as aiding, or at least being complicit in, the handover of troves of consumer data to the U.S. government.  This criticism cost U.S.-based technology companies many international customers, with industry experts predicting that the U.S. cloud computing industry could lose between $35 billion and $180 billion by 2016.[61]  In response, several technology companies have begun to build data centers overseas.

                    e.   Legal Challenges to Surveillance Practices

There have been three significant decisions about the legality of government surveillance since the first Snowden revelation.  In Klayman v. Obama, 957 F. Supp. 2d 1 (D.D.C. 2013), Judge Richard Leon of the U.S. District Court for the District of Columbia held that broad-scale collection of Americans’ telephone metadata is likely unconstitutional.  Judge Leon called the program “almost Orwellian” and questioned the efficacy of the program in combatting terrorism.  He granted an injunction ordering the government to stop collecting the plaintiffs’ telephone data and to destroy the existing records; however, the injunction was stayed pending appeal.  Appellate arguments took place in November 2014.

In ACLU v. Clapper, 959 F. Supp. 2d 724 (S.D.N.Y. Dec. 27, 2013), Judge William Pauley held that the NSA phone records collection is constitutional and necessary to national security.  The case is currently on appeal, and arguments before the Second Circuit took place in September 2014.

The decisions in Klayman and ACLU v. Clapper took divergent views on the precedential value of Smith v. Maryland, 442 U.S. 735 (1979).  There, the Supreme Court held that there is no reasonable expectation of privacy in information voluntarily turned over to third parties such as telephone companies.  Klayman distinguished Maryland as outdated, while ACLU v. Clapper determined that it was controlling precedent.

More recently, a June 2014 decision of the District of Idaho held that Maryland precluded that court from ruling in the plaintiff’s favor on allegations that the government violated her Fourth Amendment rights by collecting cellphone tracking location data.  Smith v. Obama, 24 F. Supp. 3d 1005 (D. Idaho June 3, 2014).  Judge B. Lynn Winmill wrote, however, that he believed Maryland to be outdated.  Judge Winmill called Judge Leon’s decision in Klayman “thoughtful and well-reasoned,” urging that it should “serve as a template for a Supreme Court opinion.”  Id. at 1009.  Smith v. Obama is currently on appeal.

      B.   Recently Enacted State Privacy Laws

State legislatures have continued to pass laws covering a wide range of topics relating to information privacy and security, with important impacts on private sector businesses.

            1.   Data Breach Notification

Several states enacted new data breach notification laws, and those with preexisting laws reformed their data breach reporting requirements.  For example, in 2013, California amended its groundbreaking data breach notification law by broadening the definition of “personal information.”  Under Section 1798.82 of the California Civil Code, a breach of usernames, passwords, or security questions now triggers a notification obligation, in addition to breaches of Social Security numbers, driver’s license numbers, credit card information, and medical and health insurance information.

In 2014, California further amended its data breach notification law by passing Assembly Bill 1710.  Under the amendment, which took effect on January 1, 2015, the law applies to businesses that merely maintain personal information (in addition to businesses that own and license personal information, which were already covered).  Importantly, this amendment requires third-party service providers that obtain personal information from an owner or licensee of the personal information to implement data security practices.

Iowa also made an interesting modification to its data breach notification law, amending the definition of “breach” to include the acquisition of personal information that is maintained in paper form.  See S.F. 2259, 2013-2014 Reg. Sess. (Iowa 2014) (also requiring notification to the state attorney general within five days if a breach affects more than 500 Iowa residents).

New York’s S. 2605-D, enacted in 2013, also made minor changes to the state’s data breach law by requiring public or private entities’ breaches of “private information” to be disclosed to the newly formed Office of Information Technology Services instead of the Office of Cyber Security & Critical Infrastructure Coordination.  The New York law also continues to require data breach notification to the affected individual, the New York Attorney General, and the Consumer Protection Board.  See A. 3005-D, S. 2605-D (N.Y. 2013).[62]

Florida also enacted an updated data privacy law, which went into effect on July 1, 2014.  See Information Protection Act, Fla. Stat. § 501.171.  Following California’s lead, Florida expanded the definition of “personal information,” for which unauthorized disclosure can trigger breach notification obligations.  Among other things, Florida’s new law also requires notification to affected persons within 30 days after discovery of a breach as well as notification to the state’s Department of Legal Affairs following any breach involving 500 or more individuals in Florida.

Texas, Vermont, and North Dakota are among other states that have recently amended their data breach laws.[63]  With the passage of Kentucky’s law in April 2014, only three states–New Mexico, South Dakota, and Alabama–have no form of a data breach notification law.  See H.B. 232, 2014 Gen. Assemb., Reg. Sess. (Ky. 2014).

            2.   Credit Card Monitoring After Data Breach

In 2014, California enacted a law regulating the way in which companies may offer credit card monitoring to individuals whose data is compromised by a data security breach.  So far it is the only state to do so.  Several other states, however, have considered legislation requiring businesses to offer credit monitoring services to individuals impacted by data breaches.

As of January 1, 2015, if a business is the source of a security breach, “an offer to provide appropriate identity theft prevention and mitigation services [to California residents], if any, shall be provided at no cost to the affected person for not less than 12 months.”  (emphasis added).  The business must also provide any information necessary for residents to take advantage of the services.  Some commentators have read this provision to require businesses to provide prevention and mitigation services after a security breach, but because the law includes the words “if any,” it merely regulates the type of credit monitoring a company must offer if the company chooses to offer credit monitoring at all.  The law is unlikely to have a major impact, as most companies that currently offer customers credit monitoring already provide at least 12 months of cost-free service.[64]

            3.   Social Media Access

Following the lead of Maryland, which enacted the first such bill (S.B. 433/H.B. 964, 2012 Reg. Sess. (effective Oct. 1, 2012)), a majority of states have enacted or considered legislation that would enhance employees’ privacy by prohibiting employers from requiring or requesting that current or prospective employees provide passwords to their social media accounts.[65]  New Mexico and several other states have extended this principle by enacting legislation prohibiting colleges from requiring students or applicants to provide access to social media accounts.[66]  In an interesting inverse of these new laws, Delaware enacted a law that provides heirs with access to a deceased person’s digital assets.  Fiduciary Access to Digital Assets and Digital Accounts, H.B. 345, 147th Gen. Assemb. (Del. 2014).

            4.   Drone Regulation

Over a dozen state legislatures have taken action on the use and regulation of drones, typically called unmanned aircraft systems (“UAS”).  To date, these laws typically regulate how a government agency, primarily law enforcement, may use UAS.  For example, Florida’s Freedom from Unwanted Surveillance Act, S.B. 92, enacted on April 26, 2013, restricted law enforcement’s use of UAS, establishing a warrant requirement unless there is a terrorist threat or “swift action” is necessary to save a life or search for a missing person.  Any evidence obtained in violation of the law is inadmissible, and civil remedies are authorized if an individual is harmed by the inappropriate use of UAS.[67]  Louisiana made it a crime to use a UAS to conduct surveillance without the owner’s consent.  H.B. 1029, 2014 Reg. Sess. (La. 2014).

An increasing number of states are also taking steps to regulate the use of UAS by private individuals.  For example, North Carolina’s law created a broad set of regulations for UAS, including a similar prohibition on UAS surveillance without consent, and created a civil cause of action for anyone whose privacy is violated.  S.B. 744 (N.C. 2014).  In October 2014, California passed a law, considered by some to be aimed specifically at paparazzi photographers, that creates a cause of action for the violation of a person’s privacy and authorizes treble damages if the violating conduct was undertaken for commercial gain.  A.B. 2306 (Cal. 2014).  The Texas Privacy Act, H.B. 912, enacted on June 14, 2013, created 19 different categories of lawful public UAS use and criminalized capturing, possessing, and distributing an image captured by a UAS with the intent to conduct surveillance.[68]  See also S.B. 1892, 108th Reg. Sess. (Tenn. 2014) (creating a misdemeanor offense for intentional surveillance of another using UAS, but recognizing 18 lawful uses).

            5.   California’s “Do Not Track” Law

California’s “Do Not Track” law, Assembly Bill 370 (“A.B. 370”), went into effect on January 1, 2014.  A.B. 370 amends the California Online Privacy Protection Act (“CalOPPA”) to require additional disclosures in corporate privacy policies.  Intended to facilitate transparency as to how a company tracks and shares user data, it requires disclosures dealing with three areas: (1) “do not track” signals; (2) third-party tracking; and (3) conspicuous opt-out notices.  In May 2014, the California attorney general issued guidelines for compliance with the Do Not Track law.[69]

First, A.B. 370 requires companies to disclose how they respond to “do not track” signals.  A “do not track” signal is an HTTP header field emitted by an Internet browser when a user selects “Do Not Track” in his or her browser settings.  To date, there is no regulatory or industry consensus on the appropriate response to a “do not track” signal.  The Federal Trade Commission has informally called for companies to honor “do not track” requests in its educational publications, though it has not introduced formal rules on the subject.  Without a specific requirement to honor such signals, many companies choose not to do so.  A.B. 370 is intended, in part, to create pressure for companies to honor “do not track” signals by forcing them to reveal whether and how they honor the signal.  The attorney general guidelines clarify that this disclosure is only required if an online service collects personally identifiable information about a consumer’s online activities over time and across third-party websites or online services.
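
For readers interested in the mechanics, the following is a minimal, hypothetical Python sketch of how a server can detect the signal; under the WSGI standard, a browser’s “DNT: 1” request header is exposed to the application as the HTTP_DNT key.  The handler below is illustrative only and does not reflect any particular company’s practice.

    # Minimal WSGI application that checks for the "DNT: 1" request header.
    # Purely illustrative; a real service would tie this check to its
    # tracking and analytics logic.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # Browsers with "Do Not Track" enabled send "DNT: 1", which WSGI
        # exposes to the application as the HTTP_DNT environ key.
        dnt_enabled = environ.get("HTTP_DNT") == "1"
        body = (b"DNT signal received; tracking disabled for this request."
                if dnt_enabled else b"No DNT signal received.")
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()  # serve on localhost:8000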

Second, A.B. 370 requires companies to disclose whether third parties may collect personally identifiable information about a consumer’s online activities when they visit the company’s website.  Importantly, the amendment only requires companies to disclose whether third parties collect information, not details regarding what information the third parties track.[70]

Finally, A.B. 370 permits a company to satisfy the “do not track” disclosure requirement by providing a “clear and conspicuous” hyperlink in its privacy policy to an explanation of the company’s opt-out program, along with a mechanism for the user to opt out of the company’s tracking practices.  However, the attorney general guidelines recommend that online services directly disclose how they respond to do not track requests, rather than hyperlinking, and treat the linking option as the less transparent method of complying with A.B. 370.  Also, linking to opt-out procedures only satisfies a company’s obligation to disclose how it treats “do not track” signals; it does not satisfy A.B. 370’s third-party tracking disclosure requirement.[71]

            6.   California’s “Digital Eraser” Law

California Senate Bill 568, “Privacy Rights for California Minors in a Digital World” (“S.B. 568”), became effective on January 1, 2015.  S.B. 568 includes a provision known as the “Delete Button” or “Eraser” law, which allows minors under the age of 18 to request that companies delete specified information that the requestor previously posted online.  California is the first state to impose such an obligation on website and mobile app operators.  Additionally, the law bans companies from marketing prohibited items, such as alcohol, tobacco, and guns, to minors, and from compiling underage users’ personal information in order to market the prohibited items to them.

The “Delete Button” law applies to companies operating websites, mobile and Internet-based “apps,” and online services; however, it covers only websites and apps “directed” to minors or whose operator has actual knowledge that a minor is using them.  The law defines a site “directed to minors” as one “created for the purpose” of reaching predominantly those under 18.

All covered companies must notify minors of their right to request removal of unwanted information that the minor posted on the company’s website, and must remove such information upon request.  Alternatively, companies can comply with the law by providing minors with clear instructions for directly removing information that they posted.

The “Delete Button” law has a number of enumerated limits that affect its scope.  First, minors can request deletion only of information that they posted.  S.B. 568 does not allow a minor to request deletion of information that was stored, republished, or reposted by a third party.  Second, only “registered users” of a company’s website can request deletion.  Third, if a minor fails to follow the procedures for deletion, a company need not delete the information.  Fourth, those receiving compensation for posted content cannot request deletion.  Finally, minors cannot request deletion of posted content that is inaccessible to third parties.[72]

            7.   California’s Privacy for Student Records Laws

A number of privacy protections for students’ records went into effect in California on January 1, 2015.

Senate Bill 1177 prohibits the operator of an online service that the operator knows is marketed, designed, and primarily used for K-12 school purposes from knowingly engaging in targeted advertising to students or parents, creating a profile of a student using any information gathered through the service, or selling or disclosing a student’s information.  The operator also must maintain reasonable security measures to protect students’ information from unauthorized access, destruction, use, modification, or disclosure, and must delete school-controlled student information upon request from the school.

Assembly Bill 1584 governs contracts between local educational agencies and third-party digital record and educational software providers.  It permits a school to use a third party for the “digital storage, management and retrieval of pupil records, or to provide digital educational software, or both.”  But any contract with a third party must contain a number of provisions, including a description of the actions the third party will take “to ensure the security and confidentiality of pupil records,” a description of procedures that will be used to notify affected students or parents of any unauthorized disclosure, a prohibition against using students’ information for purposes other than those contractually required, and a certification that students’ information will not be available to the third party upon completion of the contract.

Finally, Assembly Bill 1442 establishes restrictions on school districts’ collection and use of pupils’ social media information.  Before gathering students’ information, a school must notify students and parents and provide an opportunity for public comment.  A school that gathers social media information must notify each parent that information is being collected, gather only information that pertains directly to school or student safety, provide the student with access to his or her information and an opportunity to correct or delete it, and destroy the information after the student turns 18 or is no longer enrolled in the school.  Third parties retained by schools to gather students’ social media information may not use the information for any purpose other than to satisfy the contract, may not sell or share the information, and must destroy the information immediately upon conclusion of the contract.

      C.   Legislative Outlook

Prompted by events such as the Snowden leaks, major retailer security breaches, and the Sony hacking incident, both state and federal lawmakers are expected to continue to treat surveillance, data privacy, and data breach notification legislation as priorities.  Other likely areas of legislative emphasis include: mobile data collection, retention, and sharing (covering text messaging and mobile chat applications as well as other services); continued attention to children’s online and mobile privacy; strengthened European Union privacy legislation (stemming from the European Union Data Protection Regulation and “right to be forgotten” cases); intensified health care data protections; and an increased focus on geo-location/GPS privacy issues.

On January 13, 2015, President Obama presented an update to the Administration’s 2011 Cybersecurity Legislative Proposal.  The updated proposal identifies three priorities:

1) enhancing cyber threat information sharing within the private sector and between the private sector and the Federal Government; 2) protecting individuals by requiring businesses to notify consumers if personal information is compromised; and 3) strengthening and clarifying law enforcement’s ability to investigate and prosecute cyber crimes.

Under the proposal, the Department of Homeland Security’s National Cybersecurity and Communications Integration Center (NCCIC) would play a key role in sharing cyber threat information received from private sector entities with the relevant federal agencies and other private sector organizations.  Companies that share information would also be eligible to receive “targeted” liability protection.  The proposal also aims to protect individuals by establishing a federal data breach notification scheme and creating a consumer privacy bill of rights.  In addition, the proposed legislation would expand existing penalties for cybersecurity crimes, strengthen law enforcement’s authority to deter the sale of certain spyware, and broaden courts’ authority to shut down networks engaged in criminal cyberattack activity.[73]

It is unclear whether the recent Republican takeover of Congress will affect the success or trajectory of legislative efforts in the privacy arena.  A Republican will now chair the Senate Select Committee on Intelligence, which some expect to chill NSA oversight, but the public response to the depth of government surveillance revealed in the last few years has generated support for reform from both parties.  Whether there will be enough bipartisan support to achieve federal legislation on these issues remains to be seen.

IV.   Criminal Enforcement

      A.   Fourth Amendment Developments

            1.   U.S. v. Rigmaiden

The multi-year saga of United States v. Rigmaiden, No. 08-cr-814 (D. Ariz.), recently came to an end.  In 2008, the government indicted Rigmaiden on 74 counts of mail and wire fraud, aggravated identity theft, and conspiracy.  The indictment alleged that Rigmaiden devised a scheme to obtain fraudulent tax refunds by filing electronic tax returns in the names of hundreds of people, both deceased and living.  The government was able to locate and arrest Rigmaiden after surveillance involving use of the StingRay, a device used to track the International Mobile Subscriber Identity (IMSI) of cellular devices.

In 2013, Rigmaiden filed a motion to suppress evidence relating to his wireless aircard, historical cell-site information, destination IP addresses, data from the security company that serviced Rigmaiden’s former apartment complex, the search of his apartment and computer, and the use of mobile tracking devices.  Citing earlier Ninth Circuit precedent, the district court concluded that Rigmaiden had no societally recognizable expectation of privacy in a computer or other equipment obtained through fraud–Rigmaiden had used fraudulent identities and credit cards to purchase his laptop and wireless aircard.  For the same reason, Rigmaiden had no reasonable expectation of privacy in the apartment and storage unit he rented with stolen and fraudulent identities.  The court rested this conclusion on Supreme Court authority recognizing that wrongful interests do not give rise to legitimate expectations of privacy.

Turning to the government’s use of electronic communications to isolate the location of Rigmaiden’s computer and aircard, the court declined to find a privacy violation where the government used such technology to find the devices being used to perpetrate an extensive fraudulent scheme through the defendant’s own use of electronic communications.  With respect to the government’s collection of historical cell-site data, the court found that even if Rigmaiden had a protected privacy interest in the aircard, the government’s collection of historical records (e.g., cell-site data and destination IP addresses associated with the aircard) pursuant to the Stored Communications Act (“SCA”) did not violate Rigmaiden’s rights.  The court also noted that, in any event, suppression is not an available remedy for an SCA violation.  Distinguishing recent Supreme Court authority, the court further concluded that using cell-site information to triangulate the location of Rigmaiden’s aircard, pursuant to the SCA, was not tantamount to attaching a GPS device to a person’s vehicle over an extended period of time.  With respect to the historical IP addresses and data obtained from the security company, the court found this information covered by the third-party doctrine.

Rigmaiden also challenged the warrant used to justify the use of a mobile tracking device to isolate his location, arguing that it was not supported by probable cause and that any searches conducted thereunder exceeded the warrant’s scope.  The ACLU filed an amicus brief in support of the scope argument.  The court found that the affidavit underlying the warrant supported probable cause and that the warrant was sufficiently particular with respect to the mobile tracking device to be used.  While the court acknowledged that the tracking warrant was no “model of clarity,” it nonetheless concluded that the warrant contained all sufficient elements.  Moreover, the court found it irrelevant that the warrant did not disclose that the mobile tracking device would capture data of other cell phones and aircards in the vicinity of the subject aircard.

Although the government conceded its failure to comply with Rule 41(f) (requiring service of the warrant on a defendant), the court explained that suppression is not the appropriate remedy where there is no causal connection between the government’s failure to comply with the rule and its location of the aircard.  The court rejected Rigmaiden’s argument that he was prejudiced by the lack of service, which rested on the theory that, had he been served, he would have fled and evaded capture.  Finally, to the extent any Fourth Amendment violation occurred in searching Rigmaiden’s apartment and computer (which the court concluded did not happen), the court found that the good faith exception applied.  Because Rigmaiden had filed many suppression-related motions during the case, the court ordered him not to file any additional motions of this sort.  Bringing this saga to an apparent end, Rigmaiden pled guilty on April 7, 2014.  The court sentenced him to 60 months’ imprisonment followed by three years of supervised release.

            2.   Cell Phones and Warrantless Searches

On June 25, 2014, the U.S. Supreme Court held that police generally may not, without a warrant, search digital information on a cell phone seized from an individual who has been arrested.  Riley v. California, ___ U.S. ___, 134 S. Ct. 2473 (2014).  Noting that a warrantless search is reasonable only if it falls within a specific exception to the Fourth Amendment’s warrant requirement, a unanimous Court refused to extend the “search incident to arrest” exception to searches of smart phones and other cell phones.  In so doing, the Court distinguished United States v. Robinson, 414 U.S. 218 (1973), in which it had upheld the search of a cigarette pack found on an arrestee’s person.  Although the precise impact of the Riley decision remains to be seen, at least one federal district court has suggested that the Supreme Court’s holding likely prohibits the warrantless search of a digital camera.  See United States v. Whiteside, No. 13 Cr. 576 (PAC) (S.D.N.Y. Sept. 30, 2014).

      B.   Identity Theft and Carding Crimes

            1.   United States v. Lazar (E.D. Va.)

While many identity theft crimes are motivated by financial gain, one notable case this past year was not.  In United States v. Lazar, No. 1:14-cr-213 (E.D. Va. June 12, 2014), the Department of Justice indicted Marcel Lehel Lazar, the hacker known as “Guccifer,” on charges of wire fraud, unauthorized access, aggravated identity theft, and cyberstalking.  Lazar allegedly broke into the email and social media accounts of several high-level government officials and celebrities, and was linked to the release of private photos and of portraits painted by former President George W. Bush.[74]  At the time of the indictment, Lazar was imprisoned in his native Romania.  It remains to be seen whether the United States will seek Lazar’s extradition after his release from Romanian prison.

            2.   United States v. Vega (E.D.N.Y.)

Recent cases have resulted in increasingly severe sentences for those found guilty of identity theft and carding crimes.  In a New York federal case, Roman Vega was sentenced to 18 years in prison for his role as co-founder of CarderPlanet, one of the Internet’s first marketplaces for stolen data.  See United States v. Vega, No. 07-cr-707 ARR (E.D.N.Y. Dec. 18, 2013).  Vega conspired to steal personal information, including credit card numbers, through sophisticated means such as hacking, and used his website to sell the stolen data.  Vega pled guilty in 2009 to conspiracy to commit access device fraud, in violation of 18 U.S.C. § 1029, and conspiracy to commit money laundering, in violation of 18 U.S.C. § 1956.  Commenting on Vega’s lengthy sentence, Mythili Raman, former Acting Assistant Attorney General of the Justice Department’s Criminal Division, explained, “Vega helped create one of the largest and most sophisticated credit fraud sites in the cybercrime underworld–a distinction that has earned him the substantial sentence he received today.”

      C.   Money Laundering

            1.   United States v. Dotcom (E.D. Va.)

The United States continues its efforts to extradite Kim Dotcom for his involvement with Megaupload, an online file-sharing site that the U.S. alleges was at the center of an “international organized criminal enterprise” engaged in racketeering, money laundering, and copyright infringement.  United States v. Dotcom, No. 12-cr-003 (E.D. Va.).  Dotcom remains in New Zealand, where in March 2014 the New Zealand Supreme Court denied a request by Dotcom and three colleagues also facing extradition for broad access to all U.S. evidence against them.  Finding that such extensive disclosure would delay the process, the court concluded that a summary of the U.S. case against Dotcom would suffice for purposes of an extradition hearing.  Meanwhile, it has been reported that the extradition hearing has been delayed until February 2015.

            2.   United States v. Faiella (S.D.N.Y.)

In another notable money laundering case, the DOJ filed charges against Robert M. Faiella, an underground Bitcoin exchanger, and Charlie Shrem, the CEO of a Bitcoin exchange company, BitInstant, for selling over $1 million in Bitcoins to users of “Silk Road,” an underground website that (among other things) enabled users to buy and sell illegal drugs anonymously.  United States v. Faiella, No. 14-cr-243 (S.D.N.Y.).  Law enforcement shuttered the original Silk Road website in October 2013 and has since been engaged in a cat-and-mouse game with new anonymous marketplaces, seizing Silk Road 2.0 in November 2014.[75]  The Bitcoin-related charges in Faiella allege that Mr. Faiella and Mr. Shrem conspired to commit money laundering and operated an unlicensed money transmitting business.  The charges also allege that Mr. Shrem violated the Bank Secrecy Act.

            3.   United States v. Liberty Reserve S.A. (S.D.N.Y.)

In 2013, the DOJ also filed charges against Liberty Reserve, a currency exchange that formerly operated out of Costa Rica, along with charges against seven individuals.  The charges allege conspiracy to commit money laundering, conspiracy to operate an unlicensed money-transmitting business, and operation of an unlicensed money-transmitting business.  United States v. Liberty Reserve S.A., No. 13-cr-368 (S.D.N.Y.).  The government alleges that Liberty Reserve laundered billions of dollars through 55 million transactions worldwide.  Liberty Reserve traded in virtual currency, which allegedly provided the anonymity sought by criminals: while individual users were asked to provide a name, address, and date of birth, fictitious information could be used to create an account.  The case is reported to be the largest online money laundering case in history, and officials dubbed it the launch of the “cyber age of money laundering.”  Along with filing criminal charges, law enforcement seized five domains and froze forty-five bank accounts.  Thus far, one defendant has pled guilty and received a five-year prison sentence.  Although Liberty Reserve was incorporated in Costa Rica, officials used a USA PATRIOT Act provision to target the entity.

      D.   Economic Espionage Act

            1.   United States v. Aleynikov (2d Cir.) and United States v. Agrawal (2d Cir.)

As we reported last year, the Second Circuit reversed the conviction of Sergey Aleynikov, a former computer programmer for a financial institution, who had been found guilty under the Economic Espionage Act (“EEA”) of stealing computer source code.  United States v. Aleynikov, 676 F.3d 71 (2d Cir. 2012).  The court found that the program embodying the stolen source code was not “produced for” or “placed in” interstate commerce because the company had no intention of licensing or selling the program.  Id. at 82.  Judge Calabresi’s concurrence noted his belief that Congress, in drafting the EEA, intended to capture the type of conduct at issue in the case.  In response, Congress passed the Theft of Trade Secrets Clarification Act (“TSCA”), on which we also reported last year.  The TSCA removed the requirement that the underlying trade secret be “used or intended for use in” interstate commerce.  Instead, the law now requires only that the trade secret be “related to” or “included in” a product produced for or placed in interstate or foreign commerce.

In 2010, a jury convicted defendant Samarth Agrawal of similar conduct: stealing computer code from his employer.  In August 2013, despite the Aleynikov precedent, the Second Circuit upheld Agrawal’s conviction.  United States v. Agrawal, 726 F.3d 235 (2d Cir. 2013), cert. denied 134 S. Ct. 1527 (2014).  In Agrawal, the defendant worked for Société Générale (“SocGen”), a French bank.  Like the defendant in Aleynikov, Agrawal took source code from his employer.  Agrawal printed the source code onto thousands of sheets of paper and took them to his home in New Jersey, intending to replicate SocGen’s trading system and sell it to a competitor for hundreds of thousands of dollars.  Although Agrawal raised challenges similar to the defendant’s in Aleynikov, the court distinguished the earlier case, writing that the “product” relied upon in Aleynikov was the proprietary source code while, in Agrawal’s case, the “product” was the publicly traded securities bought and sold by SocGen using the software embodying the stolen code.  The court found that the securities satisfied the jurisdictional requirement without raising the concerns present in Aleynikov (i.e., the fact that the proprietary software was not intended for use in interstate commerce).

Judge Rosemary Pooler authored a dissent, arguing that the majority ignored the narrow construction of the EEA set forth in Aleynikov in order to “retroactively apply Congress’s statutory change made during the interim period.”  Judge Pooler’s dissent noted that the government claimed at trial that the source code was the “product,” whereas for the first time on appeal the government looked to the securities bought and sold through use of the software.

            2.   United States v. Liew (N.D. Cal.)

On March 6, 2014, a federal jury convicted Walter Liew of charges brought under the Economic Espionage Act.  United States v. Liew, No. 11-cr-573 (N.D. Cal.).  The DOJ claims Liew is the first person to be convicted of violating the Economic Espionage Act in a jury trial.

Liew met with Chinese officials in the 1990s and agreed to procure chloride-route titanium dioxide (TiO2) technology for them.  TiO2 technology is used to create pigment in paint, plastics, and paper, and also has uses in aerospace materials.  The jury found that Liew, along with co-conspirators, stole TiO2 trade secrets from the DuPont chemical company and sold those secrets to state-owned companies in China.  The jury also convicted Liew on charges of obstruction of justice, witness tampering, filing false tax returns, and making false statements in connection with a bankruptcy filing.  In July 2014, Judge Jeffrey White sentenced Liew to fifteen years in prison and ordered him to pay over $28 million in forfeitures and restitution.

            3.   United States v. Wang Dong (W.D. Penn.)

In May 2014, a grand jury in Pennsylvania federal court indicted five Chinese military hackers for computer hacking, economic espionage, trade secret theft, and other offenses directed at six U.S. companies in the nuclear power, metals, and solar products industries.  United States v. Wang Dong, No. 14-cr-118 (W.D. Penn.).  The indictment drew an angry response from China’s Foreign Ministry.  The defendants are alleged, inter alia, to have conspired to hack into the U.S. companies, to maintain unauthorized access to computers, and to steal information that would be beneficial to Chinese competitors, including state-owned enterprises.  However, because the U.S. does not have an extradition treaty with China, it is unlikely that the defendants will be brought to the U.S. to face charges.

U.S. Attorney General Eric Holder reported that the Wang Dong indictment represents “the first ever charges against a state actor for this type of hacking,” but Holder signaled that it would not be the last of its kind, warning that the U.S. “will not tolerate actions by any nation that seeks to illegally sabotage American companies and undermine the integrity of fair competition.”  Echoing that sentiment, FBI Director James B. Comey promised to “use all legal tools at [the FBI’s] disposal to counter cyber espionage from all sources.”

            4.   United States v. Leroux (D. Del.)

In July 2013, the DOJ indicted four individuals for allegedly stealing trade secret information from a number of U.S. businesses.  United States v. Leroux, No. 13-cr-0078 (D. Del.).  The indictment alleges that the hackers stole popular Microsoft Xbox games such as “Call of Duty: Modern Warfare 3” and “Gears of War 3” before their release.  The hackers also allegedly broke into the servers of a U.S. Army contractor and accessed the software used to train Apache helicopter pilots.  Victims of the hacking ring include Microsoft Corporation, Epic Games Inc., Valve Corporation, Zombie Studios, and the U.S. Army.

The defendants were based in the United States and Canada; the government arrested the Canadian defendant when he attempted to enter the United States at the Lewiston, New York port of entry.  In September 2014, the Canadian defendant and one other defendant pled guilty to conspiracy to commit computer fraud and copyright infringement.  The DOJ asserts that the Canadian defendant’s guilty plea marks the first conviction of a foreign-based individual for hacking into U.S. businesses to steal trade secret information.  In January 2015, a third defendant pled guilty to the same conspiracy charge.  All three are to be sentenced in spring 2015.

      E.   Computer Fraud and Abuse Act

            1.   United States v. Nosal (N.D. Cal.)

In Nosal, the government alleged that David Nosal, an executive recruiter in San Francisco, stole trade secrets from his former employer in order to open a competing firm.  After the Ninth Circuit clarified the scope of the Computer Fraud and Abuse Act (“CFAA”) in United States v. Nosal, 676 F.3d 854 (9th Cir. 2012), which we discussed in our 2013 Outlook and Review, the case returned to the district court for trial.  United States v. Nosal, No. 08-cr-237 EMC (N.D. Cal.).  In April 2013, a jury convicted Nosal of conspiracy to gain unauthorized access to his former employer’s computer systems, along with other counts of computer intrusion and theft of trade secrets.  At the sentencing hearing, prosecutors asked the court to impose incarceration, arguing that “the sentence you give . . . will go through Silicon Valley like a bell.”  The district court sentenced Nosal to one year and one day in prison.  In addition to incarceration, the court recently ordered Nosal to reimburse his former employer over $800,000 in attorney’s fees and costs under the Mandatory Victims Restitution Act.  Nosal has again appealed to the Ninth Circuit but has yet to brief the issues on appeal.

            2.   Hacktivism

                    a.   Overview

“Hacktivism” refers to computer hacking for social or political causes, typically free speech or information access.  Supporters often liken “hacktivism” to protests or civil disobedience.  While the prosecution and subsequent suicide of Aaron Swartz (described in our 2013 Outlook and Review) led to closer media scrutiny of the criminal treatment of “hacktivism,” the incident has not prompted meaningful legal changes.  In June 2013, “Aaron’s Law” was introduced in the U.S. House of Representatives, with companion legislation in the U.S. Senate, representing a bipartisan proposal to reform the CFAA.  The bill has not been enacted, and the Justice Department continues zealously to prosecute hacking activity, whether activist or otherwise.

A common tool of “hacktivists” and other cybercriminals is “distributed denial of service,” or DDOS, attacks.  A DDOS attack is designed to cripple computer networks or servers by flooding them with irrelevant Internet traffic and rendering them inaccessible to legitimate users.  Another kind of attack, an SQL injection attack, exploits security vulnerabilities in software to steal information, such as personally identifying information, from targeted networks or servers.  Motives for such attacks vary.  Some are the means by which other crimes occur, such as a DDOS attack that locks up a company’s systems while wire transfers from its accounts are occurring, or an SQL Injection attack that steals information for the purpose of identity theft.  Others are politically or socially motivated–“hacktivist” activities, like the attacks that likely caused the state-owned Syrian Arab News Agency (“SANA”) to go down in the wake of an alleged August 2013 chemical attack in disputed areas outside of Damascus.  Hacking networks, such as the international group called Anonymous and its offshoots, which include LulzSec, often orchestrate this type of activist attack.
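
To illustrate the mechanics of an SQL injection attack in the simplest terms, the short Python sketch below uses a throwaway in-memory SQLite database with a hypothetical table and data; it shows how concatenating untrusted input into a query lets a classic payload rewrite the query and dump an entire table, while a parameterized query treats the same payload as inert data.

    # Illustrative contrast between an SQL-injectable query and a
    # parameterized one.  The database, table, and data are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

    user_input = "x' OR '1'='1"  # a classic injection payload

    # Vulnerable: string concatenation lets the payload rewrite the query,
    # turning it into ... WHERE name = 'x' OR '1'='1' (true for every row).
    vulnerable = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

    # Safe: a parameterized query binds the input strictly as data.
    safe = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

    print(vulnerable)  # [('alice', '123-45-6789')] -- the entire table leaks
    print(safe)        # [] -- the payload matches nothing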

                    b.   Rejection of Argument that “Hacktivism” Is Victimless Civil Disobedience

In November 2013, U.S. District Judge Loretta Preska of the Southern District of New York sentenced self-proclaimed “hacktivist” Jeremy Hammond to 10 years in prison and 3 years of probation.  Hammond, an affiliate of the international “hacktivist” network Anonymous who has a history of cybercrime, pled guilty to numerous computer hacking offenses.  These crimes included: stealing and/or deleting data from the computer servers of the private intelligence firm Strategic Forecasting Inc.; publishing tens of thousands of credit card numbers belonging to that firm’s clients and encouraging others to use the numbers to donate to charities; hacking into the Arizona Department of Public Safety and publishing the personal information of Arizona law enforcement agents and their families; and attacking several other entities, ranging from state and federal governmental agencies to police officers’ associations to private corporations.  Hammond had been indicted in 2012 along with four other defendants on charges of computer hacking and conspiracy to commit computer hacking.  Indictment, United States v. Hammond, No. 12-cr-185 LAP (S.D.N.Y. May 2, 2012).  Some of Hammond’s co-defendants were prosecuted and sentenced in the United Kingdom and remain under indictment in the United States.

Hammond and his lawyers argued that his actions were political activism, aimed at exposing law enforcement policies and surveillance practices that he opposes.  Speaking at his sentencing hearing, Hammond, who also had been active in the “Occupy” movement, claimed that his crimes were “acts of civil disobedience” intended “to expose and confront injustice and to bring the truth to light.”  Conceding he broke the law, Hammond proclaimed, “I believe that sometimes laws must be broken in order to make room for change.”  Hammond’s lawyers drew on historical “moments where resistance has led to important social change,” noting that actors like the founding fathers, Martin Luther King, and Nelson Mandela were “not always understood in the moment” and were often considered “criminals.”  His lawyers highlighted the issue of surveillance technology as “one of the defining issues of our times” and emphasized Hammond’s community activism and the lack of personal gain obtained from his crimes.

In a stern oral opinion, Judge Preska rejected the characterization of Hammond’s actions as victimless civil disobedience: “These are not the actions of Martin Luther King, Nelson Mandela, John Adams, or even Daniel Ellsberg . . . [Mr. Hammond’s] hacks harmed many individuals and entities with little or no connection to Mr. Hammond’s supposed political motivation for the crime.”  Judge Preska pointed out that Hammond’s hack of the Arizona Department of Public Safety shut down vital computer systems, such as the sex offender website and the Amber alert system, and that all of the attacked entities suffered financial and reputational harm.  Judge Preska cited a need for both individual deterrence (this was not Hammond’s first brush with the law for cybercrime) and general deterrence, stating that “there’s certainly nothing high-minded or public-spirited about causing mayhem.”  Judge Preska accepted the government’s recommended penalty, 10 years’ imprisonment, and imposed an additional 3 years’ probation.  See Sentencing Transcript, United States v. Hammond, No. 12-cr-185 LAP (S.D.N.Y. Nov. 13, 2013).

                    c.   Prosecution of the LulzSec Attacks

First-time offenders also have recently earned jail time.  Two college-student members of the Anonymous-affiliated hacker group LulzSec were each sentenced in the Central District of California to serve a year and a day in prison followed by one year of home detention, to complete 1,000 hours of community service, and to pay $605,633 in restitution.  The defendants both pled guilty in 2012 to conspiracy and cybercrime-related offenses in connection with their participation in hacking the computer systems of Sony Pictures Entertainment.  The defendants used an SQL injection attack against the Sony Pictures website that compromised the company’s computer network and resulted in personal information of more than 138,000 individuals being posted online.  In its sentencing memorandum in the case, the United States Attorney’s Office for the Central District of California described LulzSec’s stated goal in the attacks: to see the “raw, uninterrupted, chaotic thrill of entertainment and anarchy” and to provide stolen personal information “so that equally evil people can entertain us with what they do with it.”  See United States v. Rivera, No. CR 12-798-JAK (C.D. Cal. July 24, 2013).

Law enforcement officials outside the United States also targeted LulzSec-affiliated hackers connected with the Sony Pictures attack and other attacks.  Four defendants (two of whom had been Hammond’s co-defendants in the Southern District of New York prosecution discussed above) were sentenced in the United Kingdom in mid-2013 for cyberattacks on a number of private and government institutions, including attacks on Sony Pictures, the CIA, and the FBI.  Most were first-time offenders; their jail terms ranged from 1 year and 8 months to 2 years and 8 months.

                    d.   Prosecution of Dozens of Anonymous-Affiliated Hackers for Widespread DDOS Attacks

In October 2013, federal prosecutors filed a grand jury indictment in Virginia federal court accusing thirteen members of Anonymous of conducting a worldwide series of cyberattacks against government agencies, banks, anti-piracy organizations, individuals, and intellectual property law firms, among others.  For orchestrating these coordinated cyberattacks–part of a campaign dubbed “Operation Payback” that occurred between September 2010 and January 2011–the thirteen men were each charged with one count of conspiracy to intentionally cause damage to a protected computer.  The defendants allegedly synchronized DDOS attacks on each of the targets’ networks, causing their websites to shut down.  The attacked institutions, the indictment alleged, were those that “Anonymous claimed opposed its stated philosophy of making all information free for all, including information protected by copyright laws or national security considerations.”  An Anonymous flier quoted in the indictment described the motivation behind “Operation Payback”: “We [are] sick and tired of these corporations seeking to control the Internet in their pursuit of profit.  Anonymous cannot sit by and do nothing while these organizations stifle the spread of ideas and attack those who wish to exercise their rights to share with others.”  See Indictment, U.S. v. Collins et al., No. 13-cr-383 (E.D. Va. Oct. 3, 2013).  The government subsequently dismissed all charges against one defendant, and the other twelve defendants pled guilty.  Thus far, the court has sentenced eight of those defendants to time served and a period of supervised release.  The court has deferred ordering restitution until the remaining defendants are sentenced.

One of the defendants who pled guilty, Dennis Owen Collins, was also one of fourteen purported Anonymous hackers indicted in 2011 in the Northern District of California on various charges related to the 2010 cyberattack of PayPal Inc.’s website.  United States v. Collins, No. 11-cr-471 DLJ (N.D. Cal.).  All fourteen accused initially pled not guilty.  But in December 2013, Collins’s thirteen co-defendants entered into plea agreements with prosecutors, in which they admitted to participating in DDOS cyberattacks against PayPal in December 2010 as part of the hacktivist group Anonymous.  The plea agreements describe the background of the coordinated attacks, which Anonymous called “Operation Avenge Assange.”  In November 2010, WikiLeaks released a large trove of classified United States State Department cables on its website.  In reaction to the release of the classified information, and citing violations of the PayPal terms of service, PayPal suspended WikiLeaks’ accounts.  This meant WikiLeaks could no longer receive donations from supporters via PayPal.  Anonymous claimed to have executed the DDOS attacks in retribution for PayPal’s termination of WikiLeaks’ donation account.  See U.S. Department of Justice, U.S. Attorney’s Office for the Northern District of California, Press Release: Thirteen Defendants Plead Guilty For December 2010 Cyberattack Against PayPal (Dec. 6, 2013).

In October 2014, the court entered judgment against Collins’s thirteen co-defendants; each has since been sentenced to one year of probation and ordered to pay $5,600 in restitution.  Collins has maintained his plea of “not guilty” and awaits a trial date.  Meanwhile, Senator Patrick Leahy has introduced a bill in the U.S. Senate–entitled the “Personal Data Privacy and Security Act of 2014” (S. 1897)–that would strengthen the CFAA by making attempted hacks and conspiracies to hack subject to the same punishment as successful intrusions, while clarifying that mere violations of terms of service are not actionable.

                    e.   Computer Crimes and Venue

A jury in New Jersey federal district court convicted Andrew Auernheimer of violating the CFAA, and the court sentenced him to 41 months’ imprisonment.  Auernheimer was found to have participated in an attack on AT&T servers in order to steal email addresses associated with iPad users.  Auernheimer, represented by the Electronic Frontier Foundation, appealed to the Third Circuit Court of Appeals, arguing that the New Jersey venue was improper because, at all relevant times, he and his co-conspirator were in Arkansas and San Francisco, respectively, and the affected servers were in Dallas and Atlanta.  The case received broad attention from various amici regarding the constitutionality of the charges against Auernheimer.

In April 2014, the Third Circuit vacated Auernheimer’s conviction on the basis of improper venue.  See United States v. Auernheimer, 748 F.3d 525 (3d Cir. 2014).  In so doing, the Third Circuit rejected the district court’s conclusion that venue was proper because Auernheimer’s disclosure of the email addresses of about 4,500 New Jersey residents affected them in New Jersey and violated New Jersey law.  The Third Circuit cautioned: “As we progress technologically, we must remain mindful that cybercrimes do not happen in some metaphysical location that justifies disregarding constitutional limits on venue.  People and computers still exist in identifiable places in the physical world.”  Id. at 541.

      F.   The Year Ahead

As cybercrime shows no signs of slowing in 2015, law enforcement officials have signaled that they will respond with increasingly robust enforcement tactics.  On December 4, 2014, shortly after the revelation that Sony Pictures had been the target of a sophisticated cyberattack, the Department of Justice announced the launch of a new Cybersecurity Unit, to be housed within the DOJ’s existing Computer Crime and Intellectual Property Section.  Assistant Attorney General Leslie Caldwell explained that “[g]iven the growing complexity and volume of cyberattacks, as well as the intricate rubric of laws and investigatory tools needed to thwart the attack, the cybersecurity unit will play an important role in this field.”  She also emphasized the importance of a “robust enforcement strategy as well as a broad prevention strategy.”

The Department of Justice has recognized that prevention depends in part on the ability of U.S. companies to share information with one another and the government concerning rapidly evolving cyber threats.  However, as the DOJ has emphasized, this information sharing “must occur without contravening federal law [e.g., the Stored Communications Act, 18 U.S.C. § 2701 et seq.] or the protections afforded individual privacy and civil liberties.”  In an effort to facilitate lawful information sharing, the DOJ issued a white paper in May 2014, which articulates the DOJ’s interpretation of the Stored Communications Act as permitting providers to share aggregated non-content data with governmental entities, so long as that data does not reveal information about a particular customer or subscriber.

V.   International Developments

      A.   European Union

            1.   Developments at the European Union Level

                    a.   Draft EU Data Privacy Regulation

The EU Data Privacy Regulation is intended to succeed the operative 1995 Data Privacy Directive (Directive 95/46/EC, hereinafter “EU Data Privacy Directive”).  It was initially intended for enactment before the end of 2014, but the legislative process was significantly delayed by the 2014 European Parliament elections and the reconstitution of the EU Commission.  Thus, the new regulation has not yet been enacted and likely will not come into effect before 2017.

Two particularly important issues discussed during the legislative process involve exemptions for the public sector and rules concerning data portability.  Core substantive elements of the current proposed regulation include the following:

  • The draft regulation would implement a “right to be forgotten” (also officially called the “right to erasure”) whereby personal data must be deleted when an individual no longer wants his or her data to be processed by a company and there are no legitimate reasons for retaining the data.  This part of the draft regulation may impose significant burdens on affected companies, as creating procedures for selective data destruction is often costly.
  • The draft law also would establish a right to data portability, which is intended to make it easier for individuals to transfer personal data from one service provider to another.  Upon request, individuals are entitled to obtain personal data that they have provided to a business in an interoperable and commonly used format.  This provision has also come under particular scrutiny due to its potential to significantly increase companies’ administrative burdens.
  • Privacy by design and privacy by default would be established as essential principles of the new EU data protection rules.  These principles would require data controllers to design data protection safeguards into their products and services right from the inception of the product development process.  Privacy-friendly default settings also would be standard.
  • Data controllers and processors would be required to designate a Data Protection Officer (“DPO”) in certain circumstances.  In the age of cloud computing, where even very small controllers can process large amounts of data through online services, the applicable threshold may make the designation of a DPO mandatory even for relatively small companies.
  • Biometric and genetic data would be expressly defined as special categories of personal data.  Biometric data would be defined as any personal data relating to the physical, physiological, or behavioral characteristics of an individual that allow unique identification of the individual–e.g., facial images or fingerprints.
  • The draft regulation also expressly sets out the requirements for Binding Corporate Rules (“BCRs”) to enable the free transfer of data within global organizations to countries outside the EU.  A national supervisory authority would approve BCRs as a means of lawful intra-group data transfer, provided that the BCRs are legally binding and apply to, and are enforced by, every member within the controller’s group of affiliates (including employees) and external subcontractors.  BCRs also must expressly confer enforceable rights on data subjects and fulfill a set of minimum requirements, including specification of their legally binding nature and general data protection principles applicable within the particular group of companies.

These requirements would be supplemented by a far stricter regime of fines for violations.  Standard fines for data privacy violations ranging from 1% to 5% of a company’s annual worldwide turnover have been discussed.  Because the draft law would apply extraterritorially, companies located outside the EU also would have to take these potential fines into account.

On the positive side, the draft regulation would replace 28 different national data privacy laws with a single, directly applicable EU regime.  The current EU Data Privacy Directive does not have direct effect and, therefore, was implemented by 28 different national laws–which gave rise to differences in scope, interpretation, and enforcement.  The new draft regulation also would create a “one-stop shop” for businesses concerned with privacy law compliance, because a company would be able to interact with the various national supervisory authorities through one lead authority.

                    b.   Review of Safe Harbor Agreement

As discussed above in Section II.A.2, the EU-U.S. Safe Harbor Agreement (“Safe Harbor”) enables compliant data transfers between EU Member States and the United States provided that the U.S. company receiving the data adheres to certain minimum data privacy standards.  This adherence is ensured via a process of self-certification.  Following disclosures of extensive collection of EU citizens’ data by U.S. intelligence authorities, the current Safe Harbor regime came under scrutiny by EU policymakers.

Specifically, the EU Commission issued a set of recommendations designed to implement stricter Safe Harbor rules.  The goal is to further increase the level of data protection for EU citizens.  The conflict between data privacy and surveillance activities can be particularly sharp with regard to the Safe Harbor rules, because they contain exceptions for national security purposes.  Hence, personal data legally transferred to the United States may be disclosed by U.S. companies to intelligence agencies on the basis of national security interests.

The key recommendations are as follows:

  • Privacy policies of self-certified companies as well as the privacy provisions in their agreements with sub-contractors should be disclosed publicly.
  • Privacy policies of self-certified companies should include information about the extent to which public authorities in the United States are allowed to collect and process personal data transferred under the Safe Harbor.
  • Data transfers under the Safe Harbor’s national security exception should take place only to the extent strictly necessary and proportionate.
  • The Department of Commerce should enforce the Safe Harbor framework by means of investigations in order to ensure that self-certified companies comply with privacy standards.
  • The Department of Commerce should inform EU data protection authorities when there are concerns or complaints about an entity’s Safe Harbor compliance.

The EU Commission has asked the U.S. Department of Commerce to provide feedback on its proposals.  In the meantime, the European Parliament passed a resolution in March 2014 calling for the immediate suspension of the Safe Harbor regime; this resolution had no immediate legal effect, but it may be indicative of sentiment among European policy makers.  Should negotiations on the EU Commission’s proposed amendments fail, resulting in a suspension of the Safe Harbor, the business community on both sides of the Atlantic could face substantially greater hurdles to compliant cross-border data transfers.

Additionally, the European Court of Justice received a request from the Irish High Court for a preliminary ruling on the compatibility of the Safe Harbor framework with Article 8 of the Charter of Fundamental Rights of the EU.  Although the Irish court held in its June 2014 ruling that data protection authorities are in principle bound by the Safe Harbor Agreement as long as it remains in place, the court considered a review of the framework’s compatibility with the Charter of Fundamental Rights necessary.  Companies should therefore not rely solely on Safe Harbor certifications, but should take additional measures before transferring data to the United States.  In this context, German data protection authorities recommend, for instance, checking data importers’ policies for potential conflicts with Safe Harbor principles, verifying whether individuals may exercise their information rights, and confirming that onward transfers to third parties are covered by data transfer agreements or sufficient consent.

                    c.   Opinions Issued by the Article 29 Working Party

The Article 29 Working Party consists of representatives of national data privacy enforcement agencies, the EU Commission, and other EU institutions.  It has an advisory status and is regarded in Europe as an independent opinion leader in EU data privacy enforcement.  The Working Party’s opinions are frequently relied upon as interpretive guidance by national courts and the EU Commission.

In February 2013, the Article 29 Working Party published an opinion addressing key data privacy risks in the context of mobile apps (WP 202).[76]  It found that mobile apps can raise particular privacy concerns due to their ability to collect large quantities of personal data from a user’s device, including contact information and location data.  The Working Party further noted that certain data collection without user consent can violate EU data privacy laws and that mobile apps must provide sufficient information about what data they process in order to allow for meaningful user consent.

In April 2013, the Article 29 Working Party adopted an explanatory document (WP 204) concerning BCRs for data processors.[77]  These BCRs ensure that data transfers by a data processor who acts on behalf of his clients and in accordance with their instructions are compliant with requirements for the transfer of data outside the EU.  The explanatory document aims at providing further guidance to companies on the required content for data processor BCRs.

The Article 29 Working Party also adopted an opinion concerning the use of cookies and similar tracking technologies for various purposes (WP 208 from October 2013).[78]  Based on the so-called e-Privacy Directive, 2002/58/EC, the opinion describes a framework for compliant cookie practices on websites across all EU Member States.  It places consent at the heart of the relevant compliance measures, recommending that cookie consent mechanisms provide specific information about the cookies’ purposes, obtain consent before data processing starts, explain precisely how users can actively signify consent, and offer users a real choice whether to accept cookies.

Furthermore, in November 2014, the Article 29 Working Party adopted Guidelines Concerning the Implementation of the European Court of Justice’s ruling regarding the “Right to be Forgotten.”[79]  As a key requirement, the Working Party demands that delisting decisions be implemented in a way that guarantees the effective and complete protection of data subjects’ rights.  Delisting therefore must not be limited to EU domains but must extend to all relevant domains (e.g., “.com” domains).

                    d.   Service Provider Data Breach Notification Obligations

In August 2013, Regulation No. 611/2013 came into force.  This regulation seeks to harmonize the standards for notifications of personal data breaches.  A personal data breach is defined under EU law (Directive No. 2002/58/EC) as a breach of security resulting in accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data processed in connection with the provision of a publicly available electronic communications service in the EU.  A notification obligation is imposed on providers of publicly available electronic communication services, i.e., telecom companies and Internet service providers.  When a data breach occurs, the affected service provider must notify the competent national authorities within 24 hours of the detection of the breach, where feasible.  In addition, the individuals concerned must be notified without undue delay if the personal data breach is likely to adversely affect the personal data or privacy of the individual.

                    e.   Proposed EU Cyber Security Directive

In March 2014, the EU Parliament assented to a proposed directive governing network and information security across the EU (the “Proposed Cyber Security Directive”).  Among other things, the directive would seek to establish network information security strategies and common requirements for technical and organizational measures relating to IT security risk management.  Another core element of the proposal is an EU data security network that interlinks various authorities carrying out cyber security tasks.  The Proposed Cyber Security Directive also would establish a stricter breach notification requirement for critical infrastructure operators such as energy and transport companies, banks, and health care service providers.  Compliance requirements for businesses would be enforced with audits and inspections, binding instructions, and sanctions.

To become law, the Proposed Cyber Security Directive requires the consent of the EU Member States, which is currently expected to be granted during the course of 2015.  Member States would then be granted a transition period of approximately 18 months to transpose the Directive into national law.  Currently, the most disputed issues concern the degree of information exchange within the envisaged EU data security network and the exact scope of the law, i.e., which industries in particular should be made subject to the relevant obligations.

                    f.   Google Held Subject to EU Data Privacy Law and the Right to Be Forgotten

In May 2014, the European Court of Justice held that there is a right to be forgotten that individuals may invoke against operators of search engines.[80]  The case was brought by a Spanish citizen seeking the removal or concealment of information related to him available through the Google/Google Spain search engine.  The search results in question were links to a newspaper article which, 16 years prior, had announced a real estate auction following attachment proceedings for the recovery of social security debts owed by the individual.

As a threshold matter, the court cleared the path for the application of EU data privacy law when it held that Google’s subsidiary in Spain qualifies as an “establishment” under the EU Data Privacy Directive, even though the subsidiary has only marketing functions and is not engaged in actual data processing (which occurs outside the EU).  The court held that it is sufficient that the subsidiary is intended to promote and sell, in the Member State in question, advertising space offered by the search engine in order to make the services offered by the search engine more profitable.

The court then held that, by means of automatic, constant, and systematic searches for information published on the Internet, the search engine operator collects data within the meaning of the EU Data Privacy Directive.  It further held that activities performed by indexing programs–such as retrieving, recording, organizing, and disclosing information available on the Internet–qualify as data processing as far as personal data of individuals is concerned.  Because the search engine operator determines the purposes and means of these processing activities, the court considered the search engine operator to be the data controller and the entity responsible for data privacy-related claims of affected individuals.

As to the actual scope of the right to be forgotten, the court based its judgment primarily on a balance of interests between an individual’s right to privacy and the protection of personal data, on the one hand, and the legitimate interest of the public in having access to that information, on the other.  The outcome of this balancing exercise may vary in individual cases depending on the nature of information in question, its sensitivity for the individual’s private life, and the interest of the public in having the information, which is largely determined by the role played by the individual in public life.  The court also highlighted the importance of the information’s staleness.  Even accurate data may, in the course of time, become inadequate, excessive, or no longer relevant to the public interest; hence, its processing by the search engine operator may become incompatible with the EU Data Privacy Directive.  In such a case, erasure of relevant links and information displayed on the list of search results is logically required by the EU Data Privacy Directive, according to the court.

Google responded by establishing a process for EU persons to request the erasure of relevant search results linked to individuals’ names, and it began responding to such requests in June 2014.  Microsoft, which operates its Bing search engine in Europe, followed suit in July by publishing a form that would allow EU persons to request erasure.  The EU’s Article 29 Working Party has requested input from search companies like Google, Microsoft, and Yahoo! to finalize guidelines addressing implementation of the European Court of Justice’s decision.

            2.   France

In France, the protection of personal data is governed by the Loi Informatique et Libertés of January 6, 1978 (hereinafter the “Data Protection Act”), which is implemented by the Commission Nationale de l’Informatique et des Libertés (“CNIL”), the national data protection authority.  The Data Protection Act applies to personal data, defined as any data allowing for the direct or indirect identification of an individual (e.g., name, telephone number, photo, national identification card number, email address, or family status) or covering “sensitive information” (e.g., health, sexual orientation, political affinity, or membership in a trade union).

                    a.   International Data Transfers

In principle, any transfer of personal data outside of the EU is prohibited unless adequate protection of personal data has been implemented by the recipient of the data.

Because the United States has not been deemed to provide a sufficient level of protection of personal data, the transfer of personal data from France to the United States raises thorny issues, notably in the context of discovery requests.  To be legitimate, a transfer of data outside of France must comply with the requirements set forth at the European level, as implemented under French law.  CNIL deliberation No. 2009-474 of July 23, 2009 governs the transfer of data to the United States in the context of discovery requests, which must be accomplished either via Binding Corporate Rules or specific contractual clauses, or through transfers to entities that have been Safe Harbor certified (see Section V.A.1.b).  Where data is transferred to U.S. judicial authorities, the CNIL requires that such authorities issue court orders to ensure a sufficient level of protection of the transferred personal data.

The ongoing negotiations of the transatlantic trade and investment partnership (TTIP) between the European Union and the United States raise major concerns in France as to whether such negotiations will cover, and consequently ease, the transfer of personal data.

                    b.   CNIL Enforcement Actions

In 2014, the CNIL pursued a number of enforcement actions, several of which resulted in sanctions of up to €150,000, the maximum amount that may be imposed.

In particular, the CNIL fined Google Inc. €150,000 for failure to comply with the requirements of the Data Protection Act.  In this matter, the CNIL took issue with, among other things, Google’s “potentially unlimited combination of users’ data” across different Google services.  Aware that the fine was insignificant compared to Google’s revenues, the CNIL broadly publicized it in an attempt to affect the company’s public image.  Following the investigations performed by EU data protection authorities, the Article 29 Data Protection Working Party decided “to help Google” with its compliance efforts and adopted a compliance package of dedicated measures.  This package aims to offer specific and practical measures that Google could implement quickly to meet the requirements of the European data protection framework.  The package was presented to representatives of Google on July 2, 2014, during a meeting held in Paris in the presence of five EU data protection authorities.  In a letter to Google dated September 23, 2014, the Article 29 Working Party indicated that it may also consider issuing guidance on specific issues to the entire industry at a later stage.

In addition, France (alongside Germany) has urged Google to put an end to its allegedly anticompetitive practices and to foster transparency in the ranking of websites.  The two countries seek to have the European Commission adopt a more stringent regulatory framework designed to take a tougher line on Google (as well as on the other so-called GAFA companies), either in its antitrust investigation into the company or through the introduction of laws to curb its reach.  A draft motion now calls for the European Commission to consider the “unbundling” of search engines from other commercial services as one possible remedy for Google’s dominance, in a manner similar to the unbundling of electricity, gas, and telecoms networks.  In theory, unbundling would mean preventing Google’s other commercial services (such as YouTube and Google Shopping) from benefiting from the company’s dominance in search.

Other recent CNIL decisions compelled two French banks to comply with the Data Protection Act, following malfunctions in their systems for recording entries in the National Register of Household Credit (the so-called “FICP”) and breaches of the confidentiality of their clients’ banking data.  Over the past two years, the CNIL had recorded several complaints from the banks’ clients arguing that certain payment incidents had been wrongly registered on the FICP, or that they should have been removed from the register.  Certain clients also received confidential information about other clients of the banks.  One of the two banks was sanctioned with an official warning, and the other was placed under formal notice to comply with applicable legal requirements.

                    c.   Social Networking

On November 7, 2014, the French Commission on unfair terms (Commission des clauses abusives) issued a recommendation advocating the removal of several unfair terms commonly included in contracts for so-called “social networking services,” notably in connection with data protection.  For instance, it recommends removing clauses under which the user implicitly agrees to the processing of his or her personal data by the service provider, clauses organizing the transfer of personal data to undesignated third parties without any formal consent from the user, and clauses providing for a longer retention period than that permitted by the CNIL.  Interestingly, the Commission on unfair terms takes the view that the user of a social network still qualifies as a consumer even if the user participates in the functioning of the network (which otherwise could have qualified the user as a service provider).  The Commission also breaks new ground in asserting that so-called “social networking service agreements” are not free of charge, since these agreements rely on the processing of personal data to enable targeted advertising, which should thus be analyzed as compensation of potential value to the service provider.

                    d.   Right to Be Forgotten

In the aftermath of the decision issued by the Court of Justice of the European Union (CJEU) dated May 13, 2014,[81] which first recognized a “right to be forgotten” on the Internet, French courts have started ruling on delisting requests from plaintiffs seeking protection of their personal data.

In a decision dated December 19, 2014, the Paris High Court ruled in favor of a plaintiff who sought to have Google delist an article discussing the plaintiff’s 2006 conviction for fraud, which came up as one of the first results of a Google search on her name.  Interestingly, the plaintiff did not seek to have the article itself removed from the Internet, but rather to limit its availability online, because it jeopardized her job search and because she had already been forced to resign from her electoral mandate following a tip-off from an anonymous source.  She also argued that the personal data yielded by a search on her name was now inadequate and excessive considering that her conviction was more than eight years old.  Finally, the plaintiff asserted that the conviction was not even mentioned on the publicly accessible version of her criminal record.

In its decision, the Paris High Court applied the CJEU’s ruling and determined that the plaintiff had legitimate grounds to petition for the delisting of the incriminating search results.  It thus ordered Google Inc. to remove the links to the disputed article within ten days.  In so ruling, the High Court rejected Google’s argument that the public had a legitimate interest in information about the plaintiff’s conviction.  While several other French courts have, since the CJEU’s ruling, urged Google to remove disputed articles containing discriminatory statements, this decision is the first enforcement of the “right to be forgotten” in France.

            3.   Germany

                    a.   Regulatory Enforcement and Developments

The German courts and data protection authorities also have been very busy recently.  In November 2013, a Berlin court held various clauses in Google’s terms of use and data privacy statements to be void.  In addition, in April 2013, Google was fined approximately €145,000 for the unintended collection of certain data during its Google Street View recording operations.

In September 2014, the Higher Administrative Court of Schleswig-Holstein (Schleswig-Holsteinisches Oberverwaltungsgericht) held that operators of Facebook fanpages are not responsible for the further processing of user data by Facebook.  The judgment was delivered on an appeal brought by the regional data protection authority in Schleswig-Holstein, which had initially ordered a local chamber of commerce to deactivate its Facebook fanpage.  The Higher Administrative Court rejected the notion that the fanpage operator had control over the data, because the operator had no influence on the technical and legal aspects of the data processing performed by Facebook itself.  Nor could data control be derived from the fact that Facebook provides statistical information to fanpage operators.  As a result, the data protection authority lacked the power to order the fanpage operator to deactivate the fanpage.

Interestingly, Facebook had already obtained a significant favorable ruling in April 2013 before the same court concerning the applicable data privacy law.  This decision held that the European Facebook network was validly governed by Irish data privacy laws and fell under the competence of the Irish data privacy regulators.  (Facebook’s European headquarters are in Ireland.)  This division of competence also was true with respect to regulations affecting Facebook’s users in Germany and other EU member states.  The Court therefore revoked data privacy orders imposed against Facebook by a German regulator who had requested that Facebook implement a feature through which German Facebook users could anonymously use the Facebook network.

In September 2014, the Higher Administrative Court of Lower Saxony (Niedersächsisches Oberverwaltungsgericht) provided important guidance regarding the practical implementation of CCTV surveillance.  The court balanced the legitimate interests of individuals subject to CCTV surveillance against the CCTV operator’s right to undisturbed possession of the protected property and legitimate interest in preventing abstract and concrete dangers of crime.  In the case under consideration, the cameras turned on only when they detected movement, were pointed at a fixed observation area, and did not have a zoom function.  Recordings were immediately transferred to a password-protected black box (with no live monitor observation), and any recordings were deleted automatically after ten days.  In addition, signs were installed indicating that CCTV was in operation.  Consequently, the court held that these concrete CCTV measures had not severely intruded into the privacy of individuals, because it had not been possible to recognize faces or generate movement profiles.  With regard to storage periods, the court also held that a storage period of up to 10 working days (rather than the three days typically requested by German data protection authorities) would be reasonable in light of the objective of detecting crime and the potential absence of relevant employees due to holidays.

In another important decision from August 2013, the Higher Regional Court of Hamm (Oberlandesgericht Hamm) decided that YouTube did not have to remove a video clip revealing information about a German diplomat who had escaped prosecution for causing a car accident in Moscow based on diplomatic immunity.  The diplomat was ultimately sentenced by a German court, and the Higher Regional Court found that, in this case, the public interest in the information outweighed the diplomat’s privacy interests.

On the regulatory front, in June 2013, the Bavarian data privacy authority fined an employee of a company for using “open” email distribution lists.  The employee had unintentionally sent mass emails to customers with the recipients’ addresses disclosed in the “to” and “cc” lines of the email, enabling every recipient to obtain personal data (e.g., name and email address information) of other customers, which in the regulator’s view constituted a data privacy violation.
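As a technical aside, this type of exposure arises because every address placed in a message’s “to” or “cc” header is delivered to all recipients, whereas the actual delivery (envelope) recipients need not appear in the headers at all.  The following minimal Python sketch (with hypothetical addresses and mail server, not drawn from the Bavarian case) shows how a mass mailing can be delivered without disclosing the recipient list:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "newsletter@example.com"
    msg["To"] = "newsletter@example.com"  # visible header shows only the sender's own address
    msg["Subject"] = "Customer notice"
    msg.set_content("Dear customer, ...")

    customers = ["a@example.com", "b@example.com"]  # hypothetical recipient list

    with smtplib.SMTP("mail.example.com") as smtp:
        # Passing the real recipients only as envelope addresses delivers the
        # message without exposing any customer address in the headers; listing
        # them in msg["To"] or msg["Cc"] would disclose the entire list.
        smtp.send_message(msg, to_addrs=customers)

In effect, this is the programmatic equivalent of using the “bcc” line rather than the “to” or “cc” lines.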

Additionally, a data privacy regulator in Lower Saxony prohibited private companies from copying personal identification cards and passports for data privacy reasons.  The decision was appealed but upheld by the competent appellate court (Administrative Court of Hannover) in November 2013.  Copying customer identification documents is a widespread practice in many industries; if other regulators share the very strict view of the Lower Saxony data protection authority, the results could significantly impact businesses operating in Germany.

Finally, in December 2014, the data protection authority of Rhineland-Palatinate closed an investigation concerning alleged data privacy violations by the German insurance company Debeka.  The regulator had launched the investigation in response to assertions that Debeka employees had illegally acquired personal data and information about public service candidates in order to gather and use information on prospective insurance clients without the consent of the individuals concerned.  Debeka agreed to a settlement of €1.3 million and to fund university research on data privacy protection with a further €600,000.  This settlement significantly topped the €1.1 million fine imposed on Deutsche Bahn in 2009 for the mass screening of 173,000 employees, and it is a strong signal that German regulators are willing to enforce data privacy laws rigorously.

                    b.   Internal Investigations and Email Reviews

On May 27, 2013, the Administrative Court of Karlsruhe (Verwaltungsgericht Karlsruhe) issued an important judgment that brings more certainty to the process of reviewing emails during an internal investigation or in similar circumstances.  Under German law, a provider of telecommunications services may be held criminally liable for certain violations of user data privacy.  Commentators have debated whether an employer’s review of an employee’s emails violates this provision, which is part of the German Act for Telecommunications Services.  The Administrative Court of Karlsruhe ruled that an employer cannot be classified as a provider of telecommunications services under the Act, because the Act is not designed to regulate the internal relationship between employers and employees.  Despite this ruling, however, hurdles to the review of employee emails remain.  In particular, German data protection laws stipulate that an email review is permitted only if it is necessary and proportionate.  Moreover, where there is an investigation of an alleged criminal offense, concrete grounds for suspicion must exist with regard to the specific employee whose electronic data is the subject of the review.

                    c.   Non-Enactment of EU Directive

In April 2014, the European Court of Justice declared EU Directive No. 2006/24 to be incompatible with fundamental human rights.  That directive attempted to harmonize different national laws for the storage of telecommunications data for the purpose of criminal investigations.  The European Court of Justice decided that the storage of communication data as foreseen by the directive disproportionately infringed upon privacy rights.  In particular, the court held that the directive did not sufficiently distinguish between the seriousness of crimes, did not appropriately distinguish between separate data categories for the purpose of determining storage periods, and did not provide for sufficient preconditions for data access by national authorities.  The German government had decided to wait for the European Court of Justice’s decision before transposing the directive into national law.  Following the court’s nullification of the directive, there is now a debate in Germany about whether a national initiative for the storage of telecommunications data should be pursued.

                    d.   Draft Bill on Standing of Consumer Associations in Data Privacy Proceedings

The German legislature intends to strengthen enforcement of data privacy laws by allowing consumer rights associations to bring actions for injunctions and to demand removal of infringements on behalf of consumers.  The relevant changes would be included in the German Act Governing Collective Actions for Injunction (Unterlassungsklagengesetz–UKlaG).  The draft has been heavily criticized for creating additional burdens for businesses, the risk of parallel decision-making, and a loss of legal certainty, particularly given that consumer protection organizations already often demand deletion of data collected in breach of data privacy laws.  Online service providers ultimately might be required to delete relevant user data even where individual users do not object to data processing by a particular company.  It remains in doubt whether and in what form the draft law will eventually be enacted, and whether collective enforcement will in fact play a significant role in German data privacy law.

            4.   United Kingdom

The Information Commissioner’s Office (“ICO”) has remained active following a marked increase in activity in 2012, and in July 2014 it was reported that it had received a record number of complaints in the preceding financial year.

                    a.   ICO Activity and Enforcement Actions

The ICO’s recent activities have included clamping down on unsolicited text messages and calls, and continuing ongoing dialogues on state surveillance in light of the Edward Snowden revelations and recent controversies relating to the NHS’s handling of confidential medical records.

While fines issued by the ICO had previously been limited to local authorities and financial services firms, in January 2013, Sony’s European subsidiary was fined 250,000 GBP for a “serious breach” of the Data Protection Act 1998 (the “Data Protection Act”) after failing to protect the personal details of PlayStation Network users.  In 2011, hackers had accessed the names, email and postal addresses, dates of birth, and passwords of millions of customers, and the ICO held that the hack could have been prevented if Sony had used more up-to-date software.

In line with its current priorities, in December 2014 the ICO issued a 70,000 GBP fine to the organizers of Manchester’s annual festival for sending unsolicited text messages, and fined a boiler insurance firm 90,000 GBP for continuously making nuisance sales calls to vulnerable people.  In August 2014 it also raided a call center in Llanelli, Wales, suspected of being connected to spam text operations.

The ICO recently found that Caerphilly Council in Wales had breached the Data Protection Act in ordering the covert surveillance of an employee suspected of fraudulently claiming to be sick, holding that the council did not have sufficient grounds to undertake the surveillance, particularly as it began only four weeks into the employee’s sickness absence, and that no other measures were taken to discuss the employee’s absence before the covert surveillance commenced.[82]

In addition, the ICO recently commented that users of Google Glass (and other similar wearable technology) are subject to the same rules as CCTV operators, meaning that in some situations the Data Protection Act could be breached.  In August 2014, the ICO warned barristers and solicitors to keep personal information secure (particularly paper files) following numerous breaches involving the legal profession reported to the ICO.[83]  Further, in November 2014, Grampian Health Board in Scotland was ordered to take action to ensure better protection of patient information after six data breaches in a thirteen-month period involving the abandonment of papers containing sensitive personal data in public areas.

                    b.   ICO Best Practice Guidance

The ICO issued an updated CCTV Code of Practice, acknowledging that “[s]urveillance cameras are no longer a passive technology that only records and retains images, but is now a proactive one that can be used to identify people of interest and keep detailed records of people’s activities…”  It warned that surveillance cameras should be used only as a necessary and proportionate response to a “real and pressing problem.”  The ICO also issued new guidance for drone operators, stating that drone pilots should protect the privacy of individuals when flying, and that if a drone has a camera, its use could pose a “privacy risk to other people” and be covered by the Data Protection Act.

            5.   Other European Nations

In November 2014, the Dutch government published the latest in a series of draft proposals for a new law regarding telecom data retention.  This newest proposed bill follows the European Court of Justice’s determination that the European Data Retention Directive (2006/24/EC) was invalid.  In response to the European Court’s judgment, this new proposal introduces several additional requirements for law enforcement agencies to gain access to the retained telecommunications data, although it leaves the existing set of regulations largely intact.  For instance, while telecom data providers would still be required to retain all traffic data falling under the retention obligation for a period of 12 months (telephony data) or 6 months (Internet data), they would now be required to retain all such required data within the Netherlands or another EU Member State.  As for law enforcement agencies, the proposed bill would require them to seek prior authorization from an examining judge before accessing the retained telecom data.  In addition, these agencies would only be able to access telephony data that is more than a year old in connection with investigating crimes for which the sentence is 8 years or more.  Dutch opposition parties have called for the new proposal to be scrapped, and may try to vote on an alternative bill that would revoke the data retention obligation altogether.

In another interesting development, the Irish government has sided with Microsoft in the latter’s battle to oppose a U.S. court order demanding access to emails stored in Microsoft’s data center in Dublin.  The issue arose at the end of July 2014, when U.S. District Judge Loretta Preska ruled that Microsoft had to give the U.S. Department of Justice access to Outlook.com emails stored on its Irish servers.  Microsoft appealed the ruling, arguing in a filing that the emails “are located exclusively on a computer in Dublin, where they are protected by Irish and European privacy laws.”  The Irish government has now openly backed Microsoft’s argument, indicating that Microsoft’s provision of this data could seriously compromise international sovereignty and digital privacy.  The Irish government’s submission in the case stated that its lack of participation in the U.S. court proceedings does not constitute a waiver of its sovereignty rights, and that the DOJ should make a request under the Mutual Legal Assistance Treaty as the appropriate mechanism to obtain the information it seeks.  In addition, a European Parliament member from Germany, Jan Philipp Albrecht, submitted a separate filing in the case highlighting the clash between European and U.S. data privacy laws; his submission stated, among other things, that “[t]he refusal of the U.S. Attorney to recognize that the email account at issue is located in a foreign jurisdiction and subject to foreign data protection rules is not only offensive to the sensitivities of European citizens but also reinforces the already strong sentiment of many EU citizens that their data is not ‘safe’ when they use IT services offered by U.S. corporations.”

      B.   Asia-Pacific Region

Data privacy remained in the Asia headlines during the latter part of 2013 and 2014, with record data breaches and fresh legislative action in key markets.  Countries in the Asia-Pacific region also have been active on the legislative front, with many new laws and regulations coming into effect in the past year.

            1.   India

In the first part of 2014, media reports swirled that the Central Government was drafting a new data protection bill to significantly strengthen India’s data privacy legal framework.  The purported bill, which has not been made public, is largely focused on providing protections against unauthorized surveillance by both individuals and government agencies.  If enacted, the bill would significantly increase fines for those who illegally intercept private communications sent by others.  The bill takes particular aim at telecommunications companies, providing for suspension or revocation of licenses for allowing unauthorized interception of communications.  The bill also would create a new agency to enforce the law.[84]  Passage of the bill also would help to assuage recent fears of alleged cyber-snooping by the U.S. government.[85]

A major breach in August allowed hackers to break into the Central Government’s National Informatics Centre (“NIC”), the agency charged with building the country’s information and communications technology infrastructure.[86]  The hackers were able to use the NIC’s credentials to issue a series of fake digital certificates.  The incident prompted concern among major IT players such as Microsoft, which wrote to the Indian government to express its displeasure at both the breach and the NIC’s response.[87]

            2.   China and Hong Kong

China’s data privacy regime continues to evolve in an attempt to keep pace with its increasingly tech-savvy citizenry.  For instance, China recently amended its Consumer Protection Law in response to high-profile thefts of customer data.  Among other things, the amendments require business operators to obtain consent prior to the collection and use of consumers’ personal information, to expressly inform consumers of the purposes of the data collection, and to obtain explicit consent prior to marketing to consumers.  The amendments also prohibit businesses from selling consumers’ personal information to others.  The amendments went into effect on March 15, 2014.

On January 17, 2014, China promulgated forty-five implementing regulations for the Law on Guarding State Secrets (the “Regulations”).  Many of the Regulations instruct Chinese government agencies on the proper classification and labeling of items designated as state secrets.  The Regulations also mandate that the security mechanisms of enterprises that work on the production, duplication, maintenance, or destruction of state secret carriers, integration of information systems involving state secrets, research or manufacture of weaponry equipment, or other business involving state secrets, shall be subject to review by authorities.  An enterprise engaging in business involving state secrets must further meet certain criteria: it must have been duly established in the PRC for over three years; it must not have a criminal record; and it must use PRC citizens to engage in any business involving state secrets.

China-based Alipay, which accounts for 61% of the country’s market share for third-party payment companies, apologized to customers in January 2014 after media reported that a former employee confessed to downloading 20 gigabytes of personal information, including customers’ names, email addresses, home addresses, and purchase records.  The former employee allegedly sold the information to e-commerce websites in search of potential customers.

In October 2014, China’s Supreme Court issued new judicial interpretations allowing for civil suits against individuals posting personal details on the Internet without the subject’s consent.  The move is widely seen as a response to the “human flesh search engine” phenomenon, where groups of web users search out and post personal details of unpopular individuals.[88]  China also continued to take steps to strengthen its comparatively weak data infrastructure during the latter part of 2014 with the announcement of a communication cable linking Beijing and Shanghai.  The cable, according to media reports, features “quantum encryption” technology, which involves writing encryption codes on single photons of light.  Supporters of the technology call the forthcoming cable “unhackable.”

China also sent shockwaves through the data privacy community in August, when two corporate investigators were convicted of illegally obtaining information about Chinese citizens, including phone records and household registration data, which they resold to clients in connection with their background check and due diligence services.  Both investigators were sentenced to prison and fined.

Hong Kong’s Office of the Privacy Commissioner for Personal Data (“PCPD”) recently published guidance on cross-border data transfers, an area of ambiguity under Hong Kong’s Personal Data (Privacy) Ordinance (“PDPO”).  As enacted, the PDPO prohibits transferring personal data outside of Hong Kong except where (1) the data subject has consented in writing, (2) the destination has in place an adequate data privacy legal framework (as specified by the Privacy Commissioner), or (3) the data user reasonably believes that the destination provides protections similar to the PDPO.  The PCPD guidelines provide context for these exceptions, as well as examples and model data transfer agreement clauses.  Importantly, the guidelines are considered voluntary because the relevant portions of the PDPO have not yet come into force, but the PCPD states that the guidance “assists organizations to prepare for the eventual implementation” of the provisions.[89]

            3.   Japan

On April 30, 2014, Japan signaled its intent to further develop its data privacy regime when it became the third member of the Asia-Pacific Economic Cooperation (“APEC”) Cross-Border Privacy Rules System (“CBPRS”).  The CBPRS aims to facilitate cross-border data sharing consistent with a set of principles, with the stated goal of optimizing both the protection of data and the efficiency of its transfer.  Japan, which joins fellow APEC members the United States and Mexico as CBPRS participants, was approved for the system after submitting a notice of intent to join and providing assurances that its current data privacy regime is consistent with CBPRS principles.

The latter half of 2014 was marred by record data breaches in Japan.  In July 2014, a systems engineer at Benesse Corp., a children’s correspondence education provider, was arrested on suspicion of stealing 10 million customers’ data and reselling it to potentially hundreds of companies.  In August 2014, Japanese authorities discovered that the former engineer had, in fact, stolen an additional 20 million customers’ information, including names, phone numbers, and birthdates, prior to his arrest.[90]  Further investigation revealed that up to 48 million customers had their information compromised as a result of the breach.[91]  Following these reports, Japan’s Ministry of Economy, Trade and Industry (“METI”) announced enforcement proceedings against Benesse Corp. for violations of the Personal Information Protection Act.[92]

In September, the country’s flagship carrier, Japan Airlines, reported the possible theft of personal information of up to 750,000 customers.  The potentially compromised information included names, birthdates, addresses, and places of work.[93]  These incidents prompted METI to announce forthcoming amendments to data privacy rules.[94]

            4.   South Korea

While the South Korean government likely hoped for a reprieve from the data breaches that have plagued the country, major incidents persisted into 2014.  In August, it was revealed that personal details of fully half of South Korea’s population, including full names and national registration numbers, had been stolen from online gaming and movie ticket websites.  Among other things, the hackers used the stolen information to buy and sell virtual currency.

On the legislative side, the government sought to respond to demands for increased personal data protection by passing several amendments to the Act on the Promotion of Information Communication Network Utilization.  The amendments, which apply to IT service providers such as telecommunications companies and website operators, require businesses to obtain opt-in consent before sending consumers marketing messages, and provide for monetary compensation to victims of lax personal data security.  The amendments also raise the amount of potential fines while lowering the liability threshold for data processors.  One particularly notable aspect of the legislation allows for fines of up to 3% of a company’s revenue for violations of data protection laws.

            5.   Malaysia

On November 15, 2013, Malaysia published its long-awaited Personal Data Protection Act 2010.  The comprehensive law is modeled after European data protection regimes and contains strict requirements as to consent, notification, and transfer of personal data.  One unique aspect of the law is its extraterritorial application: data collection occurring outside of Malaysia must comply with the law if that data is intended to be further “processed” in Malaysia.  This provision may affect the practices of companies that store data in Malaysia, regardless of where the data is collected.  The law and its accompanying regulations also require data processors in several major economic sectors to register with the government and to provide details about their data privacy programs.

A day after Malaysia Airlines Flight MH370 disappeared en route from Kuala Lumpur to Beijing, several government agencies in Malaysia fell victim to a cyberattack, resulting in the loss of classified data from around 30 computers in the Department of Civil Aviation, the National Security Council, and Malaysia Airlines.  According to media reports, government departments were sent a virus disguised as a news story about the disappearance of the plane.  The attack was traced to Chinese hackers and halted by CyberSecurity Malaysia.

            6.   Singapore

Singapore recently issued its first comprehensive data privacy law, the Personal Data Protection Act (“PDPA”), with most key provisions coming into effect over the course of 2014.  The law’s provisions regarding notice, consent, data transfer, and disclosure came into effect on July 2, 2014 and are based on data privacy laws in jurisdictions such as the EU, Canada, Hong Kong, and Australia, as well as OECD guidelines.  While the law imposes strict conditions on businesses’ interactions with customers, it contains several exemptions to key provisions where businesses are dealing with their own employees.  For example, collection of employees’ personal data does not require consent where “the personal data is collected by the individual’s employer and the collection is reasonable for the purpose of managing or terminating an employment relationship.”  Consent, use, and disclosure requirements also are relaxed in the context of “investigations,” which could include internal investigations conducted by a company in connection with potential violations of law.  The Personal Data Protection Commission (“PDPC”) has since issued a series of guidelines on PDPA compliance for companies in the telecommunications, real estate, social service, education, and healthcare sectors.[95]

As expected, it has not taken long for the financial hub to begin investigating possible violations under the new legal framework.  The PDPC commenced an investigation of Chinese smartphone maker Xiaomi after users complained of receiving unsolicited marketing phone calls,[96] and publicly announced prosecutions against a property salesperson and an education company for violations of the PDPA’s “Do Not Call” provisions.[97]

The government of Singapore also announced in June 2014 that approximately 1,500 online “SingPass” accounts, which contain sensitive personal information and are used by residents to access government services, may have been compromised.  The breach came to light when users received unexpected messages from SingPass notifying them that their passwords had been reset.  This is one of a series of incidents in a country attempting to get its fledgling data privacy regime off the ground.  Other data privacy breaches in the latter part of 2014 included the leak of an internal database containing the names, phone numbers, and identity card numbers of 300,000 customers of a popular karaoke bar chain.[98]

      C.   Other International Developments of Note

Canada has also been steadily strengthening its protections for individual data and, correspondingly, its regulation of cybersecurity and data collection.  For example, Industry Canada–the Canadian governmental department tasked with fostering and enhancing a robust Canadian economy–has issued final regulations under Canada’s Anti-Spam Legislation (CASL).  CASL is being implemented in three phases: the majority of CASL came into force July 1, 2014 (including substantive amendments to the Competition Act and the Personal Information Protection and Electronic Documents Act (PIPEDA)), the rules that apply to computer programs came into force January 15, 2015, and the private right of action will follow on July 1, 2017.  Industry Canada has provided interpretive guidance on several issues under CASL, including the definition of a commercial electronic message (CEM), the retroactive application of CASL to express consent obtained before CASL came into force, the application of CASL to IP addresses and cookies, and the interaction between the “unsubscribe” requirement and implied consent.  In addition, as of early January 2015, the Office of the Privacy Commissioner of Canada is launching an effort to determine how advertisers monitor consumers’ online behavior and whether such advertisers are in fact complying with Canadian privacy laws, in particular PIPEDA.

Meanwhile, in Kenya, citizens and international human rights groups are protesting the proposed Security Laws Bill 2014, which would amend Kenya’s existing anti-terrorism legislation in ways that, according to critics, would seriously impinge upon individuals’ basic privacy rights and freedom of expression.  For instance, the bill would empower Kenya’s National Intelligence Service to intercept and record telephone conversations without a court order.  In addition, the bill would make it a felony, punishable by a fine of up to 1 million shillings (approximately US$11,000) or three years in jail, to distribute “obscene, gory or offensive material which is likely to cause fear and alarm to the general public.”  Media outlets and journalists who publish or broadcast photographs of terror victims without the victims’ consent or permission from the police would face a jail sentence of up to three years, a fine of up to 5 million shillings (approximately US$55,200), or both.  The bill further removes the security of tenure of the inspector general, the director general of intelligence, and the director of criminal investigations, which some opponents say would hamper job performance and undermine their independence, leaving them vulnerable to manipulation by the appointing authority.

Meanwhile, in a spate of data breaches reported in the Cayman Islands in late 2014, hackers gained access to emails containing bank transfer details, allowing overseas thieves to transfer money out of accounts at several local banks.  Hackers stole more than $300,000 from one victim.  Current banking regulations in the Cayman Islands do not require banks (or any other industry players) to tell customers when their data has been compromised by hackers.  The Cayman Islands Monetary Authority, which regulates banks, has issued cybersecurity guidance for banks, but imposes no binding requirements.  However, new data protection legislation, which has been circulating in the Legislative Assembly for over five years, would add consumer protections and could potentially force banks to notify customers when their data is stolen.  In August 2014, the government released a final consultation on the bill, known as the Data Protection Bill, which could come up for debate in the Assembly again in 2015.  The bill, which is based on European Union and United Kingdom regulations from the 1990s, has come under fire for being outdated, confusing, and overly complex, but may nevertheless prove important simply by virtue of being the first law requiring banks and other entities to notify consumers about data breach incidents.

Rounding out the efforts to tighten data privacy protections around the world, major attempts to reform privacy law in Australia and New Zealand also got underway in 2014.  Australia, in particular, after a decade-long effort, has put in place a set of thirteen principles that regulate the handling of personal information by Australian governmental entities and certain private entities.  The New Zealand government similarly indicated in May 2014 that it intends to reform its privacy laws to require that data breaches be reported to affected individuals and to the NZ privacy commissioner, and to increase fines for violators.


   [1]   Despite the plaintiffs’ attempts to cure their complaint’s deficiencies, the court again dismissed the VPPA claim more recently, this time with prejudice.  In re Nickelodeon Consumer Privacy Litig., No. 12-7829, Opinion (D.N.J. Jan. 20, 2015).  The court reiterated its earlier holding regarding the VPPA’s specific definition of PII and held that “[n]othing in the amended Complaint changes the fact that Viacom’s disclosure does not – ‘without more’ – identify individual persons.”  Op. at 5.  The court went on to state that plaintiffs’ allegation that Google could take the information it received from Viacom and combine it with other information Google possessed to personally identify the plaintiffs was “entirely hypothetical.”  Id. at 6.

   [2]   Kat Greene, 2 Tech-Opposed Consumer Bills Die In Calif. Assembly, Law360 (June 25, 2014), available at http://www.law360.com/articles/551523.

   [3]   Margaret A. Dale, Capital One to Pay Largest TCPA Settlement on Record (Aug. 19, 2014), available at http://www.natlawreview.com/article/capital-one-to-pay-largest-tcpa-settlement-record-0.

   [4]   Press Release, Federal Trade Commission, Fandango, Credit Karma Settle FTC Charges that They Deceived Consumers By Failing to Securely Transmit Sensitive Personal Information (March 28, 2014), available at http://www.ftc.gov/news-events/press-releases/2014/03/fandango-credit-karma-settle-ftc-charges-they-deceived-consumers.

   [5]   Id.

   [6]   Press Release, Federal Trade Commission, FTC Approves Final Consent Settling Charges that Accretive Health Failed to Adequately Protect Consumers’ Personal Information (Feb. 24, 2014), available at http://www.ftc.gov/news-events/press-releases/2014/02/ftc-approves-final-consent-settling-charges-accretive-health.

   [7]   Press Release, Federal Trade Commission, FTC Approves Final Consent Orders Settling Charges that Companies Deceptively Claimed Their Genetically Modified Nutritional Supplements Could Treat Diseases (May 12, 2014), available at http://www.ftc.gov/news-events/press-releases/2014/05/ftc-approves-final-consent-orders-settling-charges-companies.

   [8]   Press Release, Federal Trade Commission, Provider of Medical Transcript Services Settles FTC Charges That It Failed to Adequately Protect Consumers’ Personal Information (Jan. 31, 2014), available at http://www.ftc.gov/news-events/press-releases/2014/01/provider-medical-transcript-services-settles-ftc-charges-it.

   [9]   Letter from Maneesha Mithal, Associate Director, Federal Trade Commission, to Dana Rosenfeld, Counsel, Verizon Communications, Inc. (Nov. 12, 2014), available at http://www.ftc.gov/enforcement/cases-proceedings/closing-letters/verizon-communications-inc.

  [10]   Id.

  [11]   Id.

  [12]   Id.

  [13]   Id.

  [14]   Press Release, Federal Trade Commission, Snapchat Settles FTC Charges That Promises of Disappearing Messages Were False (May 8, 2014), available at http://www.ftc.gov/news-events/press-releases/2014/05/snapchat-settles-ftc-charges-promises-disappearing-messages-were.

  [15]   Id.

  [16]   Press Release, Federal Trade Commission, Android Flashlight App Developer Settles FTC Charges It Deceived Consumers (December 5, 2013), available at http://www.ftc.gov/news-events/press-releases/2013/12/android-flashlight-app-developer-settles-ftc-charges-it-deceived.

  [17]   Press Release, Federal Trade Commission, Medical Billing Provider and its Former CEO Settle FTC Charges That They Misled Consumers About Collection of Personal Health Data (December 3, 2014), available at http://www.ftc.gov/news-events/press-releases/2014/12/medical-billing-provider-its-former-ceo-settle-ftc-charges-they.

  [18]   Press Release, Federal Trade Commission, FTC Approves Final Order in Case About Google Billing for Kids’ In-App Charges Without Parental Consent (December 5, 2014), available at http://www.ftc.gov/news-events/press-releases/2014/12/ftc-approves-final-order-case-about-google-billing-kids-app.

  [19]   The FTC also alleged that, in 2011, Google failed to require any authorization at all for certain in-app purchases.

  [20]   Press Release, Federal Trade Commission, Yelp, TinyCo Settle FTC Charges Their Apps Improperly Collected Children’s Personal Information (September 17, 2014), available at http://www.ftc.gov/news-events/press-releases/2014/09/yelp-tinyco-settle-ftc-charges-their-apps-improperly-collected.

  [21]   Other changes include requiring all telemarketing calls to allow the consumer to opt out of future calls during the call, limiting permissible abandoned calls on a per-calling-campaign basis, and exempting from telemarketing requirements pre-recorded calls to residential lines made by healthcare-related entities governed by the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”).  FCC Guidance at 1831, par. 2.

  [22]   See Petition for Declaratory Ruling, CG Docket No. 02-278, filed by Consumer Bankers Association on Sept. 19, 2014 (Petition).

  [23]   See Petition for Expedited Declaratory Ruling, CG Docket No. 02-278, filed by Vo Apps, Inc. on July 31, 2014 (Petition).

  [24]   See Petition for Expedited Declaratory Ruling, CG Docket No. 02-278, filed by Santander Consumer USA, Inc. on July 10, 2014 (Petition).

  [25]   See Petition for Expedited Declaratory Ruling, CG Docket No. 02-278, filed by Milton H. Fried, Jr. and Richard Evans on May 27, 2014 (Petition).

  [26]   See Petition for Expedited Declaratory Ruling, CG Docket No. 02-278, filed by Vincent Lucas on June 18, 2014 (Petition).

  [27]   See Petition for Expedited Declaratory Ruling, CG Docket No. 02-278, filed by Stage Stores, Inc. on June 4, 2014 (Petition).

  [28]   See Petition for Expedited Declaratory Ruling and Clarification, CG Docket No. 02-278, filed by TextMe, Inc. on Mar. 18, 2014 (Petition).

  [29]   See Petition for Declaratory Ruling, CG Docket No. 02-278, filed by the Retail Industry Leaders Association on Dec. 30, 2013 (Petition).

  [30]   See Petition for Exemption, CG Docket No. 02-278, filed by the American Bankers Association on Oct. 14, 2014 (Petition).

  [31]   See Letter from Indiana Attorney General Greg Zoeller et al. to Tom Wheeler, Chairman, Federal Communications Commission (Sept. 9, 2014) (Letter).

  [32]   White House Press Release, “Launch of the Cybersecurity Framework” (Feb. 12, 2014), available at http://www.whitehouse.gov/the-press-office/2014/02/12/launch-cybersecurity-framework.

  [33]   See Executive Order 13636, “Improving Critical Infrastructure Cybersecurity” (Feb. 12, 2013), available at http://www.whitehouse.gov/the-press-office/2013/02/12/executive-order-improving-critical-infrastructure-cybersecurity.

  [34]   Id.

  [35]   NIST, “Framework for Improving Critical Infrastructure Cybersecurity” (Feb. 12, 2014), available at http://www.nist.gov/cyberframework/upload/cybersecurity-framework-021214.pdf.

  [36]   Id. at 4.

  [37]   Id. at 7.

  [38]   Id.

  [39]   Id. at 11.

  [40]   Id. at 9.

  [41]   See NIST, “NIST Roadmap for Improving Critical Infrastructure Cybersecurity” (Feb. 12, 2014), available at http://www.nist.gov/cyberframework/upload/roadmap-021214.pdf.

  [42]   Id.

  [43]   NIST, “2nd Privacy Engineering Workshop” (July 28, 2014), available at http://www.nist.gov/itl/csd/privacy-engineering-workshop-september-15-16-2014.cfm.

  [44]   NIST, “Privacy Engineering Workshop” (Feb. 13, 2014), available at http://www.nist.gov/itl/csd/privacy-engineering-workshop.cfm; NIST, “2nd Privacy Engineering Workshop” (July 28, 2014), available at http://www.nist.gov/itl/csd/privacy-engineering-workshop-september-15-16-2014.cfm.

  [45]   “Experience with the Framework for Improving Critical Infrastructure Cybersecurity,” 79 FR 50891 (Aug. 26, 2014), available at https://federalregister.gov/a/2014-20315.

  [46]   See NIST, “6th Cybersecurity Framework Workshop” (Dec. 3, 2014), available at http://www.nist.gov/cyberframework/6th-cybersecurity-framework-workshop-october-29-30-2014.cfm.

  [47]   See http://www.us-cert.gov/ccubedvp.

  [48]   NIST, “Update on the Cybersecurity Framework” (Dec. 5, 2014), available at http://www.nist.gov/cyberframework/upload/nist-cybersecurity-framework-update-120514.pdf.

  [49]   Id.

  [50]   Id.

  [51]   Statement of Administration Policy, Executive Office of the President, (Jan. 9, 2014), available at http://www.whitehouse.gov/sites/default/files/omb/legislative/sap/113/saphr3811h20140109.pdf.

  [52]   Paul Szoldra, Snowden:  Here’s Everything We’ve Learned In One Year Of Unprecedented Top-Secret Leaks (June 7, 2014), available at http://www.businessinsider.com/snowden-leaks-timeline-2014-6.

  [53]   James Bamford, The Most Wanted Man in the World, available at http://www.wired.com/2014/08/edward-snowden/.

  [54]   Paul Szoldra, Snowden:  Here’s Everything We’ve Learned In One Year Of Unprecedented Top-Secret Leaks (June 7, 2014), available at http://www.businessinsider.com/snowden-leaks-timeline-2014-6.

  [55]   See Bloomberg, NSA Searched Americans’ Email, Phone Calls, Clapper Says (Apr. 1, 2014), available at http://www.bloomberg.com/news/2014-04-02/nsa-searched-americans-email-phone-calls-clapper-says.html.

  [56]   Germany’s Merkel Under Fire Over NSA Scandal (Oct. 5, 2014), available at http://www.worldbulletin.net/news/145683/germanys-merkel-under-fire-over-nsa-scandal.

  [57]   Kim Zetter, Feds Threatened to Fine Yahoo $250K Daily for Not Complying with PRISM, (Sept. 11, 2014), available at http://www.wired.com/2014/09/feds-yahoo-fine-prism/.

  [58]   Craig Timberg, U.S. Threatened Massive Fine to Force Yahoo to Release Data, (Sept. 11, 2014), available at http://www.washingtonpost.com/business/technology/us-threatened-massive-fine-to-force-yahoo-to-release-data/2014/09/11/38a7f69e-39e8-11e4-9c9f-ebb47272e40e_story.html.

  [59]   Charlie Savage and Jeremy W. Peters, Bill to Restrict N.S.A. Data Collection Blocked in Vote by Senate Republicans, (Nov. 18, 2014), available at http://www.nytimes.com/2014/11/19/us/nsa-phone-records.html.

  [60]   Julian Hattem, Obama Renews NSA Spying Program After Reform Bill Fails, (December 8, 2014), available at http://thehill.com/policy/technology/226322-obama-renews-nsa-program-after-reform-bill-fails.

  [61]   Charlie Miller, Revelations of N.S.A. Spying Cost U.S. Tech Companies, (March 21, 2014), available at http://www.nytimes.com/2014/03/22/business/fallout-from-snowden-hurting-bottom-line-of-tech-companies.html.

  [62]   See also N.Y.S. Div. of Homeland Sec. & Emergency Servs., NYS Breach Notification Law Changes, http://www.dhses.ny.gov/ocs/breach-notification/.

  [63]   New Jersey is considering legislation that would also expand its data breach notification requirements; the bill is currently pending in the Senate after clearing the Assembly.  H.B. 3146, S. 2188, 216th Leg. (N.J. 2014).

  [64]   Bills recently proposed in other states would have required companies to offer free credit monitoring to state residents when security breaches exposed those residents’ personal information.  Both died in committee: Rhode Island’s H. 7519 would have required any “person required to disclose a breach” under Rhode Island’s data breach law to “provide one year of credit monitoring to any resident of Rhode Island, at no cost to the resident, whose personal information was, or is reasonably believed to have been” compromised, and Minnesota’s H.F. 2253 would have required companies to provide the same services to residents of Minnesota whose “unencrypted personal information” was compromised.

  [65]   See, e.g., Personal Online Account Privacy Protection Act, H.B. 340, 40th Leg., Reg. Sess. 2014 (La. 2014); H.B. 1407, Reg. Sess. 2014 (N.H. 2014); Act Relating to Education and Labor–Social Media Policy, H.B. 7124 (R.I. 2014); Employee Online Privacy Act, S.B. 1808, H.B. 1852 (Tenn. 2014); S.B. 5211, 63rd Leg., 2013 Reg. Sess. (Wash. 2013); A.B. 2878, S.B. 1915, 215th Leg., 2012-2013 Reg. Sess. (N.J. 2013); see generally Nat’l Conf. on State Legs., http://www.ncsl.org/research/telecommunications-and-information-technology/employer-access-to-social-media-passwords-2013.aspx#2014 (cataloguing legislation regarding employer access to social media usernames and passwords).

  [66]   See, e.g., No College Requests for Social Media, S.B. 422, 51st Leg., 1st Sess. (N.M. 2013); Act Relating to Education and Labor–Social Media Policy, H.B. 7124 (R.I. 2014).

  [67]   See also Surveillance Act, S.B. 2937, 98th Reg. Sess. (Ill. 2014) (amends law to prohibit law enforcement use of information obtained from a drone owned by a private individual without a warrant); S.B. 196, 2013-2014 Reg. Sess. (Wis. 2014) (requiring a warrant before law enforcement may use UAS where a reasonable expectation of privacy exists); Freedom from Unwanted Surveillance Act, H.B. 591, S.B. 796, 108th Reg. Sess. (Tenn. 2013) (similar restrictions on use, court admissibility, and creation of a private remedy); Freedom from Drone Surveillance Act, S.B. 1587, 98th Gen. Assemb. (Ill. 2013) (enacting similar restrictions on the use of UAS without a warrant).

  [68]   At the federal level, Congress has set a deadline of September 2015 for the full integration of UAS into the national airspace system, although a government audit expressed doubts about this deadline being met.  See Office of Inspector General, FAA, Report AV-2014-061, FAA Faces Significant Barriers to Safely Integrate Unmanned Aircraft Systems into the National Airspace System (June 26, 2014), https://www.oig.dot.gov/library-item/31975.  In July 2014, the Federal Aviation Administration issued a policy consolidating regulations on drone use in federal airspace without creating any new regulations.  See U.S. Dep’t of Transportation, N JO 7210.873, Air Traffic Organization Policy, Unmanned Aircraft Operations in the National Airspace System, http://www.faa.gov/documentLibrary/media/Notice/N_JO_7210.873_Unmanned_Aircraft_Operations.pdf.  In addition, in June 2014, the FAA issued the first permit for a commercial unmanned aircraft to fly over U.S. soil, allowing oil company BP to conduct aerial surveys over Alaska.  See FAA, Press Release–FAA Approves First Commercial UAS Flights over Land (June 10, 2014), http://www.faa.gov/news/press_releases/news_story.cfm?newsId=16354.  The FAA subsequently awarded other exemptions, for example to drones used in TV and movie productions with a proper permit.  See FAA, Press Release–U.S. Transportation Secretary Foxx Announces FAA Exemptions for Commercial UAS Movie and TV Production (Sept. 25, 2014), http://www.faa.gov/news/press_releases/news_story.cfm?cid=TW251&newsId=17194.

  [69]   Kamala D. Harris, California Dep’t of Justice, Making Your Privacy Policies Public: Recommendations on Developing a Meaningful Privacy Policy, (May 2014), https://oag.ca.gov/sites/all/files/agweb/pdfs/cybersecurity/making_your_privacy_practices_public.pdf.

  [70]   The California legislature also is considering a bill that would prohibit online retailers from collecting certain information about their customers.  S.B. 383, Reg. Sess. (Cal. 2014).  The bill, which has cleared the Senate, is seen as a reaction to the California Supreme Court’s ruling in Apple Inc. v. Superior Court (Krescent), 56 Cal. 4th 128 (2013) (holding that restrictions on data collection placed on brick-and-mortar businesses do not apply to online retailers).  (We discuss this case in Section I.B.5 above.)  The bill would allow billing addresses and ZIP codes to be retained only if used to address identity theft and fraud, while also prohibiting the sale of such data to third parties.

  [71]   For more information, see Gibson Dunn’s article, “California Tightens Privacy Protection, Part 1 of 2:  New California data privacy laws impose restrictions on third-party tracking and data breach notification,” (Nov. 18, 2013), available at http://www.gibsondunn.com/wp-content/uploads/documents/publications/SouthwellCaliforniaPrivacyPartOne.pdf.

  [72]   For more information, see Gibson Dunn’s article, “California’s New ‘Digital Eraser’ Evaporates Embarrassment, Part 2 of 2: New California privacy laws will make it easier for kids to remove inappropriate posts from websites,” (Nov. 19, 2013), available at http://www.gibsondunn.com/wp-content/uploads/documents/publications/SouthwellCaliforniaPrivacyPartTwo.pdf.

  [73]   For more discussion regarding the new cybersecurity proposal, see Alexander H. Southwell, Eric D. Vandevelde, Ryan T. Bergsieker, Stephenie Gosnell Handler & Adam Chen, of Gibson, Dunn & Crutcher LLP, U.S. President Obama Announces Renewed Focus on Securing Cyberspace and Protecting Consumer Privacy, 15 Bloomberg BNA 1, available at http://www.gibsondunn.com/wp-content/uploads/documents/publications/WorldDataProtectionReport-BNA-Jan2015.pdf.

  [74]   See Sophia Pearson & Andrew Zajac, ‘Guccifer’ Indicted in U.S. for ID Theft, Cyberstalking, Bloomberg (June 12, 2014), available at http://www.bloomberg.com/news/2014-06-12/u-s-indicts-romanian-hacker-guccifer-for-cyberstalking-1-.html.

  [75]   Bitcoin is a currency created in 2009 that is exchanged without the use of banks, thereby allowing holders of Bitcoin to make purchases anonymously.  Bitcoin exchanges allow customers to buy or sell Bitcoins using different currencies.  Bitcoin owners can transfer Bitcoins digitally via a computer file that serves as a public ledger called the “block chain.”

  [76]   See WP 202, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp202_en.pdf.

  [77]   See WP 204, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp204_en.pdf.

  [78]   See WP 208, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp208_en.pdf.

  [79]   See WP 225, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp225_en.pdf.

  [80]   For discussion of the current EU Data Privacy Regulation and the Article 29 Working Party Guidelines concerning the “right to be forgotten,” see above at Sections V.A.1.a and V.A.1.c.

  [81]   CJEU No. C-131/12, Google Spain SL v. Agencia Española de Protección de Datos. See discussion above in Section V.A.1.f.

  [82]   It is worth noting that the ICO accepts that covert surveillance of employees may be justified as a last resort in exceptional circumstances; the employer should be satisfied there are grounds for suspecting criminal activity (or equivalent malpractice), and that notifying the employee concerned would prejudice detection or prevention.

  [83]   Barristers and solicitors are generally classed as data controllers, making them legally responsible for the personal information they process.

  [84]   Gulveen Aulakh, India Proposes to Penalise Invasion of Privacy Offences in Draft Bill, (Feb. 18, 2014), available at http://articles.economictimes.indiatimes.com/2014-02-18/news/47451233_1_personal-data-privacy-bill-draft-bill.

  [85]   India Increasing Data Protection after US Cyber Snooping, (Dec. 10, 2014), available at http://www.business-standard.com/article/news-ians/india-increasing-data-protection-after-us-cyber-snooping-114121001016_1.html.

  [86]   See http://www.nic.in/node/41.

  [87]   Saikat Datta, Security Breach in NIC, Critical Data at Risk, (Aug. 10, 2014), available at http://www.hindustantimes.com/india-news/newdelhi/nic-security-breach-raises-concerns-about-india-s-net-security-practices/article1-1250464.aspx.

  [88]   Chen Yifei, New Internet Rules Allow Websites to be Sued for Defamation in China, (Oct. 10, 2014), available at http://www.scmp.com/news/china-insider/article/1613890/new-internet-rules-allow-website-be-sued-defamation-china.

  [89]   PCPD Publishes Guidance on Personal Data Protection in Cross-border Data Transfer, (Dec. 29, 2014), available at  http://www.pcpd.org.hk/english/news_events/media_statements/press_20141229.html.

  [90]   See Benesse Suspect gets Fresh Warrant over Second Data Theft, http://www.japantimes.co.jp/news/2014/08/11/national/crime-legal/benesse-suspect-gets-fresh-warrant-over-second-data-theft/#.VK5StpgcTGg.

  [91]   Toshio Aritake, Japan Ministry to Amend Data Security Rules As Breached Company Says 48.6M Affected, (Oct. 6, 2014), available at http://www.bna.com/japan-ministry-amend-n17179895732/.

  [92]   Japan’s Ministry of Economy, Trade and Industry.

  [93]   Megumi Fujikawa, Japan Airlines Reports Hacker Attack, available at http://www.wsj.com/articles/japan-airlines-reports-hacker-attack-1412053828.

  [94]   Toshio Aritake, Japan Ministry to Amend Data Security Rules As Breached Company Says 48.6M Affected, (Oct. 6, 2014), available at http://www.bna.com/japan-ministry-amend-n17179895732/.

  [95]   PDPC Advisory Guidelines, available at http://www.pdpc.gov.sg/legislation-and-guidelines/advisory-guidelines.

  [96]   Irene Tham, Xiaomi Under Probe over Alleged Privacy Breach, (Aug. 14, 2014), available at http://www.straitstimes.com/news/singapore/more-singapore-stories/story/xiaomi-under-probe-over-alleged-privacy-breach-20140814.

  [97]   Property Salesperson to be Charged for Breaching the Do Not Call Requirements, (Sept. 22, 2014), available at http://www.pdpc.gov.sg/news/press-room/page/0/year/2014/property-salesperson-to-be-charged-for-breaching-the-do-not-call-requirements.

  [98]   Irene Tham and Pearl Lee, Personal Data of 300,000 K Box Singapore Clients Surfaces Online, (Sept. 16, 2014), available at http://www.straitstimes.com/news/singapore/courts-crime/story/personal-data-300000-k-box-singapore-clients-surfaces-online-20140.


The following Gibson Dunn attorneys assisted in preparing this client alert: Alexander H. Southwell, Michael Li-Ming Wong, Karl G. Nelson, Joshua A. Jessen, Michael Walther, James Cox, Michael Adelman, Nicolas Autet, Nathaniel L. Bach, Abbey Barrera, Ryan T. Bergsieker, Jennifer Bracht, Amy Chmielewski, Lyndy Davies, Kai Gesing, Jared Greenberg, Stephenie Gosnell Handler, Hartmut Kamrad, Kyle J. Kolb, Salomé Lemasson, Jeana Bisnar Maute, Tiffany Phan, Henry Pistell, Genevieve B. Quinn, Priyanka Rajagopalan, Reid Rector, Shawn D. Rodriguez, Ashley Rogers, Ilissa Samplin, Danielle Serbin, JP Shih, Jillian Stonecipher, Oliver Welch, Tristan Welham, Amy Wolf, Peter Wu, Adam Yarian, Lindsey Young, Alexander Zbrozek, Zhou Zhou, and Timothy Zimmerman.

Gibson, Dunn & Crutcher’s lawyers are available to assist with any questions you may have regarding these issues.  For further information, please contact the Gibson Dunn lawyer with whom you usually work or any of the following members of the Information Technology and Data Privacy Group:

United States
M. Sean Royall – Co-Chair, Dallas (+1 214-698-3256, [email protected])
Alexander H. Southwell – Co-Chair, New York (+1 212-351-3981, [email protected])
Debra Wong Yang – Co-Chair, Los Angeles (+1 213-229-7472, [email protected])
Howard S. Hogan – Member, Washington, D.C. (+1 202-887-3640, [email protected])
Karl G. Nelson – Member, Dallas (+1 214-698-3203, [email protected])
Joshua A. Jessen – Member, Orange County and Palo Alto (+1 949-451-4114/+1 650-849-5375, [email protected])
Michael Li-Ming Wong – Member, San Francisco/Palo Alto (+1 415-393-8333/+1 650-849-5393, [email protected])
Ryan T. Bergsieker – Member, Denver (+1 303-298-5774, [email protected])
Richard H. Cunningham – Member, Denver (+1 303-298-5752, [email protected])

Eric D. Vandevelde – Member, Los Angeles (+1 213-229-7186, [email protected])

Europe
James A. Cox – Member, London (+44 207 071 4250, [email protected])
Andrés Font Galarza – Member, Brussels (+32 2 554 7230, [email protected])
Kai Gesing – Member, Munich (+49 89 189 33-180, [email protected])
Bernard Grinspan – Member, Paris (+33 1 56 43 13 00, [email protected])
Alejandro Guerrero Perez – Member, Brussels (+32 2 554 7218, [email protected])
Jean-Philippe Robé – Member, Paris (+33 1 56 43 13 00, [email protected])
Michael Walther – Member, Munich (+49 89 189 33-180, [email protected])

China
Kelly Austin – Member, Hong Kong (+852 2214 3788, [email protected])

India
Jai S. Pathak – Member, Singapore (+65 6507 3683, [email protected])

Questions about SEC disclosure issues concerning data privacy and cybersecurity can also be addressed to any of the following members of the Securities Regulation and Corporate Disclosure Group:

James J. Moloney – Co-Chair, Orange County, CA (949-451-4343, [email protected])
Elizabeth Ising – Member, Washington, D.C. (202-955-8287, [email protected])   

© 2015 Gibson, Dunn & Crutcher LLP

Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.