February 7, 2012
The pace of data privacy and security legal events accelerated in 2011, as the global economy became increasingly dependent on online, mobile and server-based platforms and networks. The past year witnessed a number of significant legal developments as private plaintiffs and regulators around the globe focused heavily on practices and intrusions involving user data in a wide range of industries and technical environments. At the same time, the law continues to lag behind technological change, and legislative proposals in the United States, the European Union and elsewhere signal significant legal risks and threats to technical innovation.
Gibson Dunn’s Information Technology and Data Privacy group–which was at the forefront of many of these developments–has detailed the key data privacy and security events of the past year and anticipated trends for the year to come. This Review covers six core areas: (1) class actions and civil litigation related to data privacy and security; (2) FTC and regulatory activity; (3) criminal enforcement; (4) federal legislative activity; (5) data security; and (6) select international developments in the European Union and Asia Pacific Region.
A. Stop Online Piracy Act (“SOPA”) and the Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act (“PROTECT IP Act” or “PIPA”)
B. Online Protection and Enforcement of Digital Trade Act (“OPEN Act”)
C. Notable Federal Privacy and Data Breach Legislation
In 2011, the plaintiffs’ bar was extremely active in filing class actions asserting a variety of novel claims relating to the allegedly unauthorized collection, use or disclosure of consumer data, or following widely publicized data breaches. Despite significant setbacks to plaintiffs who struggled to articulate a viable theory of harm, data privacy and security-related filings continue to gain significant momentum.
The past year witnessed several critical developments in the application of Article III standing requirements to the theories of injury being asserted by plaintiffs in data privacy and security class actions–beginning with the first decision dismissing a privacy class action for lack of Article III standing in LaCourt v. Specific Media, Case No. 10-cv-01256-GW-JCG, 2011 WL 1661532 (C.D. Cal. Aug. 29, 2011) (Gibson Dunn represented Specific Media in this case). While several courts have followed and extended the reasoning in Specific Media to dismiss other privacy-related class actions for lack of Article III standing, other courts have accepted alternative theories of injury proffered by plaintiffs, including theories arising from the alleged invasion of a statutory right or from the increased risk of sufficiently “credible” future harms (particularly in the context of a data breach).
Plaintiffs’ theories of harm in data privacy cases vary, but typically have involved some assertion that the unexpected collection and use of plaintiffs’ personal information harmed plaintiffs in some way, either by diminishing the value of that information or by depriving plaintiffs of the opportunity to use and control that information as they see fit. Plaintiffs also frequently assert that entities collecting their personal information are misappropriating or misusing it. In cases involving data breaches, plaintiffs typically assert the increased risk of identity theft and the costs flowing from it (for example, the purchase of credit monitoring) as the alleged harm. Notably, lawsuits asserting these theories are often spurred by the publication of academic articles, blogs or media reports documenting a potential privacy concern or security vulnerability (often within 24 hours of an initial news report), rather than any concrete instance of user harm.
No Injury in Fact
Gibson Dunn’s client, Specific Media, was the first to challenge plaintiffs’ ability to demonstrate a concrete injury in fact, a requirement for Article III standing, in a data privacy case. Specific Media was targeted (along with several other online publishers and advertising networks) in a series of class actions claiming that it had improperly used Adobe “Flash cookies” to track the online behavior of users who had configured their browser settings to avoid being tracked. After approving settlements by the other defendants in these cases, Judge Wu granted Specific Media’s motion to dismiss on grounds that plaintiffs had not alleged any concrete injury in fact and therefore lacked standing under Article III. LaCourt v. Specific Media, Case No. 10-cv-01256-GW-JCG, 2011 WL 1661532 (C.D. Cal. Aug. 29, 2011).
In the first decision to apply Article III to a data privacy case, Judge Wu concluded that the various theories of injury advanced by the named plaintiffs in the Specific Media case did not demonstrate an injury in fact, in terms that have broad application to many data privacy and security class actions:
The Complaint does not identify a single individual who was foreclosed from entering into a ‘value-for-value’ exchange as a result of Specific Media’s alleged conduct. Furthermore, there are no facts . . . that indicate that the Plaintiffs themselves ascribed an economic value to their unspecified personal information. Finally, even assuming an opportunity to engage in a ‘value-for-value exchange,’ Plaintiffs do not explain how they were ‘deprived’ of the economic value of their personal information simply because their unspecified personal information was purportedly collected by a third party.
Specific Media, 2011 WL 1661532 at *5.
These arguments were successfully advanced (again by Gibson Dunn) in In re iPhone Application Litig., No. 11-md-02250-LHK, 2011 WL 4403963 (N.D. Cal. Sept. 20, 2011), which involved a series of consolidated class actions challenging the alleged “tracking” of smartphone users by mobile device manufacturers and third-party advertising and analytics companies (the “Mobile Industry Defendants”) that support the “apps” that can be downloaded onto these devices. There, Plaintiffs alleged that the Mobile Industry Defendants collected and disclosed users’ personal information located on their mobile Apple devices without their knowledge or permission, allegedly in violation of several federal and state laws. Plaintiffs also sought to hold Apple liable for these alleged violations on the grounds that (i) its design of the iOS system allowed apps to access users’ personal information despite Apple’s alleged representations, and (ii) Apple exercised control over the apps that could be sold in the Apple App Store but failed to adequately police their collection and disclosure of users’ personal information. Plaintiffs also sought to hold Apple liable for allegedly enabling iOS devices to maintain, synchronize, and retain detailed, unencrypted location history files, an issue that has also received significant media attention.
In a detailed opinion that surveyed the broad range of relevant case law and relied heavily on the reasoning in Specific Media, the court dismissed the consolidated complaint for lack of Article III standing, finding that plaintiffs had failed to plead an injury in fact on two grounds. First, plaintiffs failed to “allege injury in fact to themselves.” Id. at *4 (emphasis in original). As the Court explained, “Plaintiffs do not identify what [Apple devices] they used, do not identify which Defendant (if any) accessed or tracked their personal information, do not identify which apps they downloaded that access/track their personal information, and do not identify what harm (if any) resulted from the access or tracking of their personal information.” Id. Second, agreeing with the holding in Specific Media, the Court held that “Plaintiffs [have] not identified a concrete harm from the alleged collection and tracking of their personal information sufficient to create injury in fact.” Id. at *5. The Court observed that, as in Specific Media, the named plaintiffs “had not alleged any ‘particularized example’ of economic injury or harm to their computers, but instead offered only abstract concepts, such as ‘opportunity costs,’ ‘value-for-value exchanges,’ ‘consumer choice,’ and ‘diminished performance.’” Id. Notably, Judge Koh made clear that “[t]he Court does not take lightly Plaintiffs’ allegations of privacy violations.” Id. at *4. However, she stated that “for purposes of the standing analysis under Article III, Plaintiffs’ current allegations [were] clearly insufficient.” Id.; see also Low v. LinkedIn, No. 11-CV-01468-LHK, 2011 WL 5509848, at *6 (N.D. Cal. Nov. 11, 2011) (dismissing case for failure to sufficiently allege an “injury in fact” as required for Article III standing because plaintiff “failed to put forth a coherent theory of how his personal information was disclosed or transferred to third parties, and how it has harmed him.”).
Despite a series of decisions rejecting claims based on the allegedly unauthorized tracking and collection of user data for lack of Article III standing, the filing of such cases shows no sign of abating. See, e.g., Cousineau v. Microsoft Corp., No. 11-CV-01438-JCC (W.D. Wash. Aug. 31, 2011) (challenging alleged collection of detailed location file histories); Kim v. Space Pencil, Inc., No. 11-CV-3796-LB (N.D. Cal. Aug. 1, 2011) (challenging analytics company’s alleged use of Flash cookies, ETags, and HTML5 storage to identify computers of users who blocked or deleted browser cookies); Garvey v. KISSmetrics, et al., No. 11-CV-3764-LB (N.D. Cal. Jul. 29, 2011) (same); Kenny v. Carrier IQ, Inc., No. 11-cv-05774-PSG (N.D. Cal. Dec. 1, 2011) (first of 70 class actions against mobile device manufacturers, wireless carriers, and Carrier IQ relating to alleged tracking of smartphone user activity through Carrier IQ’s diagnostic tool).
This may be due in part to the relative liberality with which courts have thus far granted plaintiffs leave to amend, as well as to language holding out hope that it may yet be possible to articulate an actionable theory of harm. See, e.g., Specific Media, 2011 WL 1661532 at *6 (“It is not obvious that plaintiffs cannot articulate some actual or imminent injury in fact. It is just that at this point they haven’t offered a coherent and factually supported theory of what that injury might be.”).
Indeed, as explained in the following section, plaintiffs are already aggressively developing–with some success–alternate theories of injury in order to overcome the significant hurdles posed by the requirements of Article III standing in data privacy and breach cases.
Alleged Invasion of Statutory Right
In response to the various decisions in 2011 dismissing online and mobile privacy complaints for failure to allege a cognizable injury in fact sufficient to demonstrate Article III standing, the plaintiffs’ bar has begun to shift to pleading statutory claims that may not have an express injury component–asserting that the alleged invasion of a statutory right itself constitutes a de facto injury in fact. See Warth v. Seldin, 422 U.S. 490, 500 (1975) (quoting Linda R.S. v. Richard D., 410 U.S. 614, 617 n.3 (1973)) (“The actual or threatened injury required by Art[icle] III may exist solely by virtue of ‘statutes creating legal rights, the invasion of which creates standing.'”). This theory of injury was given new life by the Ninth Circuit’s decision in Edwards v. First American Corp., 610 F.3d 514 (9th Cir. 2010), cert. granted, 131 S. Ct. 3022 (2011), which reaffirmed the holding in Warth.
In a few recent cases, plaintiffs that have invoked this approach have survived Article III challenges. Relying on Edwards, Judge Ware in In re Facebook Privacy Litigation, 791 F. Supp. 2d 705 (N.D. Cal. 2011) held that plaintiffs, who claimed that Facebook had transmitted Facebook IDs to advertisers without consent, had constitutional standing where they alleged violation of their rights under the Wiretap Act, 18 U.S.C. §§ 2510, et seq.–even though the Court dismissed plaintiffs’ Wiretap Act claim as insufficiently pled. 791 F. Supp. 2d at 711-13; see also In re Zynga Privacy Litig., No. 10-CV-04680-JW (N.D. Cal. June 15, 2011) (same); Low v. LinkedIn, 2011 WL 5509848 at *6 n.1 (“There is also an argument, though not specifically advanced by Plaintiff, that the creation of a statutory right may be sufficient to confer standing on Plaintiff.”).
In addition, the Ninth Circuit recently relied on the holding in Edwards and Warth to find that the alleged violation of plaintiffs’ rights under the Stored Communications Act, the Electronic Communications Privacy Act, and the Foreign Intelligence Surveillance Act were sufficient to confer Article III standing–albeit in a case involving unique allegations of a government “dragnet” that was used to monitor the contents of plaintiffs’ and class members’ wireless communications. See Jewel v. Nat’l Sec. Agency, 2011 WL 6848406 (9th Cir. Dec. 29, 2011) (holding that alleged violations of statutory rights conferred standing). The Ninth Circuit emphasized in Jewel, however, that while the injury required by Article III may be satisfied through the alleged violation of a statutory right, the injury must nonetheless be particularized. Id. at *6.
Given the amorphous and theoretical injuries claimed by plaintiffs in many privacy class actions, we anticipate that strategic challenges to Article III standing will continue to be an important defense for companies facing data privacy and security claims (including claims asserting the alleged violation of a statutory or constitutional right), regardless of the outcome of the Supreme Court’s decision in Edwards.
Data Breach Cases
In the wake of several prominent hacking incidents, courts have seen a significant increase in data breach class actions. Plaintiffs have had somewhat greater success demonstrating Article III standing in cases involving a data security breach, and several key decisions issued last year addressed the extent to which the risk of identity theft and other alleged injuries are sufficient to confer standing on plaintiffs whose information may have been compromised in a security breach.
In late 2010, the Ninth Circuit joined the Seventh Circuit in finding that the increased risk of future harm resulting from an identified data breach could establish the injury in fact required for Article III standing. See Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010); Pisciotta v. Old Nat’l Bancorp, 499 F.3d 629 (7th Cir. 2007) (threat of future harm resulting from third-party hack into banking records was injury in fact). Krottner involved claims resulting from the theft of an unencrypted laptop allegedly containing the personal information of hundreds of Starbucks employees by an unknown party. The plaintiffs alleged that their injuries consisted of anxiety, stress, and time and expense spent monitoring their finances. One plaintiff even alleged that a bank account was opened using his social security number; the bank closed the account immediately, with no financial loss to the plaintiff. Citing the Seventh Circuit’s decision in Pisciotta, the Ninth Circuit held that there was a “credible threat of real and immediate harm stemming from the theft of a laptop containing . . . unencrypted personal data.” Id. at 1143. Observing that the threat would be “far less credible” if the laptop had not yet been stolen, the court held that the plaintiffs nonetheless had pled sufficient injury to satisfy Article III. Id.
Following the Krottner decision, one court in the Northern District of California refused to dismiss claims for lack of Article III standing where plaintiffs alleged that a hacker had exploited a security vulnerability and accessed the database of RockYou, a publisher and developer of online services and social networking applications, and copied the email and social networking login credentials of approximately 32 million registered RockYou users. Claridge v. RockYou, Inc., 785 F. Supp. 2d 855 (N.D. Cal. 2011). Although plaintiffs alleged that RockYou had failed to utilize adequate encryption to prevent intruders from accessing and reading their personally identifiable information, they were unable to point to specific harm resulting from this incident, instead claiming that they had lost the “value” of their personal information. Id. at 861. While the court recognized the need for “actionable harm or concrete, non-speculative harm,” it noted the “paucity” of controlling authority in this area, and that the “unauthorized disclosure of personal information via the Internet is itself relatively new” and raised “issues of law not yet settled in the courts.” Id. Despite its reservations, the court declined to find that the plaintiffs’ allegations were insufficient to confer standing as a matter of law. Id.
But in a significant decision in December, the Third Circuit, in Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011), departed from the approach taken by the Seventh and Ninth Circuits. In Reilly, plaintiffs filed suit against Ceridian, a payroll processor, following a 2009 security breach in which an unknown hacker gained access to employees’ personal and financial information. Although the plaintiffs asserted injuries including the increased risk of identity theft and the cost of credit monitoring services, the court found that unless plaintiffs’ “conjectures [came] true,” they had not yet suffered any injury in fact sufficient to satisfy Article III standing. Id. at 42. The Third Circuit emphasized that plaintiffs’ injuries were “dependent on entirely speculative, future actions of an unknown third-party,” while distinguishing the circumstances in Krottner and RockYou on the grounds that the security intrusions there involved “malicious,” “sophisticated” third-parties or the actual misuse of compromised data. Id.
Given the emerging and variable treatment of Article III standing requirements in data breach cases among the circuits, forum considerations will be particularly critical in litigating data breach actions. These factors should be taken into account when assessing transfer motions in any multidistrict litigation resulting from a data breach episode, and companies may also need to consider whether to enforce any venue selection clauses in light of the emerging jurisprudence (see infra Terms of Service). Litigants should consider the factors courts use to evaluate the “credibility” of future injury for Article III standing, including the sophistication of third-party hackers, the strength of encryption and protection systems employed by the defendant, the demonstrated compromise or misuse of the accessed data, or other means of quantifying the risk of harm.
* * *
Collectively, these recent decisions underscore the importance of closely examining plaintiffs’ allegations of harm when defending against a putative class action, especially a class action involving alleged privacy invasions directed to new and developing technologies–an area in which the plaintiffs’ class action bar has become especially active. Oftentimes, these suits–spurred by sensational media reports–allege widespread privacy violations that may be challenging to parse at the pleadings stage, but which are lacking in any specific or credible allegation of harm. Under such circumstances, a strong standing challenge may get the entire case dismissed at the outset and avoid the potential challenges involved in seeking to dismiss individual claims under Federal Rule of Civil Procedure 12(b)(6), which may include claims that do not require an initial showing of injury or that may require the Court to address confusing and technical allegations in the context of a one-sided pleading.
During the past year, plaintiffs have filed class actions in response to a number of data privacy and security incidents, including many involving relatively new technologies such as online social games and mobile applications. The theories pursued by plaintiffs have been varied, but frequently involve state or federal statutes prohibiting criminal hacking or other computer crimes (such as the Computer Fraud and Abuse Act, Stored Communications Act or Electronic Communications Privacy Act). Others state unfair competition claims, or common law claims ranging from trespass to invasion of privacy to negligence. Throughout the year, courts continued to grapple with how–or whether–to apply these claims to a range of highly technical facts involving the collection, use or disclosure of user data.
“Do Not Track” Cases
2011 saw a wave of filings in “do not track” cases, involving claims that defendants had intentionally collected plaintiffs’ personal data without their knowledge or permission, allegedly in ways that users would not anticipate–such as through Adobe “Flash cookies” (for behavioral advertising) or mobile applications. See Specific Media, 2011 WL 1661532 (challenging the use of Adobe “Flash cookies” and alleging violations of the Computer Fraud and Abuse Act, California Comprehensive Computer Data Access and Fraud Act, California Invasion of Privacy Act, California Consumer Legal Remedies Act, California Unfair Competition Law, trespass to personal property, and unjust enrichment); In re Google Inc. Street View Elec. Commc’ns Litig., 794 F. Supp. 2d 1067 (N.D. Cal. 2011) (challenging Google’s practice of intercepting data packets from Wi-Fi networks and alleging three causes of action–violation of the federal Wiretap Act, the state wiretap act, and the California UCL); In re Facebook Privacy Litig., 791 F. Supp. 2d 705 (N.D. Cal. 2011) (alleging violations of the Wiretap Act and the California Comprehensive Computer Data Access and Fraud Act, and fraud under Cal. Civ. Code §§ 1572 and 1573).
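For readers unfamiliar with the technology, the cookie “respawning” mechanism alleged in these complaints can be sketched in a few lines. This is an illustrative model only, not code from any case record; all names and stores here are hypothetical stand-ins for a Flash local shared object or similar secondary store:

```python
# Hypothetical sketch of cookie "respawning": a tracking ID kept in a
# secondary store (e.g., a Flash local shared object, ETag, or HTML5
# storage) that ordinary cookie controls do not reach is used to
# re-create the browser cookie after the user deletes it.
import uuid

browser_cookies = {}
flash_storage = {}  # stand-in for a Flash LSO, untouched by cookie deletion

def track(cookie_store, flash_store):
    """Return a visitor ID, recovering it from either store if present."""
    uid = cookie_store.get("uid") or flash_store.get("uid")
    if uid is None:
        uid = uuid.uuid4().hex  # mint an ID for a genuinely new visitor
    # Write the ID back to both stores, "respawning" any deleted cookie.
    cookie_store["uid"] = uid
    flash_store["uid"] = uid
    return uid

first = track(browser_cookies, flash_storage)
browser_cookies.clear()   # user clears browser cookies...
second = track(browser_cookies, flash_storage)
assert first == second    # ...but is re-identified via the Flash store
```

The sketch shows why plaintiffs framed these suits as circumvention of user choice: clearing browser cookies, the control most users know, does not disturb the secondary store from which the identifier is restored.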
In such “do not track” cases, plaintiffs have struggled to articulate even the facial elements of their claims. See, e.g., Specific Media, 2011 WL 1661532 (dismissing plaintiffs’ complaint for lack of Article III standing but also noting severe substantive defects in each of the six claims asserted by plaintiffs); In re iPhone Application Litig., 2011 WL 4403963 (same); Bose v. Interclick, Inc., No. 10-cv-09183-DAB, 2011 WL 4343517 (S.D.N.Y. Aug. 17, 2011) (dismissing plaintiff’s Computer Fraud and Abuse Act class action claim and holding that plaintiff failed to quantify any damage or loss, as defined by the statute); Google Street View, 794 F. Supp. 2d 1067 (dismissing plaintiffs’ California UCL claim for failure to satisfy California’s UCL standing requirements, which require the plaintiff to establish the loss of money or property).
But it was not all bad news for plaintiffs in these cases, and plaintiffs in several “do not track” cases succeeded in overcoming motions to dismiss on at least one of their claims. For example, in Bose v. Interclick, Inc., the court permitted plaintiff’s challenge to the use of Adobe Flash cookies to track user information for behavioral advertising purposes to go forward on state law claims based on New York General Business Law Section 349 (deceptive business practices) and trespass to chattels. The court held that even though plaintiff had failed to plead a cognizable economic injury, an alleged “privacy violation” was sufficient to state a claim under Section 349. 2011 WL 4343517 at *9. With respect to the trespass claim, the court held that plaintiff’s generic allegations of harm to her computer were “arguably sufficient to survive a motion to dismiss.” Id. As another example, the plaintiffs in Google Street View were able to survive a motion to dismiss their Wiretap Act claim by convincing the court that the data packets (which included plaintiffs’ SSID information, MAC address, usernames, passwords, and personal emails) collected by the defendant’s “wireless sniffer” from unprotected wireless networks were not “readily accessible to the general public” for purposes of the Wiretap Act. 794 F. Supp. 2d at 1084.
Finally, in Pineda v. Williams-Sonoma Stores, Inc., 51 Cal. 4th 524 (2011), the California Supreme Court construed “personal information” under the Song-Beverly Credit Card Act to include ZIP codes, breathing new life into a series of lawsuits alleging that retailers collected ZIP codes as a condition to accepting payment. 51 Cal. 4th at 531-36. However, the scope of this ruling appears limited: on August 24, 2011, the San Francisco Superior Court dismissed claims against Craigslist for allegedly violating the Song-Beverly Credit Card Act, finding that the Act “on its face does not apply to online transactions.” Gonor v. Craigslist, Inc., No. CGC-11-511332 (Cal. Super. Ct. Aug. 24, 2011), following Saulic v. Symantec Corp., 596 F. Supp. 2d 1323 (C.D. Cal. 2009).
Several cases filed in the past year have involved claims that internal identifiers used by online publishers to organize and deliver user content (such as a Facebook user ID) or device identifiers (such as smartphone serial numbers) are the functional equivalent of personally identifiable information given their potential ability to be tied to identifying information, such as a name (see also infra discussion of proposed FTC revisions to COPPA, expanding definition of “personal information” to include persistent identifiers). Keying off of this theory, plaintiffs have targeted companies that disclosed personal identifiers to third parties (often through routine technical protocols such as HTTP referrers), arguing that this effectively disclosed users’ personal information to third parties. So far, courts have not been receptive to claims involving the alleged disclosure of user IDs or device identifiers to advertisers or other third parties. See, e.g., In re Facebook Privacy Litig., 2011 WL 6176208 (dismissing claims centered around defendant’s alleged transmission of user IDs and usernames to third-party advertisers); In re Zynga Privacy Litig., No. 5:10-CV-04680-JW (same); In re iPhone Application Litig., 2011 WL 4403963 (dismissing claims based on, among other things, defendants’ transmission of unique mobile phone identifiers); see also Hines v. OpenFeint, Inc., No. 11-cv-03084-EMC (N.D. Cal. Dec. 5, 2011) (plaintiffs voluntarily dismissed claims against OpenFeint brought on the basis of researcher reports that its mobile social gaming service transmitted information such as the user’s mobile phone identifier and Facebook user ID in an unencrypted format, following a motion to dismiss prepared by Gibson Dunn).
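The “routine technical protocol” at the center of these referrer cases can be illustrated briefly. The sketch below is a simplified model, not drawn from any complaint, and all URLs and names are hypothetical: when a page whose URL embeds a user ID loads a third-party resource, browsers by default transmit that full URL in the HTTP Referer header, so the third party receives the identifier without any deliberate disclosure step.

```python
# Illustrative sketch of identifier leakage via the HTTP Referer header.
# All URLs and hostnames are hypothetical examples.
from urllib.parse import urlparse, parse_qs

def build_ad_request(page_url: str, ad_server: str) -> dict:
    """Model the headers a browser sends when a page loads a third-party ad."""
    return {
        "Host": ad_server,
        # By default, browsers include the full URL of the embedding page.
        "Referer": page_url,
    }

# A hypothetical profile page whose URL carries the user's numeric ID.
page = "https://social.example.com/profile.php?id=100004231"
headers = build_ad_request(page, "ads.example.net")

# The third-party ad server can recover the identifier from the Referer.
leaked = parse_qs(urlparse(headers["Referer"]).query).get("id")
print(leaked)  # → ['100004231']
```

Whether such a transmission counts as disclosing “personal information” depends, as the cases above suggest, on whether the identifier can in practice be tied back to a named individual.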
“Privacy” of Social Media
Last year, courts also considered the extent to which plaintiffs could challenge various practices involving the use of information they elected to share on social networking sites like Twitter and Facebook. In Cohen v. Facebook, Inc., Judge Seeborg rejected claims that Facebook’s use of plaintiffs’ Facebook profile pictures on other users’ pages to promote the “Friend Finder” service constituted a violation of their statutory rights of publicity. Cohen v. Facebook, Inc., 798 F. Supp. 2d 1090 (N.D. Cal. 2011) (“Cohen I”); Cohen v. Facebook, Inc., No. 10-cv-05282-RS, 2011 WL 5117164 (N.D. Cal. Dec. 27, 2011) (“Cohen II”). The court concluded that the plaintiffs had not alleged injury (a required element of their claim) sufficient to survive a motion to dismiss, because plaintiffs’ “names and likenesses were merely displayed on the pages of other users who were already plaintiffs’ Facebook ‘friends’ and who would regularly see, or at least have access to, those names and likenesses in the ordinary course of using their Facebook accounts.” Cohen II, 2011 WL 5117164 at *3. But one court reached a different conclusion with respect to the facial sufficiency of claims relating to Facebook’s republication of plaintiffs’ “likes” (and related use of plaintiffs’ names and profile pictures) in “Sponsored Stories,” which are generated when Facebook users “like” an organization, company or cause. See Fraley v. Facebook, Inc., No. 11-cv-001726-LHK, 2011 WL 6303898 (N.D. Cal. Dec. 16, 2011) (finding Facebook’s alleged use of “[plaintiffs’] names, photographs, and likenesses . . . in paid commercial endorsements targeted . . . at other consumers, without their consent” sufficient to allege injury).
As with Article III standing challenges, plaintiffs had greater success overcoming motions to dismiss in claims arising from third-party hacking or other data breach incidents. See Anderson v. Hannaford, 659 F.3d 151 (1st Cir. 2011) (plaintiffs’ negligence and implied contract claims survived a motion to dismiss because mitigation damages, such as replacement credit card costs and identity theft insurance, were sufficient to allege injury where credit card information was obtained by a third-party hacker and there was evidence that the hacked information was used for an improper purpose); RockYou, 785 F. Supp. 2d 855 (plaintiffs’ claims for breach of contract, breach of implied contract, negligence, and negligence per se survived a motion to dismiss because the personally identifiable information that was hacked constituted valuable property, but noting that the plaintiffs could have difficulty proving their damages theory).
The courts’ decisions in these cases appeared to turn, in part, on concerns that defendants failed to use adequate measures to protect sensitive client information. See, e.g., id. at 861 (noting plaintiffs’ allegations that RockYou failed to employ “commercially reasonable” methods for safeguarding personally identifiable information); Anderson, 659 F.3d at 164 (pointing out that the third-party hacking was “a large-scale criminal operation conducted over three months”).
Defendants confronting data privacy and security class actions continued to invoke contractual provisions–such as forum selection clauses or limitations on liability–contained in their terms of service.
During 2011, courts frequently were reluctant to enforce provisions contained in online terms of service, particularly at the outset of litigation. For example, in Harris v. Comscore, 2011 WL 4738357, at *2 (N.D. Ill. 2011), the Northern District of Illinois refused to enforce a forum selection clause in a class action challenging Comscore’s alleged use of “deep packet inspection” to collect plaintiffs’ personal information, finding that the hyperlink to Comscore’s forum selection clause was not “readily apparent.” The court observed that valid forum selection clauses could appear in click-through agreements, but that for such clauses to be enforceable, they needed to be “immediately available and obvious.” Id. at *2. See also Hoffman v. Supplements ToGo Management, LLC, 419 N.J. Super. 596, 607 (App. Div. 2011) (holding forum selection clause in online terms unenforceable because it was not visible unless a user scrolled down to a concealed portion of the defendant’s web page and thus failed to provide “fair and forthright” notice to plaintiffs).
On the other hand, courts began to expand the Supreme Court’s ruling in AT&T Mobility LLC v. Concepcion, 131 S.Ct. 1740 (2011) to the online context. For example, one court in the Northern District of California enforced an arbitration clause in Zynga’s online terms of service, dismissing claims that Zynga had engaged in unfair and deceptive practices in certain in-game advertising under the terms of service applicable to Zynga’s online social games. Swift v. Zynga, No. 09-cv-05443-EDL (N.D. Cal. Aug. 4, 2011) (order granting Zynga’s motion to compel arbitration).
Defendants faced challenges when seeking to avoid liability for data privacy and security claims based on their agreements with end users at the pleading stage. Given frequent changes in online terms, courts were often reluctant to assess which contract versions may have applied to plaintiffs or class members in the absence of a complete record. For example, in Cohen I, the court refused to consider Facebook’s terms of service when determining whether the Friend Finder service was authorized, in part because it did not think it proper to consider the terms at the motion to dismiss stage, and in part because “substantial questions would remain in this instance as to when various versions of the documents may have appeared on the website and the extent to which they necessarily bound all plaintiffs.” 798 F. Supp. 2d at 1094 (noting that even if it was “theoretically . . . possible to apply the [t]erms documents against plaintiffs at the motion to dismiss stage,” Facebook did not show that its terms were sufficient to insulate it from plaintiffs’ claims).
Even where courts did consider online terms of service at the motion to dismiss stage, they typically concluded that fact issues prevented the court from resolving the claims based on the terms of the user agreement. See, e.g., RockYou, 785 F. Supp. 2d at 865 (terms of service not dispositive on plaintiffs’ contract claims at motion to dismiss stage where terms provided that RockYou used secure servers, and a factual question existed as to whether the servers were secure); Fraley, 2011 WL 6303898, at *15 (finding the question of whether Facebook’s terms authorized use of information in Social Stories to be a disputed question of fact); but see In re iPhone App. Litig., 2011 WL 4403963, at *7-8 (noting that in any amended complaint the plaintiffs must explain why Apple’s terms of service would not bar privacy claims).
We expect that the contractual provisions contained in online terms of service will be of increased importance as data privacy and security cases advance into later stages of litigation.
The FTC was extremely active in 2011 in pushing to expand and apply its Section 5 authority to combat “unfair and deceptive” conduct to a broad range of practices and technologies related to consumer privacy.
Reflecting a particular focus on user information shared on social networking sites, the FTC in 2011 announced and/or finalized settlements with Twitter, Google, and Facebook, in that order. Although each of these settlements contains certain unique provisions, the cornerstone of the FTC’s complaints against these companies was that consumers were misled as to how their information would be protected, shared, and used.
The FTC pursued Twitter following a hacking incident in which third parties obtained, among other things, unauthorized access to non-public user information and tweets that consumers had designated as private, alleging that the company misled users about the extent to which the company protects the security, privacy, and confidentiality of consumer information. With respect to Google, the FTC charged that the company’s launch of Google Buzz was misleading and that the company, among other things, improperly used data supplied by users solely for use in Google’s Gmail product to launch its Buzz social network. Against Facebook (represented by Gibson Dunn), the FTC’s core allegations related to changes in how certain pieces of profile information were shared. Key takeaways from these settlements are discussed below.
The Google and Facebook consent orders are the first FTC settlements to require the implementation of comprehensive privacy programs that apply to the development of new features and products and also require privacy assessments by an independent third party. Although the FTC frequently has required companies in data security cases to implement comprehensive security programs (and to submit to ongoing security audits)–as it did in the Twitter order–the Google and Facebook consent orders represent the FTC’s first attempt to expand this requirement to encompass privacy.
These moves reflect the FTC’s recent emphasis on integrating privacy into the product development process, which the FTC refers to as “Privacy by Design.” (This is evocative of Article 20(1) of the proposed EU Data Privacy Regulation, which would impose an obligation on all data controllers targeting EU consumers to engage in “privacy by design.”) The FTC’s Initial Privacy Report, for example, states that companies “should promote consumer privacy throughout their organizations and at every stage of the development of their products and services” and “should maintain comprehensive data management procedures throughout the life cycle of their products and services.” FTC, Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Business and Policymakers, Preliminary FTC Staff Report at ix (Dec. 2010). Given the repeated emphasis of this principle in the FTC’s public discourse, privacy report, and recent consent orders, it is likely that these provisions will become more standard going forward.
Over the past year, the FTC staff has seemingly renewed efforts to leverage its Section 5 authority to impose a prescriptive requirement on companies to obtain the express affirmative consent of their users before making retroactive changes to their privacy policies.
The FTC staff issued a more definitive endorsement of express affirmative consent in its 2009 report on self-regulatory principles for online behavioral advertising. FTC, FTC Staff Report: Self-Regulatory Principles for Online Behavioral Advertising (Feb. 2009). The report advised that “before a company can use previously collected data in a manner materially different from promises the company made when it collected the data, it should obtain affirmative express consent from affected consumers.” Id. at 46. The FTC Staff reiterated this view a year later, stating “companies must provide prominent disclosures and obtain affirmative express consent before using consumer data in a materially different manner than claimed when the data was collected.” Protecting Consumer Privacy in an Era of Rapid Change at 76.
Several months later, the FTC obtained a similar–though far narrower–affirmative express consent commitment from Facebook as part of a settlement resolving various concerns relating to the company’s privacy practices, including allegations related to 2009 changes designating certain categories of information as “publicly available.” Complaint, In re Facebook, Inc., FTC File No. 092-3184 (Nov. 29, 2011). The FTC alleged that Facebook failed to clearly inform users of the scope of these changes when migrating users through a mandatory “Privacy Wizard” that users completed before these changes were implemented. Notably, in contrast to the Google order, the “express affirmative consent” provision proposed in the Facebook order is limited to non-public information and applies only when such information is shared in a manner that materially exceeds restrictions the user imposed using a Facebook privacy setting. Proposed Agreement Containing Consent Order, In re Facebook, Inc., FTC File No. 092-3184 (Nov. 29, 2011). The Facebook order also features an express carve-out for information that is re-shared by users and others on Facebook and includes language allowing the company to seek modification of the order “to address relevant developments [related to] . . . technological changes and changes in methods of obtaining . . . consent.” Id.
In addition to these two enforcement actions, the FTC also recently attempted (unsuccessfully) to persuade the court overseeing the Borders Group, Inc. bankruptcy to forbid Borders from selling detailed customer records to Barnes & Noble without first obtaining those customers’ express affirmative consent. In a letter dated September 14, 2011, David Vladeck, Director of the FTC’s Bureau of Consumer Protection, urged the Consumer Privacy Ombudsman assigned to the bankruptcy proceedings to recommend that any transfer of customer records in connection with a bankruptcy sale only occur with the customers’ express affirmative consent. The bankruptcy court rejected the FTC’s recommendation, authorizing the sale subject to a limited period during which Borders’ customers had the option to “opt out” of having their information disclosed.
Based on the FTC’s actions in the Google, Facebook, and Borders cases and Gibson Dunn’s ongoing representation of several companies in investigations and enforcement actions, we expect the FTC staff to continue to vigorously pursue express affirmative consent provisions in future consent orders, particularly in cases where the target of the enforcement action is alleged to have taken actions that violate prior commitments to consumers regarding the privacy and use of their personal information.
During the past year, the FTC has also sought to dramatically expand the scope of information covered by consent decrees in the privacy and data security fields. In the past, the scope of the information covered by these orders has typically been limited to “individually identifiable information from or about a consumer,” which typically included traditional categories of personally identifiable information–i.e., information that in and of itself operates to identify unique individuals (such as a first and last name, email address or social security number)–as well as other information that is combined with specifically enumerated categories of personally identifiable information.
During the past year, the FTC has entered into several consent decrees that reach a significantly broader scope of personal information. For example, though the scope of the FTC’s consent order with Twitter is limited to “individually identifiable information,” the term is specifically defined to include IP addresses and other persistent identifiers (e.g., mobile device identifiers), even though IP addresses and persistent identifiers have historically not been considered “individually identifiable information” unless they are associated with other categories of individually identifiable information such as a name or street address. Decision and Order, In re Twitter, FTC File No. 092-3093 (Mar. 11, 2011). Similarly, both the recent Google and Facebook consent decrees utilize a definition of “covered information” that would include all “information from or about an individual consumer,” regardless of whether the information is individually identifying or not. Agreement Containing Consent Order, In re Google Inc., FTC File No. 102-3136; Proposed Agreement Containing Consent Order, In re Facebook, Inc., FTC File No. 092-3184 (The Facebook order also contains a separately defined term, “nonpublic user information,” that is used to narrow the scope of covered information for key provisions, such as the notice and consent requirements.). See also Agreement Containing Consent Order, In re Chitika, Inc., FTC File No. 102-3087 (June 17, 2011) (expanding the definition of “data collected” to encompass “any information or data received from a computer or device”).
The FTC’s attempts to expand the information covered under recent consent orders are consistent with recent statements expressing the Staff’s view that “the traditional distinction between [personally identifiable information and non-personally identifiable information] has eroded and that information practices and restrictions that rely on this distinction are losing their relevance.” Protecting Consumer Privacy in an Era of Rapid Change at 34-35. Accordingly, the Staff’s proposed framework for business is “not limited to those who collect personally identifiable information,” but rather “applies to those commercial entities that collect data that can be reasonably linked to a specific consumer, computer, or device.” Id. at 43. Consistent with this view, we expect the FTC staff to continue to push to expand the scope of information covered by privacy and data security consent orders in 2012.
The Children’s Online Privacy Protection Act of 1998 (COPPA), effective April 21, 2000, requires companies that operate websites or online services to obtain parental consent before collecting, using, or disclosing “personal information” from children under age 13, if (1) those companies’ websites or services are directed to children under 13, or (2) they have actual knowledge that they are collecting personal information from children under 13. 16 C.F.R. § 312.2. Currently, personal information covered by the Act includes (1) individually identifiable information, such as full name, home address, email address, telephone number, social security number; (2) persistent identifiers, such as customer numbers collected through cookies or processor serial numbers, if they are associated with individually identifiable information; and (3) information concerning the child or the parents of that child that the website operator collects online from the child and combines with the types of information covered by (1) or (2). Id. The FTC has been especially active in pursuing enforcement actions under COPPA during the past year, with a particular focus on new and emerging mobile and online technologies.
Recent FTC Orders Related to COPPA
Over the past year, the FTC for the first time targeted mobile applications, online virtual worlds, and social networking websites directed to children for alleged COPPA violations. The orders entered in the past year clarify that the FTC interprets “Web site located on the Internet,” the language employed by the Act, broadly to cover virtually any content that children can access through a browser on a computer or on a mobile device. See Children’s Online Privacy Protection Rule, 76 Fed. Reg. 59,804, 59,807 (proposed Sept. 27, 2011) (to be codified at 16 C.F.R. pt. 312). The FTC’s consent orders in these cases provide important guidance on the potential application of COPPA to online and mobile services that may collect information from children under 13.
Last year, the FTC also pursued the first enforcement action under COPPA against a mobile application developer, W3 Innovations, LLC (W3). United States v. W3 Innovations, LLC, Case No. CV-11-03958-PSG, FTC File No. 102 3251 (N.D. Cal. Aug. 12, 2011). W3 operated approximately forty apps–through which it collected and maintained thousands of children’s email addresses and allowed children to publicly post information, including personal information, on message boards–without providing proper notice or obtaining verifiable parental consent. The settlement imposed a $50,000 penalty on the mobile app company and, among other requirements, required the developer to delete all personal information collected in violation of COPPA. This type of requirement could have significant implications for any company that depends on continued access to an installed user base that includes children under 13, since the deletion of all personal information would include account information and content posted by those users–effectively requiring the company to delete the accounts for all users under 13.
Finally, in United States v. Godwin, the FTC targeted a social networking website directed to children ages 7 to 14. Case No. 11-cv-03846-JOF, FTC File No. 1123033 (N.D. Ga. Nov. 8, 2011). The FTC charged Godwin, the operator of www.skidekids.com (a website that markets itself as the “Facebook and Myspace for kids”) with COPPA violations for collecting personal information from children without obtaining prior parental consent and with violating the FTC Act for making allegedly deceptive claims about its information collection practices by stating that children must provide a parent’s valid email address to register, when in fact the site permitted children to register without a parent’s email address. The Order imposed a $100,000 civil penalty (although all but $1,000 of this amount could be suspended if Godwin provides truthful information about his financial condition and complies with the Order’s oversight provision). Notably, the FTC’s action targeted the operator of the website in his individual capacity, rather than his company–a trend that may be of particular interest to app developers and start-up tech companies. Like W3, Godwin was also ordered to destroy personal information obtained in violation of COPPA and, for a limited time, to link to educational material and to retain an online privacy professional to oversee any COPPA-covered websites.
Proposed Revisions to COPPA
In response to the dramatic increase in mobile and online products used by children, the FTC proposed a number of substantial amendments to the COPPA Rule to modify its definitions and requirements regarding provision of parental notice, parental consent mechanisms, security of children’s personal information, and oversight of self-regulatory safe harbor programs. Children’s Online Privacy Protection Rule, 76 Fed. Reg. 59,804. FTC Chairman Jon Leibowitz explained that the proposed revisions to the law were prompted by “an explosion in children’s use of mobile devices, the proliferation of online social networking and interactive gaming.”
The most significant and controversial proposed change to COPPA is an expanded definition of “personal information” that dramatically enlarges the scope of information covered by the Act. For the first time, the FTC’s proposed definition would include geolocation information. In addition, the FTC would expand the scope of persistent identifiers covered by the Act to include any persistent identifier used for functions other than the operator’s own internal operations (potentially sweeping in all persistent identifiers contained in third-party cookies, for example, even when they do not contain individually identifiable information). The tech industry has urged the FTC to abandon this revision, citing a significant impact on innovation as well as privacy. Web-based services, such as Google and Yahoo, and telecommunications carriers, such as AT&T and Verizon, caution that treating persistent identifiers like personal information “could negatively impact the way many web services have been designed to function, and would have a devastating impact on the advertising that supports the flow of free online content.” In addition to negatively impacting businesses, the proposed change has the potential to undermine COPPA’s confidentiality objectives: “[p]lacing identifiers on a level playing field with other personal information could result in the collection of more, rather than less, personal information” and could “reduce incentives for businesses to take privacy-enhancing steps to anonymize or de-identify the child’s personal information.”
Many companies have expressed concern with the FTC’s notice and consent proposals, noting that “in an environment where many companies offer services on the same webpage . . . it is becoming increasingly unworkable for each operator to provide notice and obtain parental consent.” There is particular concern over the FTC’s proposal to eliminate “e-mail plus,” which many sites and services have integrated into their products to ensure compliance with COPPA. Eliminating what had long been a widely accepted method of consent “pulls the rug out from under them and creates major short-term marketplace uncertainty.” The Center for Democracy and Technology has criticized the FTC’s proposed new method of consent via government-issued identification, warning that the method only proves the operator has received someone’s ID, as it cannot verify that the person on the ID is in fact a parent of the minor.
The FTC also has proposed revisions intended to impose greater responsibility on companies that disclose children’s personal information to third parties (such as platforms that disclose information through APIs to third-party developers), as well as revisions intended to increase oversight over self-regulatory industry groups that implement safe harbor programs under COPPA.
The comment period on the proposed changes closed on December 23, 2011, and it is unclear how the FTC will respond to these concerns. The outcome of these issues will have significant implications for online and mobile developers and services, particularly the increasing number that are used by children.
2011 demonstrated that cybercrime continues to be a priority for U.S. law enforcement officials. In recent years, the FBI has repeatedly stated that cybercrime is the agency’s number three priority after counterterrorism and counterintelligence, and FBI officials, including Director Robert Mueller, have emphasized that they expect computer-related threats to increase in the future. The past year saw a number of developments relating to criminal prosecutions of cybercrime, detailed below, including new enforcement strategies and challenges to the Computer Fraud and Abuse Act (“CFAA”), 18 U.S.C. § 1030. The CFAA establishes federal criminal penalties and a civil cause of action for unauthorized computer access, and is the primary federal statute used to prosecute computer hacking and misuse cases. Among its other provisions, the CFAA imposes criminal liability on any person who “intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains . . . information from any protected computer,” or who “intentionally accesses a protected computer without authorization.” See 18 U.S.C. §§ 1030(a)(2)(C), (a)(5)(C), and (g).
United States v. Rodriguez, 628 F.3d 1258 (11th Cir. 2010)
In a decision issued on December 27, 2010, the Eleventh Circuit upheld the conviction of Roberto Rodriguez, a former employee of the Social Security Administration (“SSA”), for criminal violations of the CFAA based upon his unauthorized use of SSA databases to access individuals’ personal records for non-business reasons. The Eleventh Circuit ruled that, although Rodriguez was authorized to access the SSA databases, he exceeded his authorized access by using the databases for non-business reasons even after being notified of SSA policies prohibiting employees from such use and warning them of potential criminal penalties. Mr. Rodriguez was sentenced to one year’s imprisonment. This case is among the latest examples of the criminal prosecution of “insiders”–employees convicted of exceeding authorized access to their employer’s computer network–and stands as a reminder of that ever-present threat.
United States v. Kramer, 631 F.3d 900 (8th Cir. 2011)
Neil Kramer pleaded guilty to transporting a minor in interstate commerce with the intent to engage in criminal sexual activity and acknowledged using his cellular telephone to communicate with the victim for a six-month period prior to the offense. The Eighth Circuit affirmed the district court’s conclusion that the phone was a “computer” within the meaning of the CFAA that was used to facilitate the offense, and Kramer’s actions therefore merited an enhancement to his prison sentence under the federal sentencing guidelines. In reaching this conclusion, the Eighth Circuit confirmed the breadth of the CFAA, noting that the CFAA’s definition of a computer is “exceedingly broad” and includes “any device that makes use of a[n] electronic data processor,” regardless of whether the device is connected to the Internet.
United States v. Scheinberg, No. 10-CR-336 (S.D.N.Y. Mar. 10, 2011)
On April 15, 2011, the government announced charges against the founders of the three largest Internet poker companies operating within the United States–PokerStars, Full Tilt Poker, and Absolute Poker–alleging that the companies engaged in bank fraud, money laundering, and violations of gambling laws including the Unlawful Internet Gambling Enforcement Act (“UIGEA”). The indictment charged that the defendants, over a nearly five-year period, created phony merchants to process payments to their sites after the passage of the UIGEA prompted banks and payment processors to refuse to process their transactions. Further, the FBI seized the domain names of five related websites, preventing user access to the sites. This case and a related civil lawsuit brought by the government represent the Department of Justice’s most significant recent attempt to pursue Internet gambling operators. Relatedly, the DOJ’s Office of Legal Counsel issued an opinion on December 23, 2011, stating that the Federal Wire Act only applies to interstate transmissions of wire communications that relate to “a sporting event or contest.” This is contrary to the Criminal Division’s previous view that the U.S. operations of virtually any type of Internet gambling site would violate the Wire Act. Thus, it is likely that future criminal prosecutions will rely on the UIGEA as in Scheinberg.
United States v. Nosal, 642 F.3d 781 (9th Cir. 2011), en banc rehearing granted
In April 2011, a three-judge panel of the Ninth Circuit reversed the district court’s grant of David Nosal’s motion to dismiss the indictment, holding that an employee “exceeds authorized access” within the meaning of the CFAA when he or she violates an employer’s computer access restrictions, including use restrictions. Similar to a growing number of employee disloyalty cases involving confidential computer data, Nosal, a former employee of the executive search firm Korn/Ferry International, allegedly obtained confidential information from one of his employer’s databases, which he planned to use to establish his own competing executive search firm. Critics of the decision have argued that the Ninth Circuit’s ruling broadens the scope of the CFAA and that, under Nosal, even minor violations of an employer’s computer use policies would create the potential for criminal liability. The Ninth Circuit agreed to review the case en banc and oral arguments were heard in December 2011, but no decision has yet been issued.
United States v. Fricosu, No. 10-CR-00509-01-REB (D. Colo. May 6, 2011)
On January 23, 2012, in a case implicating the law governing search and seizure of computer data, including Fourth and Fifth Amendment issues, a district court judge ordered Ramona Fricosu, who was charged with perpetrating a complex mortgage fraud, to unlock the hard disk of her seized laptop computer, which utilized full hard disk encryption. The judge ruled that compelling the production of documents stored on the encrypted hard drive did not violate the Fifth Amendment privilege against compelled testimony and also found that the government had met its burden to show that Fricosu either owned the laptop or was its primary user. Fricosu would not necessarily be required to reveal her actual password in order to decrypt the computer data–for example, she could enter the password into the laptop directly. The decision in this case is a significant contribution to the growing body of case law on this issue.
Several previous cases involving similar issues have been decided in recent years. In In re Boucher, No. 06-MJ-91, 2009 WL 424718 (D. Vt. Feb. 19, 2009), the first case to test whether an individual could be constitutionally compelled to provide an encryption key to decrypt computer data, immigration officials seized a laptop held by Canadian national Sebastien Boucher as he entered the United States from Canada, after the officials detected images that appeared to contain child pornography on the laptop. The laptop was subsequently shut down, leaving its encrypted files inaccessible without a password. A magistrate judge ruled that compelling Boucher to provide the password would compel testimonial evidence in violation of the Fifth Amendment privilege against self-incrimination, but was later overruled by a district court judge, and Boucher subsequently decrypted the laptop.
However, later cases took positions contrasting with Boucher. In United States v. Kirschner, No. 09-MC-50872, 2010 WL 1257355 (E.D. Mich. Mar. 30, 2010), a district court judge quashed a subpoena to compel the defendant to reveal his password to an encrypted computer, noting that the government was seeking incriminating testimony from the defendant by compelling him to reveal his password, which was unconstitutional under the Fifth Amendment. And in United States v. Rogozin, No. 09-CR-379, 2010 WL 4628520 (W.D.N.Y. Nov. 16, 2010), a magistrate judge suppressed evidence from a seized laptop because the defendant had not been notified of his Miranda rights before law enforcement officials asked for his laptop password in an oral interview. The magistrate judge ruled that the defendant’s response to the questions of federal agents constituted incriminating testimony.
In Fricosu, prosecutors argued that the encrypted laptop is a form of physical evidence and that failing to compel Fricosu to decrypt the data would permit future criminal defendants to evade search warrants through the increasingly common use of computer encryption. Fricosu and an amicus curiae brief filed by the Electronic Frontier Foundation, citing Kirschner, Rogozin, and the magistrate judge’s decision in Boucher, argued that the password is a form of knowledge and that compelling Fricosu to decrypt the data would violate her Fifth Amendment privilege against self-incrimination.
United States v. Pu, No. 11-CR-00699-1 (N.D. Ill. Oct. 10, 2011)
In yet another employee disloyalty case, Yihao Pu, a former employee of an asset management firm, was charged with criminal theft of trade secrets for using unauthorized computer software on his employer’s computer to circumvent access restrictions and obtain confidential trading information. The complaint alleged that the compromised information would give a significant advantage to competitive businesses and that trades using that information would undermine the firm’s trading strategies. The fact that this case was prosecuted as a trade secrets case, rather than as a computer hacking case under the CFAA, serves to demonstrate the variety of tools federal prosecutors have at their disposal in pursuing cybercrime. As of this writing, Pu is free on bail and awaiting indictment.
United States v. Dotcom, No. 12CR3 (E.D. Va. Jan. 5, 2012)
On January 19, 2012, the Department of Justice announced that it had charged Megaupload, an online file-sharing website, and seven individuals with operating an “international organized criminal enterprise” engaged in racketeering, money laundering, and copyright infringement. In its indictment, the government alleged that Megaupload generated more than $175 million in revenue and led to more than $500 million in damage to copyright holders. Megaupload allegedly employed a business model designed to promote copyright infringement, offering financial incentives to users who uploaded infringing content and failing to terminate the accounts of users it knew had engaged in copyright infringement. A district court judge ordered the seizure of 18 domain names affiliated with Megaupload, and four of the individual defendants were arrested in New Zealand. The case continues the trend of seizing domain names and shutting down websites as an enforcement tool and highlights the level of international cooperation in cybercrime investigations.
While privacy and information security are invariably hot topics in the political arena, 2011 proved to be a uniquely active year in this area, as many proposals tackling cybersecurity, personal privacy, online infringement and data breach notification were introduced at the federal level. While the prospects of the following bills are difficult to predict with certainty, some of the most notable proposals are discussed below.
SOPA and PIPA are the House and Senate versions of what were easily the most controversial pieces of privacy-related legislation introduced in 2011, both intended to combat “rogue websites” committed to online piracy.
Representative Lamar Smith (R-TX) introduced the House version of the bill, H.R. 3261, known as SOPA, on October 26, 2011. The bill would, in relevant part, permit the Attorney General or an injured holder of an intellectual property right to sue a website owner, website operator, or domain name registrant of a “foreign infringing site” committing or facilitating the criminal violation of a U.S. user’s intellectual property rights. The Attorney General could seek court orders to have ISPs block access to the infringing site, block payment network providers from completing transactions with the site, prevent advertisers from continuing to advertise on the site, and have search engines take reasonable measures to prevent the site from being served as a direct link, within five days of receipt of the order. Individual rights holders could seek similar injunctive relief after a two-step notification procedure (restricting payment and advertising, but not ISPs or search engine results).
Senator Patrick Leahy (D-VT) introduced the Senate version of the bill, S. 968, known as the PROTECT IP Act or PIPA, on May 12, 2011. Like SOPA, PIPA would give the Attorney General or an injured copyright or trademark holder the right to commence an action against website owners or operators, or against domain name registrants, if either were “dedicated to infringing activities,” meaning that the site has “no significant use other than engaging in, enabling, or facilitating” copyright or trademark infringement, or “is designed, operated, or marketed by its operator or person operating in concert with the operator, and facts or circumstances suggest is used, primarily as a means for engaging in, enabling, or facilitating” the same. Another portion of the bill would also permit the Attorney General or injured plaintiffs to seek court orders to compel ISPs to block allegedly infringing websites’ domain names or web addresses, if the website or domain name has certain connections to the United States, conducts business directed to United States residents, and harms holders of United States intellectual property rights.
The legislation has inspired robust debate. Supporters of the bills view the legislation as a targeted means to combat rogue websites dedicated to counterfeiting and piracy, particularly those based abroad. The bills inspired a groundswell of opposition from supporters of the technology sector, including many who had not previously been involved in the political process. Critics of the bills raised concerns about the censorship of websites hosting infringing content, the chilling of innovation, and threats to user privacy.
As a result of the flurry of protests in mid-January 2012 (political activism which is, in and of itself, a notable trend), several former House and Senate co-sponsors of the bills pulled their support, leaving the future of both bills in doubt. A scheduled vote to bring PIPA to the Senate floor for debate on January 24, 2012 was postponed indefinitely on January 20, 2012. SOPA was referred to the House Judiciary’s Subcommittee on Intellectual Property, Competition and the Internet, with hearings and markup sessions being held through December 2011.
Opponents of PIPA, led by Senator Ron Wyden (D-OR), introduced the OPEN Act, S. 2029, on December 17, 2011. The OPEN Act–which would amend the Tariff Act of 1930–would authorize the International Trade Commission (rather than the Justice Department) to issue cease and desist orders against websites dedicated to infringing activity. The orders would restrict payments from being made to, and advertisements from being placed on, infringing websites served with such orders, “as expeditiously as reasonable.”
Senator Wyden has reportedly promoted the OPEN Act as an alternative to SOPA and PIPA that would achieve the same goals “without the collateral damage.” Critics of the OPEN Act, however, argue that SOPA and PIPA may be more effective at combatting piracy sites that make money from foreign advertisers, and that the OPEN Act would give the executive branch too much power to pardon foreign websites for mere “policy reasons.”
The OPEN Act was referred to the Senate Finance Committee.
The following three related bills have all been reported out of committee and are awaiting Senate consideration.
Personal Data Privacy and Security Act (including Proposed Amendments to Computer Fraud and Abuse Act)
Senator Patrick Leahy (D-VT) introduced the Personal Data Privacy and Security Act, S. 1151, on June 7, 2011. The bill would create several new federal crimes for unauthorized access to personal information, pertaining to hacking, identity theft, and security breaches. The bill would also require certain government agencies and private business entities that use, access, transmit, store, dispose of, or collect personally identifiable information to establish certain security measures, and provide data breach notices to affected individuals.
Entities engaging in interstate commerce that collect, access, transmit, use, store, or dispose of sensitive, personally identifiable information on 10,000 or more United States persons would have to establish a data privacy and security program designed to ensure the security of such information. Covered entities would be required to conduct periodic risk assessments of their security programs and to train employees on implementing those programs within a year of the law’s enactment. Financial institutions regulated by the Gramm-Leach-Bliley Act and HIPAA-regulated entities would be exempt from these requirements because those statutes provide similar rules. The bill includes steep civil and criminal penalties.
Notably, the bill also contains proposed amendments to the CFAA. The bill adopts (in revised form) Senators Chuck Grassley’s (R-IA) and Al Franken’s (D-MN) proposals to amend 18 U.S.C. § 1030(g) to state that no action may be brought under the CFAA for Terms of Service violations based upon unauthorized access or access in excess of authority. In addition, Senator Leahy’s proposed amendments to the CFAA would substantively rewrite the statute to punish accessing particular categories of information, and would create new penalties for aggravated damage to a critical infrastructure computer.
The Senate Judiciary Committee filed a written report on the bill in November 2011, and it is awaiting Senate consideration.
Data Breach Notification Act
Senator Dianne Feinstein (D-CA) introduced the Data Breach Notification Act, S. 1408, on July 22, 2011. Feinstein’s bill would establish a single federal data breach notification standard applicable whenever an agency or business entity engaged in interstate commerce that uses, accesses, transmits, stores, disposes of, or collects sensitive personally identifiable information discovers that a breach has occurred. Covered entities would be required to notify affected owners or licensees of such breaches “without unreasonable delay” following the breach according to specified content notification provisions. Reasonable delay could include the time necessary to determine the scope of the breach, prevent further disclosures, restore the integrity of the affected system, and provide notice to law enforcement. Notice generally would not be required if the agency or business entity concluded that there was “no significant risk” that a breach resulted in or would result in harm to the affected individual.
If the breach is reasonably believed to have affected over 5,000 residents of a state, major media outlets would have to be notified. If the breach is reasonably believed to have affected over 10,000 people or involved particularly large databases (or those owned by the federal government), the United States Secret Service must be notified. Other law enforcement agency notification provisions are also included if the breach involves particularized circumstances such as espionage or mail fraud, for example. The law would be enforced by state attorneys general, providing for severe civil penalties and possible injunctive relief.
The Judiciary Committee held a hearing on the bill in September 2011, and the bill was ordered reported out to the Senate with amendments.
Personal Data Protection and Breach Accountability Act
Senator Richard Blumenthal (D-CT) introduced the Personal Data Protection and Breach Accountability Act, S. 1535, on September 8, 2011. The bill would provide for criminal fines or imprisonment for the intentional or willful concealment of security breaches resulting in harm to any person. The bill would further prohibit the interception, redirection, monitoring, manipulation, aggregation or marketing of an authorized user’s web search or query without that user’s consent and without clear and conspicuous disclosure of the data collected. The bill would also require interstate companies that use, access, transmit, store, dispose of, or collect personal information pertaining to more than 10,000 people to implement a comprehensive security program to safeguard that information from vulnerabilities or unauthorized access.
Among its extensive notification provisions, the bill would require such companies to notify affected individuals of a security breach without unreasonable delay (subject to certain exceptions for federal law enforcement and the intelligence community), and it prescribes the methods and content of such notice. Companies required to notify individuals would also have to provide each affected individual, upon request and at no cost, with quarterly consumer credit reports, credit monitoring services, a security freeze on the individual’s credit report, and compensation for damages caused by the security breach.
The bill provides for exceptions for HIPAA-regulated entities and financial institutions covered by Gramm-Leach-Bliley, among a few other exceptions. The bill is enforceable by the Attorney General, FTC, and state attorneys general, but also would permit individuals to bring civil actions to obtain injunctive relief or to recover damages (including punitive damages) for violations of the notice requirements.
The bill was placed on the Senate’s Legislative Calendar under General Orders.
White House Cybersecurity Legislative Proposal
The Obama administration has pursued a comprehensive national cybersecurity agenda, publicly stating a desire to invest in and secure the nation’s digital infrastructure as a means of combatting repeated cyber intrusions and an increase in cybercrime. The administration has further expressed the view that our cybersecurity law requires updating in order to properly defend against such threats. To that end, on May 12, 2011, the White House released a Cybersecurity Legislative Proposal to provide input for Congressional consideration. The proposal has two central goals. First, it suggests a framework for national data breach reporting, to simplify and standardize the patchwork of state laws currently in place. Second, it would enhance criminal penalties for cybercrimes and extend the Racketeer Influenced and Corrupt Organizations (“RICO”) Act to cover cybercrimes.
The White House proposal would also enable the Department of Homeland Security (“DHS”) to help private sector companies or state and local governments with responding to data breaches, and would permit voluntary information sharing between those entities and DHS, while providing immunity for those entities when doing so. Additionally, the proposal envisions centralizing (largely through DHS) management of cybersecurity, recruitment of cybersecurity professionals, installation of intrusion prevention systems, and embracing cloud computing.
The House and Senate are considering several pieces of legislation that incorporate pieces of the White House proposal, including the Federal Protective Service Reform and Enhancement Act, H.R. 2658 (referred to two committees and forwarded with markups to House Homeland Security Committee by voice vote in July 2011) and the Cyber Intelligence Sharing and Protection Act, H.R. 3523 (referred to the House Committee on Intelligence in November 2011).
Promoting and Enhancing Cybersecurity and Information Sharing Effectiveness Act (“PRECISE Act”)
Representative Daniel Lungren (R-CA) introduced the PRECISE Act, H.R. 3674, on December 15, 2011. That bill would give the DHS authority to protect federal and critical infrastructure systems, conduct risk assessments, coordinate with other entities, and designate a lead cybersecurity official to report to Congress. The bill would further authorize coordinating the development of sector-specific security standards, and would create a National Cybersecurity Authority to centralize national cybersecurity efforts. Lungren’s bill would also create a nonprofit organization to serve as a national clearinghouse for cybersecurity threat information, called the National Information Sharing Organization. The board of directors would be composed of a representative from the DHS, four representatives from three different Federal agencies with significant responsibility for cybersecurity, ten representatives from specific private sectors (including at least one member representing a small business interest), two representatives from the privacy and civil liberties community, and the Chair of the National Council of Information Sharing and Analysis Centers.
The bill was referred to the House Subcommittee on Technology and Innovation on January 12, 2012.
Some other notable federal legislative efforts in the past year include:
Data security breaches and their consequences continued in 2011 to be a burgeoning area of attention and concern for businesses, government enforcement agencies, and private litigants. One needed to look no further than the mainstream news media for evidence of the escalating magnitude, sophistication, and cost of data breaches. As a result, sophisticated businesses that handle sensitive customer, employee, or commercial information are not only increasing their attention and investment in data security, but also taking steps to anticipate the potential for breach incidents in spite of their best efforts. We will be discussing leading-edge strategies for preparing for and responding to a security incident or vulnerability report in a complimentary client briefing on February 29, 2012 entitled Data Breaches, Hacks and Vulnerabilities: Leading Strategies for Responding to a Data Breach Incident. The one-hour briefing will discuss important considerations for any business that handles consumer, business or personal information in today’s rapidly evolving technical and legal environment.
Below we illustrate a sampling of high-profile data breach incidents that occurred in 2011 and some of the potential trends they illustrate, followed by an update of legal developments relating to breach notification obligations, and a discussion of insurance issues that may arise from data breach incidents.
In May, hackers succeeded in breaching Citicorp’s online account system. According to published reports, personal account information for a subset of Citicorp’s 21 million customers may have been exposed, including account information for more than 360,000 of the company’s U.S. credit card holders. The hackers were able to infiltrate the bank by first logging onto Citicorp’s online credit card website and then substituting account numbers into the URL in the browser’s address bar to access other customers’ accounts. The breach, one of the first known hacking cases at a bank, was discovered during routine system maintenance. Despite the banking industry’s significant focus on data security, hackers continue to target businesses in the financial sector and to find new vulnerabilities to exploit, in large part due to the potential value of the data they handle, such as payment card and account information.
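The reported attack pattern is a classic “insecure direct object reference”: the server trusted an account identifier supplied in the URL without verifying that it belonged to the logged-in user. The following sketch is purely illustrative–hypothetical account data and function names of our own devising, not Citicorp’s actual system:

```python
# Hypothetical illustration of the URL-tampering flaw described above:
# the server looks up whatever account number appears in the request,
# without verifying that the logged-in user owns it.
ACCOUNTS = {
    "1001": {"owner": "alice", "balance": 2500},
    "1002": {"owner": "bob", "balance": 900},
}

def get_account_vulnerable(account_id):
    # Vulnerable: no ownership check at all.
    return ACCOUNTS.get(account_id)

def get_account_checked(account_id, logged_in_user):
    # Remediated: return data only for accounts the requester owns.
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != logged_in_user:
        return None
    return account

# An attacker logged in as "bob" simply edits the account number in the URL:
print(get_account_vulnerable("1001"))      # returns alice's account data
print(get_account_checked("1001", "bob"))  # returns None: request refused
```

As the sketch suggests, the remediation is an authorization check on every object lookup, not merely a login screen at the front door.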
Epsilon, a unit of Texas-based Alliance Data Systems, manages email marketing and communications for more than 2,500 clients, including prominent businesses such as Best Buy, Marriott International, Chase, Capital One, and Citigroup. In April, Epsilon reported a massive data security breach that exposed millions of individual email addresses and consumer names. Although sensitive financial information was not exposed, the information obtained by hackers could be used to send personalized emails in “phishing” scams. In a phishing scam, hackers send fake emails purporting to come from a company with which the consumer does business. The emails trick customers into clicking a link that installs malware or spyware, or into supplying credit card or login credentials for the customer’s account with that business, where more sensitive data is stored. Given that data such as that exposed in the Epsilon breach can facilitate phishing and similar scams, this data can have considerable value to online fraudsters even if no financial information or personally identifying information is involved.
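One telltale sign of the phishing emails described above is that the link’s actual host does not belong to the company the message claims to come from. A minimal sketch of that check, using only standard-library Python and hypothetical domain names:

```python
from urllib.parse import urlparse

def link_matches_claimed_sender(link, claimed_domain):
    """Return True only if the link's host is the claimed company's
    domain or one of its subdomains."""
    host = urlparse(link).hostname or ""
    return host == claimed_domain or host.endswith("." + claimed_domain)

# A legitimate link to the claimed company's own site:
print(link_matches_claimed_sender(
    "https://www.example-bank.com/login", "example-bank.com"))       # True
# A phishing-style lookalike host ("example-bank.com.evil.net"):
print(link_matches_claimed_sender(
    "https://example-bank.com.evil.net/login", "example-bank.com"))  # False
```

The lookalike host in the second example passes a casual visual inspection precisely because stolen names and email addresses let fraudsters personalize the surrounding message.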
* * *
As these and numerous other high-profile breach incidents illustrate, hacking attacks continue to grow in both size and sophistication. At the same time, a growing segment of the legal industry stands ready to press private litigation on behalf of those whose information is potentially compromised, often within days or even hours of a breach coming to light, and frequently without any evidence of direct financial harm or misuse of the data that was compromised. Both of these trends are illustrated in the recently announced Zappos breach. Within a day of announcing that one of its servers had been infiltrated, thereby potentially exposing limited customer data, Zappos was named in multiple class action lawsuits brought on behalf of customers despite the absence of direct financial loss on their part. Moreover, data security issues are no longer limited merely to businesses that handle or host significant amounts of user data. Hewlett-Packard (“HP”), for example, was targeted during 2011 in a class action lawsuit following media reports that researchers had developed a malicious tool that could allow hackers to take control of HP printers through embedded firmware, even though no evidence existed that such an attack had ever taken place. These and other suits illustrate the readiness of private litigants to bring suit based on new and creative theories of liability, although–as discussed in the class actions section above–it is unclear which of these theories ultimately will gain traction and survive legal challenges.
Given the volume and increasing sophistication of data breach attacks, as well as the exponential growth in the volume and potential value of stored personal data, attention in the information technology sphere has evolved from simply preventing an attack to being prepared for one. Virtually no commercial network is truly hacker-proof, and so while prudent businesses invest in the means to detect, rebuff, and contain potential intrusion, many CIOs and in-house counsel also recognize that the prospect of a data security incident is not so much a question of “if,” but “when.” The new goal, then, is not just to maintain best practices defenses, but also to have the right structure in place to respond to a breach.
As part of this new paradigm, many businesses are recognizing the importance of planning for a potential breach incident even before it occurs. Since data security incidents are extremely fast moving, involve high potential exposure, and implicate many different disciplines, sophisticated businesses have developed formal data breach incident response plans.
The U.S. has yet to enact a uniform national law on data breach notification. Accordingly, for now, businesses must contend with a patchwork quilt of local statutes in 46 states, the District of Columbia, Puerto Rico and the Virgin Islands requiring notification to those impacted by security breaches involving personal information. Data security attacks also frequently precipitate law enforcement and government regulatory agency attention, including by the Department of Justice, the FBI, the Federal Trade Commission, and state attorneys general. In addition, particularly high-profile breaches have subjected the companies involved to inquiries from Congress and requests to testify in congressional hearings. Moreover, individual executives must be aware of their potential exposure in this setting. On April 7, 2011, the Securities and Exchange Commission for the first time assessed fines against three former executives of broker-dealer GunnAllen Financial, Inc. Without admitting or denying the SEC’s findings, the three former executives agreed to settle charges that they had violated Regulation S-P, the SEC’s customer privacy and safeguards rule, by failing to protect confidential information about their customers.
In the absence of a unified federal statute addressing breach notification obligations, entities are subject to a mix of state statutes, narrow federal laws regulating specific industries (such as financial institutions), and specific regulatory obligations. Two developments during 2011 elaborate on these emerging standards.
SEC Guidance on Breach Reporting
On October 13, 2011, the SEC Division of Corporation Finance released guidance that assists public companies in assessing what disclosures should be made when faced with cybersecurity risks and incidents. SEC, CF Disclosure Guidance: Topic No. 2–Cybersecurity (Oct. 13, 2011). The guidance provides an overview of disclosure obligations under current securities laws, acknowledging that no existing disclosure requirements explicitly refer to cybersecurity risks and incidents but that a number of existing disclosure requirements may impose an obligation upon registrants to disclose such risks and incidents. In addition to the brief summary that follows, we discuss the SEC’s guidance in greater detail in our October 17, 2011, client alert, SEC Issues Interpretive Guidance on Cybersecurity Disclosures Under U.S. Securities Laws.
The guidance provides that public companies should disclose risk of cybersecurity incidents in their risk factors if “these issues are among the most significant factors that make an investment in the company speculative or risky.” Companies are expected to evaluate their cybersecurity risks, taking into account all relevant information, including the following:
If the company finds that a disclosure related to cybersecurity risks is necessary, it “must adequately describe the nature of the material risks and specify how each risk affects the registrant,” avoiding general “boilerplate” disclosure.
Management’s Discussion and Analysis of Financial Condition and Results of Operations (“MD&A”)
The guidance also advises public companies to address cybersecurity risks and incidents in their MD&A “if the costs or other consequences associated with one or more known incidents or the risk of potential incidents represent a material event, trend, or uncertainty that is reasonably likely to have a material effect on the registrant’s results of operations, liquidity, or financial condition or would cause reported financial information not to be necessarily indicative of future operating results or financial condition.” For example, the MD&A should discuss a material reduction to a company’s revenues due to a loss of customers following a cybersecurity incident, or a material increase in costs resulting from litigation linked to a cybersecurity incident, or related to protecting the company from future cyber incidents.
Description of Business
Public companies should discuss cybersecurity incidents in their Description of Business to the extent that such incidents materially affect a company’s products and services, relationships with customers or suppliers, or competitive conditions. Such disclosure should consider the impact of cybersecurity incidents on each reportable segment.
Companies may need to include in their Legal Proceedings disclosure a discussion of any pending material legal proceeding involving a cybersecurity incident where the company or any of its subsidiaries is a party to the litigation.
Financial Statement Disclosures
Cybersecurity risks and incidents may have significant effects on a company’s financial statements. For example, prior to a cybersecurity incident, a company may incur substantial costs in the development of preventative measures. During and after a cybersecurity incident, companies may offer customers additional incentives to encourage customer loyalty, and may incur significant losses and diminished cash flows resulting in impairment of certain assets. Companies should ensure that any such impacts to financial statements are accounted for pursuant to applicable accounting guidance.
Disclosure Controls and Procedures
Companies should consider the risks that cybersecurity incidents may pose to the effectiveness of their disclosure controls and procedures. If it is reasonably possible that a cybersecurity event might disrupt a company’s ability to provide the SEC with information required to be disclosed in SEC filings, then a company may conclude that its disclosure controls and procedures are ineffective.
California State Notification Law Amendments
In 2011, California amended its security breach notification law with changes that went into effect on January 1, 2012. The updated law mandates a number of additional requirements for entities that have experienced a data breach incident. Most notably, the updated law touches on two important disclosure requirements.
First, the notification sent to affected California residents must include certain mandatory content, such as the name and contact information of the reporting entity, the types of personal information that were potentially leaked, the date of the breach (if known), the date of the notice, whether notification was delayed as a result of a law enforcement investigation, a general description of the breach incident, and toll-free telephone numbers and addresses of the major credit reporting agencies (if the breach exposed a social security number or a California driver’s license or identification card number). Additionally, the entity may, at its discretion, include information about what it has done to protect individuals whose information has been compromised and provide advice on steps that such individuals can take to protect themselves. Finally, the law requires that the notice be written in “plain language” that is clear and easy to understand.
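For compliance planning, the mandatory elements read naturally as a checklist against which a draft notice can be validated. The sketch below is purely illustrative and not legal advice: the field names are our own labels, covering only a subset of the statutory elements listed above.

```python
# Illustrative checklist validator for the amended California notice
# requirements; field names are hypothetical labels, not statutory text.
REQUIRED_FIELDS = {
    "reporting_entity_name",
    "reporting_entity_contact",
    "types_of_personal_information",
    "date_of_notice",
    "general_description_of_incident",
}

def missing_notice_fields(notice, ssn_or_dl_exposed=False):
    """Return the mandatory elements absent from a draft notice."""
    required = set(REQUIRED_FIELDS)
    if ssn_or_dl_exposed:
        # Credit reporting agency contact details are mandatory only when an
        # SSN or a California driver's license/ID card number was exposed.
        required.add("credit_reporting_agency_contacts")
    return sorted(required - set(notice))

draft = {
    "reporting_entity_name": "Example Corp",
    "reporting_entity_contact": "privacy@example.com",
    "types_of_personal_information": ["name", "SSN"],
    "date_of_notice": "2012-02-01",
}
print(missing_notice_fields(draft, ssn_or_dl_exposed=True))
```

A real review would, of course, also cover the conditional elements (breach date if known, any law-enforcement delay) and the plain-language standard, which no automated check can satisfy on its own.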
Second, the updated law requires entities that must issue breach notices to more than 500 California residents as a result of a single breach incident to also submit an electronic “sample copy” of the security breach notification directly to the California Attorney General.
Given the complexity and expense (and, some might say, the inevitability) of responding to data breach incidents, the availability of financial protection through insurance coverage tailored to data breach losses has become a significant and growing area of attention.
Many businesses confronted with a data breach incident have found, to their dismay, that claims arising from a breach may not readily fit within their standard Commercial General Liability (“CGL”) coverage. Insurers frequently reject claims in connection with breach liabilities as falling outside typical CGL coverage for “bodily injury,” “property damage,” and the like. Moreover, some insurers have taken the position that claims arising from a data security breach are expressly excluded by their policy terms.
Similar disputes may arise in connection with claims for “loss of use” of tangible property, as is often covered under CGL policies. Even though lost or damaged data may itself be considered intangible, coverage may be triggered by, for example, a company’s loss of use of infected servers or other network assets due to a hacker attack. In response, however, express exclusions for data breaches and other cyber claims are becoming more common in CGL policies.
Given this potential gap in coverage, many businesses are considering the purchase of policies that specifically protect against losses resulting from data breach incidents. These increasingly common policies can include both first-party and third-party protections. First-party protections can include payment for loss of digital assets, cyber-extortion, cyber-terrorism, and expenses such as consumer notification costs. Third-party liability coverage can include disclosure injury (such as lawsuits alleging unauthorized access to or dissemination of the plaintiff’s private information), content injury (such as suits arising from intellectual property infringement), reputational injury (such as lawsuits alleging disparagement of products or services, libel, slander, defamation, and invasion of privacy), conduit injury (such as suits due to system security failures that result in harm to third-party systems), and impaired access injury (such as suits arising from system security failure resulting in an insured’s system being unavailable to its customers). Given the broad coverage provided by cybersecurity policies, companies with large amounts of sensitive data should carefully consider the costs and potential benefits of maintaining such coverage.
Proposed New Privacy Regulation to Replace Data Protection Directive
On January 25, 2012, the European Commission released its proposed new regulation that would replace the 1995 Data Protection Directive (the “Directive”). The aim of the new regulation is to update the outdated Directive to deal with technical developments and to unify the existing data privacy legislation of each EU member state. We discuss this significant legislative proposal in our client alert Proposed EU Privacy Rules Add to the Burden on International Business.
Implementation of EU Cookie Consent Requirement
Since 2003, the European Union’s Privacy and Electronic Communications Directive (the “e-Privacy Directive”) has required website operators and others who place cookies on users’ computers to inform the users about the purpose of the cookie and to give a right to opt out. The European Union strengthened this obligation through an amendment to Article 5(3) of the e-Privacy Directive, which now requires that website operators obtain consent before storing cookies on users’ computers; member states were required to implement the amended rule by May 25, 2011. This new “opt-in” regime stands in stark contrast to the “opt-out” standard generally adopted in the United States and has been widely criticized as unworkable in practice.
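In implementation terms, the shift from opt-out to opt-in means a non-essential cookie may be written only after the user has affirmatively consented. A minimal standard-library Python sketch (the cookie name is hypothetical; real deployments involve consent banners and per-purpose consent records):

```python
from http.cookies import SimpleCookie

def build_set_cookie_header(cookie_name, value, user_has_consented):
    """Return a Set-Cookie header value for a non-essential cookie,
    or None when the user has not opted in."""
    if not user_has_consented:
        return None  # opt-in regime: no consent, no cookie
    cookie = SimpleCookie()
    cookie[cookie_name] = value
    return cookie.output(header="").strip()

# Without consent, the server emits no Set-Cookie header at all:
print(build_set_cookie_header("analytics_id", "abc123", user_has_consented=False))
# With consent, the header is emitted as usual:
print(build_set_cookie_header("analytics_id", "abc123", user_has_consented=True))
```

Much of the practical uncertainty noted below concerns how that `user_has_consented` flag may lawfully be obtained, for example whether browser settings can suffice.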
Over six months have passed since the May 2011 deadline for member states to adopt legislation implementing the new rule. However, partly as a result of uncertainty over the legal and technical manner in which consent can be obtained, the provision has yet to be fully implemented in each member state.
The German legislature is debating a draft Employee Data Protection Act. The bill would provide much more detailed–and in some respects more restrictive–regulation of the use of employee data. For example, it would implement specific restrictions on employee background screenings, video surveillance at work, the use of social media and employee health checks. It would also restrict the ability of companies to “contract out” of these and other data protection provisions by agreeing to less onerous standards with their works councils. On the other hand, the bill would allow employee data to be used in the context of compliance-related and other internal investigations (subject to appropriate safeguards). And, for international companies, the bill would facilitate data transfers to affiliated companies outside the European Union. The bill is expected to be enacted in the coming months but remains subject to ongoing public debate and developments at the EU level.
Two important recent regional court decisions have considered the legality of reviewing employee emails as part of discovery and compliance-related internal investigations in workplaces that permit (or tolerate) personal use of corporate email. Although these decisions were favorable from an employer’s perspective, it is important to note that the German Federal Supreme Court has not yet decided this question. Because a violation of telecommunication secrecy laws involves the risk of criminal prosecution for the investigators, conducting internal investigations in Germany continues to require a high degree of caution.
The UK Information Commissioner (“IC”) has shown an increased willingness to sanction data controllers with monetary penalties under the UK Data Protection Act (“DPA”). The IC exercised his power to issue monetary penalties four times in 2011 and obtained formal public undertakings in 60 other cases. The DPA grants the IC the power to serve a data controller with a monetary penalty of up to £500,000 when there has been a serious violation that is likely to cause substantial damage or distress and where the violation was either deliberate or reckless. Three of the four sanctions have been against government entities for violations including mishandling of medical records and social work case files, and for issuing an unencrypted laptop that was later stolen. The penalties against the government entities ranged from £70,000 to £130,000. This new wave of monetary sanctions in the United Kingdom underscores the growing prominence of data privacy and data security issues throughout the European Union. Businesses have been warned: laptops and other portable storage devices should be encrypted to guard against theft, and personal data should be appropriately encrypted when sent via email, post or courier.
While often cited as the origin of cyber-attacks, China is not immune to data breaches within its own borders. In December 2011, hackers attacked popular social networking site tianya.cn and computer programmer community site CSDN, leaking the personal information of a total of 46 million users. Despite this, China has fallen behind other countries in the region on data privacy legislation, with no comprehensive national law or regulation currently in effect. A Draft Personal Information Protection Law was submitted to the State Council for review in 2008 but not yet passed into law, and it is not known when (or if) it will be enacted. However, this does not mean that data breaches go unpunished, and 2011 has seen the introduction of two major privacy measures.
Serious cases of illegally obtaining, selling, or providing citizens’ personal information are prosecuted under Article 253(1) of the PRC Criminal Law. In August, for instance, a Beijing appellate court convicted 21 defendants under this provision for illegally obtaining, providing, and selling personal information, including cell phone registration information, call records, text message lists and geolocation information, household registration information, bank account information, vehicle information, and real estate registration information. Fourteen of the defendants received prison sentences. Although the Criminal Law imposes substantial sanctions for serious data security breaches, it does not form the basis of a comprehensive data protection regime. Nevertheless, the Chinese government made notable progress in 2011:
The MIIT’s recent rulemaking efforts may represent an attempt to fill the legislative void left by the long delay in enacting the Personal Information Protection Law. However, the IISP Provisions are essentially limited to website operators and providers of Internet-based services such as instant messaging, and the Draft Guide is simply a set of non-binding standards containing no enforcement provisions. Given these limitations, it is unclear how much impact these measures will have on the protection of personal information.
Across the strait, Taiwan has made greater strides towards the comprehensive protection of personal information. The country’s new Personal Information Protection Act (“PIPA”) replaces the 1995 Computer Processed Personal Information Protection Act, and now covers all public and private entities’ collection, processing and use of all personal information (“PI”), electronically processed or otherwise. The new law includes an enforceable requirement to notify affected data subjects of data breach incidents caused by a violation of the PIPA, a first in the region. Other notable features include restrictions on the collection of certain types of sensitive PI; increased civil, criminal, and administrative liabilities; and government power to restrict cross-border transmission of PI under certain circumstances. PIPA is expected to come into effect in 2012.
In addition, Taiwan’s Ministry of Economic Affairs has established the Taiwan Personal Information Protection and Administration System (“TPIPAS”), a set of standards intended to help private entities establish internal policies to comply with PIPA’s requirements. Entities that have established compliant privacy protection systems may be certified and issued the Data Privacy Protection Mark (“DP Mark”), similar to the PrivacyMark in Japan and TRUSTe in the U.S.
Hong Kong also recently moved to revamp its 15-year-old Personal Data (Privacy) Ordinance (“PDPO”) after a scandal broke in 2010 exposing the sale of two million personal data records by the Octopus Card operator without card users’ direct consent. The Personal Data (Privacy) (Amendment) Bill 2011 (“2011 Amendment”) was introduced into the Legislative Council on July 8, 2011. Most notably, the 2011 Amendment would require data users intending to sell personal data or use such data for direct marketing to provide data subjects with certain information and a means to opt out. Unauthorized sale or disclosure of personal data would be subject to criminal liability. The 2011 Amendment also addresses the powers of the Privacy Commissioner for Personal Data, and would increase penalties for violations of the PDPO, among other things. At the time of this writing, the 2011 Amendment has not yet been enacted.
Until recently, Indian law contained no provisions specifically addressing privacy protection. The Information Technology Act 2000 was amended in 2009 to require a body corporate that possesses, deals with or handles any “sensitive personal data or information” in a computer resource that it owns, controls or operates to maintain “reasonable security practices and procedures.” The task of defining these terms, however, was delegated to the Central Government. On April 11, 2011, the Ministry of Communications and Information Technology (Department of Information Technology), Government of India (“IT Ministry”) issued the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules 2011 (“Data Privacy Rules”). The new Data Privacy Rules require “bodies corporate” to observe certain standards in the collection, maintenance and disclosure of “sensitive personal data or information.”
The Data Privacy Rules give the term “sensitive personal data or information” an exhaustive definition. The term refers, inter alia, to (a) passwords, (b) financial information (details relating to bank accounts, credit cards, debit cards, or other payment instruments), (c) physical, physiological and mental health conditions, (d) sexual orientation, (e) medical records and history, and (f) biometric information. Broadly speaking, a body corporate must observe the following standards while collecting sensitive personal data or information:
Sensitive personal data or information can only be disclosed to a third party if prior consent has been obtained from the provider, unless otherwise agreed in the contract between parties, or unless otherwise required by law. Sensitive personal data or information cannot be published by the body corporate.
A body corporate that has adopted the international standard IS/ISO/IEC 27001 on “Information Technology–Security Techniques–Information Security Management System–Requirements” is deemed to have complied with its obligation to observe “reasonable security practices and procedures.” Alternatively, where an industry association follows best practices for data protection other than IS/ISO/IEC 27001, a body corporate that complies with a code of best practices approved and notified by the Central Government will also be deemed to have complied with that obligation. In both cases, the observance of best practices must be certified or audited annually by an independent auditor approved by the Central Government. A body corporate will likewise be considered to have satisfied its obligation to observe “reasonable security practices and procedures” if it has demonstrably implemented a comprehensive, documented information security program containing managerial, technical, operational and physical security control measures commensurate with the information assets being protected and the nature of the business.
Further, a body corporate must maintain a policy for dealing both with “sensitive personal data or information” and with “personal information.” The term “personal information” means any information that relates to a natural person which is capable of identifying such person, either by itself or in conjunction with other information likely to be available to the body corporate. The policy must be published on the body corporate’s website.
Once the Data Privacy Rules were issued, there was an outcry that they would make it difficult for Indian outsourcers to operate if they were required to obtain written consent from individuals in other countries whose data they collect and process through call centers and business process outsourcing operations. On August 24, 2011, the IT Ministry clarified that the obligations under the Data Privacy Rules apply only to Indian companies; foreign companies are exempt. The IT Ministry also clarified that Indian companies that provide outsourcing services and possess information under contract are not bound by the requirement to obtain consent from the data subject, and that “Providers of Information” as referred to in the Data Privacy Rules means natural persons only.
Given that the rules are fairly new and have not been tested, they are still open to interpretation.
 See, e.g., Jennifer Valentino-Devries, iPhone Stored Location in Test Even if Disabled, WALL ST. J., Apr. 25, 2011.
 The Court in In re iPhone Application Litigation granted plaintiffs leave to amend their complaint, which they did in November 2011. Apple and the Mobile Industry Defendants have filed motions to dismiss the amended complaint, which are scheduled to be heard on May 3, 2012. Similar complaints are also pending against Google and Microsoft, and motions to dismiss those complaints are likely to be decided in the first part of 2012. See In re Google Android Consumer Privacy Litig., No. 11-MD-02264-JSW (N.D. Cal. Aug. 15, 2011); Cousineau v. Microsoft Corp., No. 11-CV-01438-JCC (W.D. Wash. Aug. 31, 2011).
 The current Act, by contrast, covers only a narrow set of persistent identifiers–those that are associated with individually identifiable information.
 Amy E. Bivins, Web Services, Telecoms Challenge Proposed COPPA Extension to All “Persistent Identifiers”, Bloomberg (Jan. 18, 2012).
 Press Release, SEC, SEC Charges Brokerage Executives with Failing to Protect Confidential Customer Information (Apr. 7, 2011).
The following Gibson Dunn attorneys assisted in preparing this client alert: Ashlie Beringer, Catherine Brewer, Tzung-Lin Fu, Kai Gesing, Daniel Li, Justin Liu, Priya Mehra, Scott Mellon, Joshua Mitchell, Karl Nelson, Laura O’Boyle, Jessica Ou, Jai Pathak, Daniel Pollard, Shawn Rodriguez, Ilissa Samplin, Michael Saryan, Meredith Smith, Alexander H. Southwell, Oliver Welch and Susannah Wright.
Gibson, Dunn & Crutcher’s lawyers are available to assist with any questions you may have regarding these issues. For further information, please contact the Gibson Dunn lawyer with whom you work or any of the following members of the Information Technology and Data Privacy Group:
S. Ashlie Beringer – Co-Chair, Palo Alto (+1 650-849-5219)
M. Sean Royall – Co-Chair, Dallas (+1 214-698-3256)
Alexander H. Southwell – Co-Chair, New York (+1 212-351-3981)
Debra Wong Yang – Co-Chair, Los Angeles (+1 213-229-7472)
Howard S. Hogan – Member, Washington, D.C. (+1 202-887-3640)
Karl G. Nelson – Member, Dallas (+1 214-698-3203)
James A. Cox – Member, London (+44 207 071 4250)
Andrés Font Galarza – Member, Brussels (+32 2 554 7230)
Kai Gesing – Member, Munich (+49 89 189 33-180)
Bernard Grinspan – Member, Paris (+33 1 56 43 13 00)
Daniel E. Pollard – Member, London (+44 207 071 4257)
Jean-Philippe Robé – Member, Paris (+33 1 56 43 13 00)
Michael Walther – Member, Munich (+49 89 189 33-180)
Questions about SEC disclosure issues concerning data privacy and cybersecurity can also be addressed to any of the following members of the Securities Regulation and Corporate Disclosure Group:
Amy L. Goodman – Co-Chair, Washington, D.C. (202-955-8653)
James J. Moloney – Co-Chair, Orange County, CA (949-451-4343)
Elizabeth Ising – Member, Washington, D.C. (202-955-8287)
© 2012 Gibson, Dunn & Crutcher LLP
Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.