
January 13, 2019 | Gibson Dunn Named a 2018 Law Firm of the Year

Gibson, Dunn & Crutcher LLP is pleased to announce its selection by Law360 as a Law Firm of the Year for 2018 in its profile “The Firms That Dominated in 2018” [PDF], which features the four firms that received the most Practice Group of the Year awards. Of the four, Gibson Dunn “led the pack with 11 winning practice areas” for “successfully securing wins in bet-the-company matters and closing high-profile, big-ticket deals for clients throughout 2018.” The awards were published on January 13, 2019. Law360 previously noted that Gibson Dunn “dominated the competition this year” for its Practice Groups of the Year, which were selected “with an eye toward landmark matters and general excellence.” Gibson Dunn is proud to have been honored in the following categories:

Appellate [PDF]: Gibson Dunn’s Appellate and Constitutional Law Practice Group is one of the leading U.S. appellate practices, with broad experience in complex litigation at all levels of the state and federal court systems and an exceptionally strong and high-profile presence and record of success before the U.S. Supreme Court.

Class Action: Our Class Actions Practice Group has an unrivaled record of success in the defense of high-stakes class action lawsuits across the United States. We have successfully litigated many of the most significant class actions in recent years, amassing an impressive win record in trial and appellate courts, including before the U.S. Supreme Court, that has changed the class action landscape nationwide.

Competition: Gibson Dunn’s Antitrust and Competition Practice Group serves clients in a broad array of industries globally in every significant area of antitrust and competition law, including private antitrust litigation between large companies and class action treble damages litigation; government review of mergers and acquisitions; and cartel investigations across borders and jurisdictions.

Cybersecurity & Privacy: Our Privacy, Cybersecurity and Consumer Protection Practice Group represents clients across a wide range of industries in matters involving complex and rapidly evolving laws, regulations, and industry best practices relating to privacy, cybersecurity, and consumer protection. Our team includes the largest number of former federal cyber-crimes prosecutors of any law firm.

Employment: No firm has a more prominent position at the leading edge of labor and employment law than Gibson Dunn. With a Labor and Employment Practice Group that covers a complete range of matters, we are known for our unsurpassed ability to help the world’s preeminent companies tackle their most challenging labor and employment matters.

Energy: Across the firm’s Energy and Infrastructure, Oil and Gas, and Energy, Regulation and Litigation Practice Groups, our global energy practitioners counsel on a complex range of issues and proceedings in the transactional, regulatory, enforcement, investigatory and litigation arenas, serving clients in all energy industry segments.

Environmental: Gibson Dunn has represented clients in the environmental and mass tort area for more than 30 years, providing sophisticated counsel on the complete range of litigation matters as well as in connection with transactional concerns such as ongoing regulatory compliance, legislative activities and environmental due diligence.

Real Estate: The breadth of sophisticated matters handled by our real estate lawyers worldwide includes acquisitions and sales; joint ventures; financing; land use and development; and construction. Gibson Dunn additionally has one of the leading hotel and hospitality practices globally.

Securities: Our securities practice offers comprehensive client services, including the defense and handling of securities class action litigation, derivative litigation, M&A litigation, internal investigations, and investigations and enforcement actions by the SEC, DOJ and state attorneys general.

Sports: Gibson Dunn’s global Sports Law Practice represents a wide range of clients in matters relating to professional and amateur sports, including individual teams, sports facilities, athletic associations, athletes, financial institutions, television networks, sponsors and municipalities.

Transportation: Gibson Dunn’s experience with transportation-related entities is extensive and includes the automotive sector as well as all aspects of the airline and rail industries, freight, shipping, and maritime. We advise in a broad range of areas that include regulatory and compliance, customs and trade regulation, antitrust, litigation, corporate transactions, tax, real estate, environmental and insurance.

January 15, 2019 | Ninth Circuit Judges Call for En Banc Review of the Federal Trade Commission’s Authority to Obtain Monetary Relief

Click for PDF With increasing regularity, the Federal Trade Commission (“FTC”) is seeking and obtaining large monetary remedies as “equitable monetary relief” pursuant to Section 13(b) of the FTC Act.  Indeed, FTC settlements and judgments exceeding $100 million, and even $1 billion, are becoming commonplace. The Supreme Court, however, has never held that Section 13(b) of the FTC Act empowers the FTC to obtain monetary relief.  Although multiple federal circuit courts have held that Section 13(b) provides the agency with this power, several weeks ago two Ninth Circuit judges issued a concurrence in FTC v. AMG Capital Management, LLC et al. calling for the full Ninth Circuit to reconsider this issue en banc in light of the Supreme Court’s 2017 decision in Kokesh v. SEC. Gibson Dunn partners Sean Royall, Blaine Evanson, and Rich Cunningham, and associate Brandon J. Stoker recently published an article discussing the AMG Capital Management concurrence in the Washington Legal Foundation’s The Legal Pulse blog.  The article describes the concurrence and how it fits into the broader legal landscape around this issue, which is clearly poised for further attention from the federal appellate courts, including the Supreme Court. Ninth Circuit Judges Call for En Banc Review of the Federal Trade Commission’s Authority to Obtain Monetary Relief (click on link) © 2019, Washington Legal Foundation, The Legal Pulse, January 15, 2019. Reprinted with permission. Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the authors of this Client Alert, the Gibson Dunn lawyer with whom you usually work, or one of the leaders and members of the firm’s Antitrust and Competition or Privacy, Cybersecurity and Consumer Protection practice groups: Washington, D.C. Scott D. Hammond (+1 202-887-3684, shammond@gibsondunn.com) D. Jarrett Arp (+1 202-955-8678, jarp@gibsondunn.com) Adam Di Vincenzo (+1 202-887-3704, adivincenzo@gibsondunn.com) Howard S. Hogan (+1 202-887-3640, hhogan@gibsondunn.com) Joseph Kattan P.C. (+1 202-955-8239, jkattan@gibsondunn.com) Joshua Lipton (+1 202-955-8226, jlipton@gibsondunn.com) Cynthia Richman (+1 202-955-8234, crichman@gibsondunn.com) Jeremy Robison (+1 202-955-8518, wrobison@gibsondunn.com) New York Alexander H. Southwell (+1 212-351-3981, asouthwell@gibsondunn.com) Eric J. Stock (+1 212-351-2301, estock@gibsondunn.com) Los Angeles Daniel G. Swanson (+1 213-229-7430, dswanson@gibsondunn.com) Debra Wong Yang (+1 213-229-7472, dwongyang@gibsondunn.com) Samuel G. Liversidge (+1 213-229-7420, sliversidge@gibsondunn.com) Jay P. Srinivasan (+1 213-229-7296, jsrinivasan@gibsondunn.com) Rod J. Stone (+1 213-229-7256, rstone@gibsondunn.com) Eric D. Vandevelde (+1 213-229-7186, evandevelde@gibsondunn.com) Orange County Blaine H. Evanson (+1 949-451-3805, bevanson@gibsondunn.com) San Francisco Rachel S. Brass (+1 415-393-8293, rbrass@gibsondunn.com) Dallas M. Sean Royall (+1 214-698-3256, sroyall@gibsondunn.com) Olivia Adendorff (+1 214-698-3159, oadendorff@gibsondunn.com) Veronica S. Lewis (+1 214-698-3320, vlewis@gibsondunn.com) Mike Raiff (+1 214-698-3350, mraiff@gibsondunn.com) Brian Robison (+1 214-698-3370, brobison@gibsondunn.com) Robert C. Walters (+1 214-698-3114, rwalters@gibsondunn.com) Denver Richard H. Cunningham (+1 303-298-5752, rhcunningham@gibsondunn.com) Ryan T. 
Bergsieker (+1 303-298-5774, rbergsieker@gibsondunn.com) © 2019 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

January 11, 2019 | How Calif. Privacy Act Could Prompt Private Plaintiff Suits

Orange County partner Joshua Jessen is the author of “How Calif. Privacy Act Could Prompt Private Plaintiff Suits,” [PDF] published by Law360 on January 11, 2019.

November 28, 2018 | Law360 Names Eight Gibson Dunn Partners as MVPs

Law360 named eight Gibson Dunn partners among its 2018 MVPs and noted that the firm had the most MVPs of any law firm this year.  Law360 MVPs feature lawyers who have “distinguished themselves from their peers by securing hard-earned successes in high-stakes litigation, complex global matters and record-breaking deals.” Gibson Dunn’s MVPs are:

Christopher Chorba, a Class Action MVP [PDF] – Co-Chair of the firm’s Class Actions Group and a partner in our Los Angeles office, he defends class actions and handles a broad range of complex commercial litigation with an emphasis on claims involving California’s Unfair Competition and False Advertising Laws, the Consumers Legal Remedies Act, the Lanham Act, and the Class Action Fairness Act of 2005. His litigation and counseling experience includes work for companies in the automotive, consumer products, entertainment, financial services, food and beverage, social media, technology, telecommunications, insurance, health care, retail, and utility industries.

Michael P. Darden, an Energy MVP [PDF] – Partner in charge of the Houston office, Mike focuses his practice on international and U.S. oil & gas ventures and infrastructure projects (including LNG, deep-water and unconventional resource development projects), asset acquisitions and divestitures, and energy-based financings (including project financings, reserve-based loans and production payments).

Thomas H. Dupree Jr., a Transportation MVP [PDF] – Co-partner in charge of the Washington, D.C. office, Tom has represented clients in a wide variety of trial and appellate matters, including cases involving punitive damages, class actions, product liability, arbitration, intellectual property, employment, and constitutional challenges to federal and state statutes.  He has argued more than 80 appeals in the federal courts, including in all 13 circuits as well as the United States Supreme Court.

Joanne Franzel, a Real Estate MVP [PDF] – Joanne is a partner in the New York office, and her practice has included all forms of real estate transactions, including acquisitions, dispositions and financing, as well as office and retail leasing with anchor and shopping center tenants. She also has represented a number of clients in New York City real estate development, representing developers as well as users in various mixed-use projects, often with a significant public/private component.

Matthew McGill, a Sports MVP [PDF] – A partner in the Washington, D.C. office, Matt practices appellate and constitutional law. He has participated in 21 cases before the Supreme Court of the United States, prevailing in 16. Spanning a wide range of substantive areas, those representations have included several high-profile triumphs over foreign and domestic sovereigns. Outside the Supreme Court, his practice focuses on cases involving novel and complex questions of federal law, often in high-profile litigation against governmental entities.

Mark A. Perry, a Securities MVP [PDF] – Mark is a partner in the Washington, D.C. office and Co-Chair of the firm’s Appellate and Constitutional Law Group.  His practice focuses on complex commercial litigation at both the trial and appellate levels. He is an accomplished appellate lawyer who has briefed and argued many cases in the Supreme Court of the United States. He has served as chief appellate counsel to Fortune 100 companies in significant securities, intellectual property, and employment cases.  He also appears frequently in federal district courts, serving both as lead counsel and as legal strategist in complex commercial cases.

Eugene Scalia, an Appellate MVP [PDF] – A partner in the Washington, D.C. office and Co-Chair of the Administrative Law and Regulatory Practice Group, Gene has a national practice handling a broad range of labor, employment, appellate, and regulatory matters. His success bringing legal challenges to federal agency actions has been widely reported in the legal and business press.

Michael Li-Ming Wong, a Cybersecurity and Privacy MVP [PDF] – Michael is a partner in the San Francisco and Palo Alto offices. He focuses on white-collar criminal matters, complex civil litigation, data-privacy investigations and litigation, and internal investigations. Michael has tried more than 20 civil and criminal jury trials in federal and state courts, including five multi-week jury trials over the past five years.

October 24, 2018 | Lessons from FTC’s Loss in, and Subsequent Abandonment of, DirecTV Advertising Case

The Federal Trade Commission (“FTC”) is increasingly focusing on the advertising, data privacy/security, and e-commerce processes of prominent companies marketing legitimate, valuable products and services, as compared to the types of fraudsters and shams that have been a central focus of FTC attention in the past. The FTC’s recently concluded action against DirecTV is emblematic of this trend. In FTC v. DirecTV, the FTC alleged that DirecTV’s marketing failed to adequately disclose that (a) the introductory discounted price lasted only twelve months while subscribers were bound to a 24-month commitment; (b) subscribers who cancelled early would be charged a cancellation fee; and (c) subscribers would automatically incur monthly charges if they did not cancel a premium channel package after a free three-month promotional period. On August 16, 2017, after hearing the FTC’s case-in-chief, Judge Gilliam of the U.S. District Court for the Northern District of California granted judgment for DirecTV on the majority of these claims. And earlier this week, the FTC agreed to voluntarily dismiss the remainder of its case with prejudice. Gibson Dunn partners Sean Royall and Rich Cunningham and associates Brett Rosenthal and Emily Riff recently published an article titled Lessons from FTC’s Loss in, and Subsequent Abandonment of, DirecTV Advertising Case in the Washington Legal Foundation’s The Legal Pulse blog. The article describes the case, the FTC’s evidence, and key takeaways for companies crafting advertising and marketing disclosures. Lessons from FTC’s Loss in, and Subsequent Abandonment of, DirecTV Advertising Case (click on link) © 2018, Washington Legal Foundation, The Legal Pulse, October 23, 2018. Reprinted with permission. Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the authors of this Client Alert, the Gibson Dunn lawyer with whom you usually work, or one of the leaders and members of the firm’s Antitrust and Competition or Privacy, Cybersecurity and Consumer Protection practice groups: Washington, D.C. Scott D. Hammond (+1 202-887-3684, shammond@gibsondunn.com) D. Jarrett Arp (+1 202-955-8678, jarp@gibsondunn.com) Adam Di Vincenzo (+1 202-887-3704, adivincenzo@gibsondunn.com) Howard S. Hogan (+1 202-887-3640, hhogan@gibsondunn.com) Joseph Kattan P.C. (+1 202-955-8239, jkattan@gibsondunn.com) Joshua Lipton (+1 202-955-8226, jlipton@gibsondunn.com) Cynthia Richman (+1 202-955-8234, crichman@gibsondunn.com) New York Alexander H. Southwell (+1 212-351-3981, asouthwell@gibsondunn.com) Eric J. Stock (+1 212-351-2301, estock@gibsondunn.com) Los Angeles Daniel G. Swanson (+1 213-229-7430, dswanson@gibsondunn.com) Debra Wong Yang (+1 213-229-7472, dwongyang@gibsondunn.com) Samuel G. Liversidge (+1 213-229-7420, sliversidge@gibsondunn.com) Jay P. Srinivasan (+1 213-229-7296, jsrinivasan@gibsondunn.com) Rod J. Stone (+1 213-229-7256, rstone@gibsondunn.com) Eric D. Vandevelde (+1 213-229-7186, evandevelde@gibsondunn.com) San Francisco Rachel S. Brass (+1 415-393-8293, rbrass@gibsondunn.com) Dallas M. Sean Royall (+1 214-698-3256, sroyall@gibsondunn.com) Veronica S. Lewis (+1 214-698-3320, vlewis@gibsondunn.com) Brian Robison (+1 214-698-3370, brobison@gibsondunn.com) Robert C. Walters (+1 214-698-3114, rwalters@gibsondunn.com) Denver Richard H. Cunningham (+1 303-298-5752, rhcunningham@gibsondunn.com) Ryan T. 
Bergsieker (+1 303-298-5774, rbergsieker@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

October 17, 2018 | SEC Warns Public Companies on Cyber-Fraud Controls

Click for PDF On October 16, 2018, the Securities and Exchange Commission issued a report warning public companies about the importance of internal controls to prevent cyber fraud.  The report described the SEC Division of Enforcement’s investigation of multiple public companies which had collectively lost nearly $100 million in a range of cyber-scams typically involving phony emails requesting payments to vendors or corporate executives.[1] Although these types of cyber-crimes are common, the Enforcement Division notably investigated whether the failure of the companies’ internal accounting controls to prevent unauthorized payments violated the federal securities laws.  The SEC ultimately declined to pursue enforcement actions, but nonetheless issued a report cautioning public companies about the importance of devising and maintaining a system of internal accounting controls sufficient to protect company assets. While the SEC has previously addressed the need for public companies to promptly disclose cybersecurity incidents, the new report sees the agency wading into corporate controls designed to mitigate such risks.  The report encourages companies to calibrate existing internal controls, and related personnel training, to ensure they are responsive to emerging cyber threats.  The report (issued to coincide with National Cybersecurity Awareness Month) clearly intends to warn public companies that future investigations may result in enforcement action. The Report of Investigation Section 21(a) of the Securities Exchange Act of 1934 empowers the SEC to issue a public Report of Investigation where deemed appropriate.  While SEC investigations are confidential unless and until the SEC files an enforcement action alleging that an individual or entity has violated the federal securities laws, Section 21(a) reports provide a vehicle to publicize investigative findings even where no enforcement action is pursued.  Such reports are used sparingly, perhaps every few years, typically to address emerging issues where the interpretation of the federal securities laws may be uncertain.  (For instance, recent Section 21(a) reports have addressed the treatment of digital tokens as securities and the use of social media to disseminate material corporate information.) The October 16 report details the Enforcement Division’s investigations into the internal accounting controls of nine issuers, across multiple industries, that were victims of cyber-scams. The Division identified two specific types of cyber-fraud – typically referred to as business email compromises or “BECs” – that had been perpetrated.  The first involved emails from persons claiming to be unaffiliated corporate executives, typically sent to finance personnel directing them to wire large sums of money to a foreign bank account for time-sensitive deals. These were often unsophisticated operations, textbook fakes that included urgent, secret requests, unusual foreign transactions, and spelling and grammatical errors. The second type of business email compromises were harder to detect. Perpetrators hacked real vendors’ accounts and sent invoices and requests for payments that appeared to be for otherwise legitimate transactions. As a result, issuers made payments on outstanding invoices to foreign accounts controlled by impersonators rather than their real vendors, often learning of the scam only when the legitimate vendor inquired into delinquent bills. 
According to the SEC, both types of frauds often succeeded, at least in part, because responsible personnel failed to understand their company’s existing cybersecurity controls or to appropriately question the veracity of the emails.  The SEC explained that the frauds themselves were not sophisticated in design or in their use of technology; rather, they relied on “weaknesses in policies and procedures and human vulnerabilities that rendered the control environment ineffective.” SEC Cyber-Fraud Guidance Cybersecurity has been a high priority for the SEC dating back several years. The SEC has pursued a number of enforcement actions against registered securities firms arising out of data breaches or deficient controls.  For example, just last month the SEC brought a settled action against a broker-dealer/investment-adviser which suffered a cyber-intrusion that had allegedly compromised the personal information of thousands of customers.  The SEC alleged that the firm had failed to comply with securities regulations governing the safeguarding of customer information, including the Identity Theft Red Flags Rule.[2] The SEC has been less aggressive in pursuing cybersecurity-related actions against public companies.  However, earlier this year, the SEC brought its first enforcement action against a public company for alleged delays in its disclosure of a large-scale data breach.[3] But such enforcement actions put the SEC in the difficult position of weighing charges against companies which are themselves victims of a crime.  The SEC has thus tried to be measured in its approach to such actions, turning to speeches and public guidance rather than a large number of enforcement actions.  (Indeed, the SEC has had to make the embarrassing disclosure that its own EDGAR online filing system had been hacked and sensitive information compromised.[4]) Hence, in February 2018, the SEC issued interpretive guidance for public companies regarding the disclosure of cybersecurity risks and incidents.[5]  Among other things, the guidance counseled the timely public disclosure of material data breaches, recognizing that such disclosures need not compromise the company’s cybersecurity efforts.  The guidance further discussed the need to maintain effective disclosure controls and procedures.  However, the February guidance did not address specific controls to prevent cyber incidents in the first place. The new Report of Investigation takes the additional step of addressing not just corporate disclosures of cyber incidents, but the procedures companies are expected to maintain in order to prevent these breaches from occurring.  The SEC noted that the internal controls provisions of the federal securities laws are not new, and based its report largely on the controls set forth in Section 13(b)(2)(B) of the Exchange Act.  
But the SEC emphasized that such controls must be “attuned to this kind of cyber-related fraud, as well as the critical role training plays in implementing controls that serve their purpose and protect assets in compliance with the federal securities laws.”  The report noted that the issuers under investigation had procedures in place to authorize and process payment requests, yet were still victimized, at least in part “because the responsible personnel did not sufficiently understand the company’s existing controls or did not recognize indications in the emailed instructions that those communications lacked reliability.”  The SEC concluded that public companies’ “internal accounting controls may need to be reassessed in light of emerging risks, including risks arising from cyber-related frauds” and “must calibrate their internal accounting controls to the current risk environment.” Unfortunately, the vagueness of such guidance leaves the burden on companies to determine how best to address emerging risks.  Whether a company’s controls are adequate may be judged in hindsight by the Enforcement Division; not surprisingly, companies and individuals under investigation often find the staff asserting that, if the controls did not prevent the misconduct, they were by definition inadequate.  Here, the SEC took a cautious approach, issuing a Section 21(a) report highlighting the risk rather than publicly identifying and penalizing the companies that had already been victimized by these scams. However, companies and their advisors should assume that, with this warning shot across the bow, the next investigation of a similar incident may result in more serious action.  Persons responsible for designing and maintaining the company’s internal controls should consider whether improvements (such as enhanced training) are warranted; having now spoken on the issue, the Enforcement Division is likely to view corporate inaction as a factor in how it assesses the company’s liability for future data breaches and cyber-frauds.
[1]   SEC Press Release (Oct. 16, 2018), available at www.sec.gov/news/press-release/2018-236; the underlying report may be found at www.sec.gov/litigation/investreport/34-84429.pdf.
[2]   SEC Press Release (Sept. 16, 2018), available at www.sec.gov/news/press-release/2018-213.  This enforcement action was particularly notable as the first occasion on which the SEC relied upon the rules requiring financial advisory firms to maintain a robust program for preventing identity theft, thus emphasizing the significance of those rules.
[3]   SEC Press Release (Apr. 24, 2018), available at www.sec.gov/news/press-release/2018-71.
[4]   SEC Press Release (Oct. 2, 2017), available at www.sec.gov/news/press-release/2017-186.
[5]   SEC Press Release (Feb. 21, 2018), available at www.sec.gov/news/press-release/2018-22; the guidance itself can be found at www.sec.gov/rules/interp/2018/33-10459.pdf.  The SEC provided in-depth guidance in this release on disclosure processes and considerations related to cybersecurity risks and incidents, which complements some of the points highlighted in the Section 21(a) report.
Gibson Dunn’s lawyers are available to assist with any questions you may have regarding these issues.  For further information, please contact the Gibson Dunn lawyer with whom you usually work in the firm’s Securities Enforcement or Privacy, Cybersecurity and Consumer Protection practice groups, or the following authors: Marc J.
Fagel – San Francisco (+1 415-393-8332, mfagel@gibsondunn.com) Alexander H. Southwell – New York (+1 212-351-3981, asouthwell@gibsondunn.com) Please also feel free to contact the following practice leaders and members: Securities Enforcement Group: New York Barry R. Goldsmith – Co-Chair (+1 212-351-2440, bgoldsmith@gibsondunn.com) Mark K. Schonfeld – Co-Chair (+1 212-351-2433, mschonfeld@gibsondunn.com) Reed Brodsky (+1 212-351-5334, rbrodsky@gibsondunn.com) Joel M. Cohen (+1 212-351-2664, jcohen@gibsondunn.com) Lee G. Dunst (+1 212-351-3824, ldunst@gibsondunn.com) Laura Kathryn O’Boyle (+1 212-351-2304, loboyle@gibsondunn.com) Alexander H. Southwell (+1 212-351-3981, asouthwell@gibsondunn.com) Avi Weitzman (+1 212-351-2465, aweitzman@gibsondunn.com) Lawrence J. Zweifach (+1 212-351-2625, lzweifach@gibsondunn.com) Washington, D.C. Richard W. Grime – Co-Chair (+1 202-955-8219, rgrime@gibsondunn.com) Stephanie L. Brooker  (+1 202-887-3502, sbrooker@gibsondunn.com) Daniel P. Chung (+1 202-887-3729, dchung@gibsondunn.com) Stuart F. Delery (+1 202-887-3650, sdelery@gibsondunn.com) Patrick F. Stokes (+1 202-955-8504, pstokes@gibsondunn.com) F. Joseph Warin (+1 202-887-3609, fwarin@gibsondunn.com) San Francisco Marc J. Fagel – Co-Chair (+1 415-393-8332, mfagel@gibsondunn.com) Winston Y. Chan (+1 415-393-8362, wchan@gibsondunn.com) Thad A. Davis (+1 415-393-8251, tdavis@gibsondunn.com) Charles J. Stevens (+1 415-393-8391, cstevens@gibsondunn.com) Michael Li-Ming Wong (+1 415-393-8234, mwong@gibsondunn.com) Palo Alto Paul J. Collins (+1 650-849-5309, pcollins@gibsondunn.com) Benjamin B. Wagner (+1 650-849-5395, bwagner@gibsondunn.com) Denver Robert C. Blume (+1 303-298-5758, rblume@gibsondunn.com) Monica K. Loseman (+1 303-298-5784, mloseman@gibsondunn.com) Los Angeles Michael M. Farhang (+1 213-229-7005, mfarhang@gibsondunn.com) Douglas M. Fuchs (+1 213-229-7605, dfuchs@gibsondunn.com) Privacy, Cybersecurity and Consumer Protection Group: Alexander H. Southwell – Co-Chair, New York (+1 212-351-3981, asouthwell@gibsondunn.com) M. Sean Royall – Dallas (+1 214-698-3256, sroyall@gibsondunn.com) Debra Wong Yang – Los Angeles (+1 213-229-7472, dwongyang@gibsondunn.com) Christopher Chorba – Los Angeles (+1 213-229-7396, cchorba@gibsondunn.com) Richard H. Cunningham – Denver (+1 303-298-5752, rhcunningham@gibsondunn.com) Howard S. Hogan – Washington, D.C. (+1 202-887-3640, hhogan@gibsondunn.com) Joshua A. Jessen – Orange County/Palo Alto (+1 949-451-4114/+1 650-849-5375, jjessen@gibsondunn.com) Kristin A. Linsley – San Francisco (+1 415-393-8395, klinsley@gibsondunn.com) H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Shaalu Mehra – Palo Alto (+1 650-849-5282, smehra@gibsondunn.com) Karl G. Nelson – Dallas (+1 214-698-3203, knelson@gibsondunn.com) Eric D. Vandevelde – Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com) Benjamin B. Wagner – Palo Alto (+1 650-849-5395, bwagner@gibsondunn.com) Michael Li-Ming Wong – San Francisco/Palo Alto (+1 415-393-8333/+1 650-849-5393, mwong@gibsondunn.com) Ryan T. Bergsieker – Denver (+1 303-298-5774, rbergsieker@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

October 10, 2018 | Artificial Intelligence and Autonomous Systems Legal Update (3Q18)

Click for PDF We are pleased to provide the following update on recent legal developments in the areas of artificial intelligence, machine learning, and autonomous systems (or “AI” for short), and their implications for companies developing or using products based on these technologies.  As the spread of AI rapidly increases, legal scrutiny in the U.S. of the potential uses and effects of these technologies (both beneficial and harmful) has also been increasing.  While we have chosen to highlight below several governmental and legislative actions from the past quarter, the area is rapidly evolving and we will continue to monitor further actions in these and related areas to provide future updates of potential interest on a regular basis. I.       Increasing Federal Government Interest in AI Technologies The Trump Administration and Congress have recently taken a number of steps aimed at pushing AI forward on the U.S. agenda, while also treating with caution foreign involvement in U.S.-based AI technologies.  Some of these actions may mean additional hurdles for cross-border transactions involving AI technology.  On the other hand, there may also be opportunities for companies engaged in the pursuit of AI technologies to influence the direction of future legislation at an early stage. A.       White House Studies AI In May, the Trump Administration kicked off what is becoming an active year in AI for the federal government by hosting an “Artificial Intelligence for American Industry” summit as part of its designation of AI as an “Administration R&D priority.”[1] During the summit, the White House also announced the establishment of a “Select Committee on Artificial Intelligence” to advise the President on research and development priorities and explore partnerships within the government and with industry.[2]  This Select Committee is housed within the National Science and Technology Council, and is chaired by Office of Science and Technology Policy leadership. Administration officials have said that a focus of the Select Committee will be to look at opportunities for increasing federal funds into AI research in the private sector, to ensure that the U.S. has (or maintains) a technological advantage in AI over other countries.  In addition, the Committee is to look at possible uses of the government’s vast store of taxpayer-funded data to promote the development of advanced AI technologies, without compromising security or individual privacy.  While it is believed that there will be opportunities for private stakeholders to have input into the Select Committee’s deliberations, the inaugural meeting of the Committee, which occurred in late June, was not open to the public for input. B.       AI in the NDAA for 2019 More recently, on August 13th, President Trump signed into law the John S. McCain National Defense Authorization Act (NDAA) for 2019,[3] which specifically authorizes the Department of Defense to appoint a senior official to coordinate activities relating to the development of AI technologies for the military, as well as to create a strategic plan for incorporating a number of AI technologies into its defense arsenal.  In addition, the NDAA includes the Foreign Investment Risk Review Modernization Act (FIRRMA)[4] and the Export Control Reform Act (ECRA),[5] both of which require the government to scrutinize cross-border transactions involving certain new technologies, likely including AI-related technologies. 
FIRRMA modifies the review process currently used by the Committee on Foreign Investment in the United States (CFIUS), an interagency committee that reviews the national security implications of investments by foreign entities in the United States.  With FIRRMA’s enactment, the scope of the transactions that CFIUS can review is expanded to include those involving “emerging and foundational technologies,” defined as those that are critical for maintaining the national security technological advantage of the United States.  While the changes to the CFIUS process are still fresh and untested, increased scrutiny under FIRRMA will likely have an impact on available foreign investment in the development and use of AI, at least where the AI technology involved is deemed such a critical technology and is sought to be purchased or licensed by foreign investors. Similarly, ECRA requires the President to establish an interagency review process with various agencies including the Departments of Defense, Energy, State and the head of other agencies “as appropriate,” to identify emerging and foundational technologies essential to national security in order to impose appropriate export controls.  Export licenses are to be denied if the proposed export would have a “significant negative impact” on the U.S. defense industrial base.  The terms “emerging and foundational technologies” are not expressly defined, but it is likely that AI technologies, which are of course “emerging,” would receive a close look under ECRA and that ECRA might also curtail whether certain AI technologies can be sold or licensed to foreign entities. The NDAA also established a National Security Commission on Artificial Intelligence “to review advances in artificial intelligence, related machine learning developments, and associated technologies.”  The Commission, made up of certain senior members of Congress as well as the Secretaries of Defense and Commerce, will function independently from other such panels established by the Trump Administration and will review developments in AI along with assessing risks related to AI and related technologies to consider how those methods relate to the national security and defense needs of the United States.  The Commission will focus on technologies that provide the U.S. with a competitive AI advantage, and will look at the need for AI research and investment as well as consider the legal and ethical risks associated with the use of AI.  Members are to be appointed within 90 days of the Commission being established and an initial report to the President and Congress is to be submitted by early February 2019. C.       Additional Congressional Interest in AI/Automation While a number of existing bills with potential impacts on the development of AI technologies remain stalled in Congress,[6] two more recently-introduced pieces of legislation are also worth monitoring as they progress through the legislative process. In late June, Senator Feinstein (D-CA) sponsored the “Bot Disclosure and Accountability Act of 2018,” which is intended to address  some of the concerns over the use of automated systems for distributing content through social media.[7] As introduced, the bill seeks to prohibit certain types of bot or other automated activity directed to political advertising, at least where such automated activity appears to impersonate human activity.  
The bill would also require the Federal Trade Commission to establish and enforce regulations to require public disclosure of the use of bots, defined as any “automated software program or process intended to impersonate or replicate human activity online.”  The bill provides that any such regulations are to be aimed at the “social media provider,” and would place the burden of compliance on such providers of social media websites and other outlets.  Specifically, the FTC is to promulgate regulations requiring the provider to take steps to ensure that any users of a social media website owned or operated by the provider would receive “clear and conspicuous notice” of the use of bots and similar automated systems.  FTC regulations would also require social media providers to police their systems, removing non-compliant postings and/or taking other actions (including suspension or removal) against users that violate such regulations.  While there are significant differences, the Feinstein bill is nevertheless similar in many ways to California’s recently enacted bot disclosure law (S.B. 1001), discussed more fully in our previous client alert located here.[8] Also of note, on September 26th, a bipartisan group of Senators introduced the “Artificial Intelligence in Government Act,” which seeks to provide the federal government with additional resources to incorporate AI technologies in the government’s operations.[9] As written, this new bill would require the General Services Administration to bring on technical experts to advise other government agencies, conduct research into future federal AI policy, and promote inter-agency cooperation with regard to AI technologies.  The bill would also create yet another federal advisory board to advise government agencies on AI policy opportunities and concerns.  In addition, the newly introduced legislation seeks to require the Office of Management and Budget to identify ways for the federal government to invest in and utilize AI technologies, and tasks the Office of Personnel Management with anticipating and providing training for the skills and competencies the government will require going forward to incorporate AI into its overall data strategy. II.       Potential Impact on AI Technology of Recent California Privacy Legislation Interestingly, in the related area of data privacy regulation, the federal government has been slower to respond, and it is the state legislatures that are leading the charge.[10] Most machine learning algorithms depend on the availability of large data sets for purposes of training, testing, and refinement.  Typically, the larger and more complete the datasets available, the better.  However, these datasets often include highly personal information about consumers, patients, or others of interest—data that can sometimes be used to predict information specific to a particular person even if attempts are made to keep the source of such data anonymous. The European Union’s General Data Protection Regulation, or GDPR, which went into force on May 25, 2018, has deservedly garnered a great deal of press as one of the first and most comprehensive collections of data privacy protections. While we’re only months into its effective period, the full impact and enforcement of the GDPR’s provisions have yet to be felt.  Still, many U.S. companies, forced to take steps to comply with the provisions of the GDPR at least with regard to EU citizens, have opted to take many of those same steps here in the U.S., despite the fact that no direct U.S.
federal analogue to the GDPR yet exists.[11] Rather than wait for the federal government to act, several states have opted to follow the lead of the GDPR and enact their own versions of comprehensive data privacy laws.  Perhaps the most significant of these state-legislated omnibus privacy laws is the California Consumer Privacy Act (“CCPA”), signed into law on June 28, 2018, and slated to take effect on January 1, 2020.[12]  The CCPA is not identical to the GDPR, differing in a number of key respects.  However, there are many similarities: the CCPA also has broad definitions of personal information/data, and seeks to provide a right to notice of data collection, a right of access to and correction of collected data, a right to be forgotten, and a right to data portability.  But how do the CCPA’s requirements differ from the GDPR for companies engaged in the development and use of AI technologies?  While there are many issues to consider, below we examine several of the key differences of the CCPA and their impact on machine learning and other AI-based processing of collected data. A.       Inferences Drawn from Personal Information The GDPR defines personal data as “any information relating to an identified or identifiable natural person,” such as “a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”[13]  Under the GDPR, personal data has implications in the AI space beyond just the data that is actually collected from an individual.  AI technology can be and often is used to generate additional information about a person from collected data, e.g., spending habits, facial features, risk of disease, or other inferences that can be made from the collected data.  Such inferences, or derivative data, may well constitute “personal data” under a broad view of the GDPR, although there is no specific mention of derivative data in the definition. By contrast, the CCPA goes further and specifically includes “inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, preferences, predispositions, behavior, attitudes, intelligence, abilities and aptitudes.”[14]  An “inference” is defined as “the derivation of information, data, assumptions, or conclusions from evidence, or another source of information or data.”[15] Arguably, the primary purpose of many AI systems is to draw inferences from a user’s information by mining data, looking for patterns, and generating analysis.  Although the CCPA does limit inferences to those drawn “to create a profile about a consumer,” the term “profile” is not defined in the CCPA.  However, the use of consumer information that is “deidentified” or “aggregated” is permitted by the CCPA.  Thus, one possible solution may be to take steps to “anonymize” any personal data used to derive any inferences.  As a result, companies may want to carefully consider the derivative/processed data that they are storing about a user and whether additional steps are required for CCPA compliance.
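To make the preceding point concrete, the sketch below (ours, not drawn from the CCPA or this update) illustrates one common pattern for working with deidentified data: direct identifiers are dropped or replaced with keyed hashes before any derived inferences are stored, so that downstream machine learning output is not keyed to a named consumer. The field names, key handling, and the toy "inference" are all hypothetical, and whether any particular technique satisfies the CCPA's deidentification standard is a separate legal question.

```python
import hashlib
import hmac

# Hypothetical secret used to pseudonymize identifiers; in practice it would be
# stored separately from the data (for example, in a key management service).
PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so records can still be
    linked to one another without exposing the underlying identity."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


def deidentify_record(record: dict) -> dict:
    """Drop direct identifiers and keep only the fields needed to derive inferences."""
    return {
        "user_ref": pseudonymize(record["email"]),      # pseudonymous key, not the email itself
        "purchase_totals": record["purchase_totals"],   # behavioral data used for the inference
    }


def infer_spending_tier(purchase_totals: list) -> str:
    """Toy 'inference' of the kind the CCPA treats as personal information when
    it is tied to an identifiable consumer."""
    average = sum(purchase_totals) / len(purchase_totals)
    return "high" if average > 100 else "standard"


raw_record = {"email": "consumer@example.com", "purchase_totals": [250.0, 120.0, 90.0]}
safe_record = deidentify_record(raw_record)
safe_record["spending_tier"] = infer_spending_tier(safe_record["purchase_totals"])
print(safe_record)  # the inference is stored against a pseudonymous reference only
```

In practice, keyed hashing of this kind is pseudonymization rather than full anonymization, and companies often pair it with aggregation or suppression of rare values before treating the resulting data as outside the statute's reach.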
B.       Identifying Categories of Personal Information The CCPA also requires disclosures of the categories of personal information being collected, the categories of sources from which personal information is collected, the purpose for collecting and selling personal information, and the categories of third parties with whom the business shares personal information.[16]  Although these categories are likely known and definable for static data collection, it may be more difficult to specifically disclose the purpose and categories for certain information when dynamic machine learning algorithms are used.  This is particularly true when, as discussed above, inferences about a user are included as personal information.  In order to meet these disclosure requirements, companies may need to carefully consider how they will define all of the categories of personal information collected or the purposes of use of that information, particularly when machine learning algorithms are used to generate additional inferences from, or derivatives of, personal data. C.       Personal Data Includes Households The CCPA’s definition of “personal data” also includes information pertaining to non-individuals, such as “households” – a term that the CCPA does not further define.[17]  In the absence of an explicit definition, the term “household” would seem to target information collected about a home and its inhabitants through smart home devices, such as thermostats, cameras, lights, TVs, and so on.  When looking to the types of personal data being collected, the CCPA may also encompass information about each of these smart home devices, such as name, location, usage, and special instructions (e.g., temperature controls, light timers, and motion sensing).  Furthermore, any inferences or derivative information generated by AI algorithms from the information collected from these smart home devices may also be covered as personal information.  Arguably, this could include information such as conversations with voice assistants or even information about when people are likely to be home, determined via cameras or motion sensors.  Companies developing smart home or other Internet of Things devices thus should carefully consider whether the scope and use they make of any information collected from “households” fall under the CCPA requirements for disclosure or other restrictions. III.       Continuing Efforts to Regulate Autonomous Vehicles Much like the potential for a comprehensive U.S. data privacy law, and despite a flurry of legislative activity in Congress in 2017 and early 2018 towards such a national regulatory framework, autonomous vehicles continue to operate under a complex patchwork of state and local rules with limited federal oversight.  We previously provided an update (located here)[18] discussing the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act,[19] which passed the U.S. House of Representatives by voice vote in September 2017, and its companion bill, the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act.[20]  Both bills have since stalled in the Senate, and with them the anticipated implementation of a uniform regulatory framework for the development, testing and deployment of autonomous vehicles. As the two bills languish in Congress, ‘chaperoned’ autonomous vehicles have already begun coexisting on roads alongside human drivers.
The accelerating pace of policy proposals—and the debate surrounding them—looks set to continue in late 2018 as virtually every major automaker is placing more autonomous vehicles on the road for testing and some manufacturers prepare to launch commercial services such as self-driving taxi ride-shares[21] into a national regulatory vacuum. A.       “Light-touch” Regulation The delineation of federal and state regulatory authority has emerged as a key issue because autonomous vehicles do not fit neatly into the existing regulatory structure.  One of the key aspects of the proposed federal legislation is that it empowers the National Highway Traffic Safety Administration (NHTSA) with the oversight of manufacturers of self-driving cars through enactment of future rules and regulations that will set the standards for safety and govern areas of privacy and cybersecurity relating to such vehicles.  The intention is to have a single body (NHTSA) develop a consistent set of rules and regulations for manufacturers, rather than continuing to allow the states to adopt a web of potentially widely differing rules and regulations that may ultimately inhibit development and deployment of autonomous vehicles.  This approach was echoed by safety guidelines released by the Department of Transportation (DoT) for autonomous vehicles.  Through the guidelines (“a nonregulatory approach to automated vehicle technology safety”),[22] the DoT avoids any compliance requirement or enforcement mechanism, at least for the time being, as the scope of the guidance is expressly to support the industry as it develops best practices in the design, development, testing, and deployment of automated vehicle technologies. Under the proposed federal legislation, the states can still regulate autonomous vehicles, but the guidance encourages states not to pass laws that would “place unnecessary burdens on competition and innovation by limiting [autonomous vehicle] testing or deployment to motor vehicle manufacturers only.”[23]  The third iteration of the DoT’s federal guidance, published on October 4, 2018, builds upon—but does not replace—the existing guidance, and reiterates that the federal government is placing the onus for safety on companies developing the technologies rather than on government regulation.[24]  The guidelines, which now include buses, transit and trucks in addition to cars, remain voluntary. B.       Safety Much of the delay in enacting a regulatory framework is a result of policymakers’ struggle to balance the industry’s desire to speed both the development and deployment of autonomous vehicle technologies with the safety and security concerns of consumer advocates. The AV START bill requires NHTSA to construct comprehensive safety regulations for AVs on a mandated, accelerated timeline for rulemaking, and it puts in place an interim regulatory framework that requires manufacturers to submit a Safety Evaluation Report addressing a range of key areas at least 90 days before testing, selling, or commercializing driverless cars.
But some lawmakers and consumer advocates remain skeptical in the wake of highly publicized setbacks in autonomous vehicle testing.[25]  Although the National Transportation Safety Board (NTSB) has authority to investigate auto accidents, there is still no federal regulatory framework governing liability for individuals and states.[26]  There are also ongoing concerns over cybersecurity risks,[27] the use of forced arbitration clauses by autonomous vehicle manufacturers,[28] and miscellaneous engineering problems that revolve around the way in which autonomous vehicles interact with obstacles commonly faced by human drivers, such as emergency vehicles,[29] graffiti on road signs, or even raindrops and tree shadows.[30] In August 2018, the Governors Highway Safety Association (GHSA) published a report outlining the key questions that manufacturers should urgently address.[31]  The report suggested that states seek to encourage “responsible” autonomous car testing and deployment while protecting public safety, and that lawmakers “review all traffic laws.”  The report also notes that public debate often blurs the boundaries between the different levels of automation the NHTSA has defined (ranging from level 0 (no automation) to level 5 (fully self-driving without the need for human occupants)), remarking that “most AVs for the foreseeable future will be Levels 2 through 4.  Perhaps they should be called ‘occasionally self-driving.’”[32] C.       State Laws Currently, 21 states and the District of Columbia have passed laws regulating the deployment and testing of self-driving cars, and governors in 10 states have issued executive orders related to them.[33]  For example, California expanded its testing rules in April 2018 to allow for remote monitoring instead of a safety driver inside the vehicle.[34]  However, state laws differ on basic terminology, such as the definition of “vehicle operator.”  Tennessee SB 151[35] points to the autonomous driving system (ADS), while Texas SB 2205[36] designates a “natural person” riding in the vehicle.  Meanwhile, Georgia SB 219[37] identifies the operator as the person who causes the ADS to engage, which might happen remotely in a vehicle fleet. These distinctions will affect how states license both human drivers and autonomous vehicles going forward.  Companies operating in this space accordingly need to stay abreast of legal developments in states in which they are developing or testing autonomous vehicles, while understanding that any new federal regulations may ultimately preempt those states’ authority to determine, for example, crash protocols or how they handle their passengers’ data. D.       ‘Rest of the World’ While the U.S. was the first country to legislate for the testing of automated vehicles on public roads, the absence of a national regulatory framework risks impeding innovation and development.  In the meantime, other countries are vying for pole position among manufacturers looking to test vehicles on roads.[38]  KPMG’s 2018 Autonomous Vehicles Readiness Index ranks 20 countries’ preparedness for an autonomous vehicle future. The Netherlands took the top spot, outperforming the U.S. (3rd) and China (16th).[39]  Japan and Australia plan to have self-driving cars on public roads by 2020.[40]  The U.K. government has announced that it expects to see fully autonomous vehicles on U.K.
roads by 2021, and is introducing legislation—the Automated and Electric Vehicles Act 2018—which installs an insurance framework addressing product liability issues arising out of accidents involving autonomous cars, including those wholly caused by an autonomous vehicle “when driving itself.”[41] E.       Looking Ahead While autonomous vehicles operating on public roads are likely to remain subject to both federal and state regulation, the federal government is facing increasing pressure to adopt a federal regulatory scheme for autonomous vehicles in 2018.[42]  Almost exactly one year after the House passed the SELF DRIVE Act, House Energy and Commerce Committee leaders called on the Senate to advance automated vehicle legislation, stating that “[a]fter a year of delays, forcing automakers and innovators to develop in a state-by-state patchwork of rules, the Senate must act to support this critical safety innovation and secure America’s place as a global leader in technology.”[43]  The continued absence of federal regulation renders the DoT’s informal guidance increasingly important.  The DoT has indicated that it will enact “flexible and technology-neutral” policies—rather than prescriptive performance-based standards—to encourage regulatory harmony and consistency as well as competition and innovation.[44]  Companies searching for more tangible guidance on safety standards at federal level may find it useful to review the recent guidance issued alongside the DoT’s announcement that it is developing (and seeking public input into) a pilot program for ‘highly or fully’ autonomous vehicles on U.S. roads.[45]  The safety standards being considered include technology disabling the vehicle if a sensor fails or barring vehicles from traveling above safe speeds, as well as a requirement that NHTSA be notified of any accident within 24 hours. [1] See https://www.whitehouse.gov/wp-content/uploads/2018/05/Summary-Report-of-White-House-AI-Summit.pdf; note also that the Trump Administration’s efforts in studying AI technologies follow, but appear largely separate from, several workshops on AI held by the Obama Administration in 2016, which resulted in two reports issued in late 2016 (see Preparing for the Future of Artificial Intelligence, and Artificial Intelligence, Automation, and the Economy). [2] Id. at Appendix A. [3] See https://www.mccain.senate.gov/public/index.cfm/2018/8/senate-passes-the-john-s-mccain-national-defense-authorization-act-for-fiscal-year-2019.  The full text of the NDAA is available at https://www.congress.gov/bill/115th-congress/house-bill/5515/text.  For additional information on CFIUS reform implemented by the NDAA, please see Gibson Dunn’s previous client update at https://www.gibsondunn.com/cfius-reform-our-analysis/. [4] See id.; see also https://www.treasury.gov/resource-center/international/Documents/FIRRMA-FAQs.pdf. [5] See https://foreignaffairs.house.gov/wp-content/uploads/2018/02/HR-5040-Section-by-Section.pdf.   [6] See, e.g. infra., Section III discussion of SELF DRIVE and AV START Acts, among others. [7] S.3127, 115th Congress (2018). [8] https://www.gibsondunn.com/new-california-security-of-connected-devices-law-and-ccpa-amendments/. [9] S.3502, 115th Congress (2018). [10] See also, infra., Section III for more discussion of specific regulatory efforts for autonomous vehicles. 
[11] However, as 2018 has already seen a fair number of hearings before Congress relating to digital data privacy issues, including appearances by key executives from many major tech companies, it seems likely that it may not be long before we see the introduction of a “GDPR-like” comprehensive data privacy bill.  Whether any resulting federal legislation would actually pre-empt state-enacted privacy laws to establish a unified federal framework is itself a hotly-contested issue, and remains to be seen. [12] AB 375 (2018); Cal. Civ. Code §1798.100, et seq. [13] Regulation (EU) 2016/679 (General Data Protection Regulation), Article 4 (1). [14] Cal. Civ. Code §1798.140(o)(1)(K). [15] Id.. at §1798.140(m). [16] Id. at §1798.110(c). [17] Id. at §1798.140(o)(1). [18] https://www.gibsondunn.com/accelerating-progress-toward-a-long-awaited-federal-regulatory-framework-for-autonomous-vehicles-in-the-united-states/. [19]   H.R. 3388, 115th Cong. (2017). [20]   U.S. Senate Committee on Commerce, Science and Transportation, Press Release, Oct. 24, 2017, available at https://www.commerce.senate.gov/public/index.cfm/pressreleases?ID=BA5E2D29-2BF3-4FC7-A79D-58B9E186412C. [21]   Sean O’Kane, Mercedes-Benz Self-Driving Taxi Pilot Coming to Silicon Valley in 2019, The Verge, Jul. 11, 2018, available at https://www.theverge.com/2018/7/11/17555274/mercedes-benz-self-driving-taxi-pilot-silicon-valley-2019. [22]   U.S. Dept. of Transp., Automated Driving Systems 2.0: A Vision for Safety 2.0, Sept. 2017, https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf. [23]   Id., at para 2. [24]   U.S. DEPT. OF TRANSP., Preparing for the Future of Transportation: Automated Vehicles 3.0, Oct. 4, 2018, https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf. [25]   Sasha Lekach, Waymo’s Self-Driving Taxi Service Could Have Some Major Issues, Mashable, Aug. 28, 2018, available at https://mashable.com/2018/08/28/waymo-self-driving-taxi-problems/#dWzwp.UAEsqM. [26]   Robert L. Rabin, Uber Self-Driving Cars, Liability, and Regulation, Stanford Law School Blog, Mar. 20, 2018, available at https://law.stanford.edu/2018/03/20/uber-self-driving-cars-liability-regulation/. [27]   David Shephardson, U.S. Regulators Grappling with Self-Driving Vehicle Security, Reuters. Jul. 10, 2018, available at https://www.reuters.com/article/us-autos-selfdriving/us-regulators-grappling-with-self-driving-vehicle-security-idUSKBN1K02OD. [28]   Richard Blumenthal, Press Release, Ten Senators Seek Information from Autonomous Vehicle Manufacturers on Their Use of Forced Arbitration Clauses, Mar. 23, 2018, available at https://www.blumenthal.senate.gov/newsroom/press/release/ten-senators-seek-information-from-autonomous-vehicle-manufacturers-on-their-use-of-forced-arbitration-clauses. [29]   Kevin Krewell, How Will Autonomous Cars Respond to Emergency Vehicles, Forbes, Jul. 31, 2018, available at https://www.forbes.com/sites/tiriasresearch/2018/07/31/how-will-autonomous-cars-respond-to-emergency-vehicles/#3eed571627ef. [30]   Michael J. Coren, All The Things That Still Baffle Self-Driving Cars, Starting With Seagulls, Quartz, Sept. 23, 2018, available at https://qz.com/1397504/all-the-things-that-still-baffle-self-driving-cars-starting-with-seagulls/. [31]   ghsa, Preparing For Automated Vehicles: Traffic Safety Issues For States, Aug. 
2018, available at https://www.ghsa.org/sites/default/files/2018-08/Final_AVs2018.pdf. [32]   Id., at 7. [33]   Brookings, The State of Self-Driving Car Laws Across the U.S., May 1, 2018, available at https://www.brookings.edu/blog/techtank/2018/05/01/the-state-of-self-driving-car-laws-across-the-u-s/. [34]   Aarian Marshall, Fully Self-Driving Cars Are Really Truly Coming to California, Wired, Feb. 26, 2018, available at, https://www.wired.com/story/california-self-driving-car-laws/; State of California, Department of Motor Vehicles, Autonomous Vehicles in California, available at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/bkgd. [35]   SB 151, available at http://www.capitol.tn.gov/Bills/110/Bill/SB0151.pdf. [36]   SB 2205, available at https://legiscan.com/TX/text/SB2205/2017. [37]   SB 219, available at http://www.legis.ga.gov/Legislation/en-US/display/20172018/SB/219. [38]   Tony Peng & Michael Sarazen, Global Survey of Autonomous Vehicle Regulations, Medium, Mar. 15, 2018, available at https://medium.com/syncedreview/global-survey-of-autonomous-vehicle-regulations-6b8608f205f9. [39]   KPMG, Autonomous Vehicles Readiness Index: Assessing Countries’ Openness and Preparedness for Autonomous Vehicles, 2018, (“The US has a highly innovative but largely disparate environment with little predictability regarding the uniform adoption of national standards for AVs. Therefore the prospect of  widespread driverless vehicles is unlikely in the near future. However, federal policy and regulatory guidance could certainly accelerate early adoption . . .”), p. 17, available at https://assets.kpmg.com/content/dam/kpmg/nl/pdf/2018/sector/automotive/autonomous-vehicles-readiness-index.pdf. [40]   Stanley White, Japan Looks to Launch Autonomous Car System in Tokyo by 2020, Automotive News, Jun. 4, 2018, available at http://www.autonews.com/article/20180604/MOBILITY/180609906/japan-self-driving-car; National Transport Commission Australia, Automated vehicles in Australia, available at https://www.ntc.gov.au/roads/technology/automated-vehicles-in-australia/. [41]   The Automated and Electric Vehicles Act 2018, available at http://www.legislation.gov.uk/ukpga/2018/18/contents/enacted; Lexology, Muddy Road Ahead Part II: Liability Legislation for Autonomous Vehicles in the United Kingdom, Sept. 21, 2018,  https://www.lexology.com/library/detail.aspx?g=89029292-ad7b-4c89-8ac9-eedec3d9113a; see further Anne Perkins, Government to Review Law Before Self-Driving Cars Arrive on UK Roads, The Guardian, Mar. 6, 2018, available at https://www.theguardian.com/technology/2018/mar/06/self-driving-cars-in-uk-riding-on-legal-review. [42]   Michaela Ross, Code & Conduit Podcast: Rep. Bob Latta Eyes Self-Driving Car Compromise This Year, Bloomberg Law, Jul. 26, 2018, available at https://www.bna.com/code-conduit-podcast-b73014481132/. [43]   Freight Waves, House Committee Urges Senate to Advance Self-Driving Vehicle Legislation, Sept. 10, 2018, available at https://www.freightwaves.com/news/house-committee-urges-senate-to-advance-self-driving-vehicle-legislation; House Energy and Commerce Committee, Press Release, Sept. 5, 2018, available at https://energycommerce.house.gov/news/press-release/media-advisory-walden-ec-leaders-to-call-on-senate-to-pass-self-driving-car-legislation/. [44]   See supra n. 24, U.S. DEPT. OF TRANSP., Preparing for the Future of Transportation: Automated Vehicles 3.0, Oct. 4, 2018, iv. [45]   David Shephardson, Self-driving cars may hit U.S. 
roads in pilot program, NHTSA says, Automotive News, Oct. 9, 2018, available at http://www.autonews.com/article/20181009/MOBILITY/181009630/self-driving-cars-may-hit-u.s.-roads-in-pilot-program-nhtsa-says. Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, or the authors: H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Claudia M. Barrett – Washington, D.C. (+1 202-887-3642, cbarrett@gibsondunn.com) Frances Annika Smithson – Los Angeles (+1 213-229-7914, fsmithson@gibsondunn.com) Ryan K. Iwahashi – Palo Alto (+1 650-849-5367, riwahashi@gibsondunn.com) Please also feel free to contact any of the following: Automotive/Transportation: Theodore J. Boutrous, Jr. – Los Angeles (+1 213-229-7000, tboutrous@gibsondunn.com) Christopher Chorba – Los Angeles (+1 213-229-7396, cchorba@gibsondunn.com) Theane Evangelis – Los Angeles (+1 213-229-7726, tevangelis@gibsondunn.com) Privacy, Cybersecurity and Consumer Protection: Alexander H. Southwell – New York (+1 212-351-3981, asouthwell@gibsondunn.com) Public Policy: Michael D. Bopp – Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com) Mylan L. Denerstein – New York (+1 212-351-3850, mdenerstein@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

October 5, 2018 |
New California Security of Connected Devices Law and CCPA Amendments

Click for PDF California continues to lead the United States in focusing attention on privacy and security of user data and devices.  Last week, Governor Jerry Brown signed into law two identical bills requiring manufacturers to include “reasonable security feature[s]” on all devices that are “capable of connecting to the Internet” (commonly known as the Internet of Things).[1]  The law is described as the first of its kind in the United States, and comes just three months after passage of the California Consumer Privacy Act of 2018 (“CCPA”);[2] both laws are set to take effect January 1, 2020.[3]  Collectively, these laws represent a dramatic expansion of data privacy law that will impact the products and processes of many companies. Also last week, Governor Brown signed into law Senate Bill 1121, which implemented amendments to the CCPA relating primarily to enforcement of its provisions and clarification of exemptions for medical information. Security of Connected Devices The new law is aimed at protecting “connected devices” from unauthorized access, and requires “reasonable security feature[s]” proportional to the device’s “nature and function” and the “information it may collect, contain, or transmit.”[4]  There are various notable exclusions, particularly where the devices are covered by certain other laws, or when a company merely purchases devices for resale (or for branding and resale) in California.[5]  Nonetheless, the law is unique in that it may require security for Internet-connected products regardless of the type of information or data at issue—a contrast to the CCPA and other data privacy and security laws. Who Must Comply with the Law? Anyone “who manufactures, or contracts with another person to manufacture on the person’s behalf, connected devices that are sold or offered for sale in California” is subject to the statute.[6]  However, the law includes an explicit carve-out that “contract[ing] with another person to manufacture on the person’s behalf” does not include a “contract only to purchase a connected device, or only to purchase and brand a connected device.”[7]  Thus, if a company is merely purchasing whole units and reselling them, or even branding and reselling them—effectively without the ability to indicate specifications for the device—it will likely not be subject to the new law. What’s Required? The law applies to manufacturers of “connected devices.”  A “connected device” is defined as “capable of connecting to the Internet . . . and . . . assigned an Internet Protocol address or Bluetooth address.”[8]  The number of products falling into this category is increasing at a remarkable rate, and the products span a multitude of applications, from consumer products (such as smart home features, including automatic lights or thermostats controlled remotely), to commercial use cases (such as electronic toll systems and “smart agriculture”). The law requires that such manufacturers “equip the device with a reasonable security feature or features” that is: Appropriate to the nature and function of the device; Appropriate to the information it may collect, contain, or transmit; and Designed to protect the device and its information from unauthorized access, destruction, use, modification, or disclosure.[9] The law does not specify what is “reasonable,” and relies upon the manufacturer to determine what is appropriate to the device.  As a result, “reasonable” will likely be further refined through enforcement actions (described below).
However, the law does provide that a device will satisfy the provisions if it is “equipped with a means for authentication outside a local area network,” and (1) each device is preprogrammed with a unique password, or (2) the user must create a “new means of authentication” (such as a password) before the device may be used.[10] [11] What’s Not Covered? Notably, the law excludes certain devices or manufacturers, particularly where they are covered by other existing laws, and makes clear statements of what this law does not do.  For example, the law does not apply to[12]: Any unaffiliated third-party software or applications the user adds to the device; Any provider of an electronic store, gateway, marketplace, or other means of purchasing or downloading software or applications; Devices subject to security requirements under federal law (e.g., FDA); and “Manufacturers” subject to HIPAA or the Confidentiality of Medical Information Act—at least “with respect to any activity regulated by those acts.”[13] How Will It Be Enforced? The law expressly does not provide for a private right of action, and it may only be enforced by the “Attorney General, a city attorney, a county counsel, or a district attorney.”[14]  It further does not set forth any criminal penalty, include a maximum civil fine, or specify any other authorized relief.  Nonetheless, the authorization of the enumerated entities to enforce it presumably includes the authority for those entities to seek civil fines, as they can under other consumer protection statutes (for example, Section 17206 of the California Business & Professions Code).[15] What Can You Do? If your company sells, or intends to sell, a product in California that connects to the Internet, consider: Whether the company is a “manufacturer”; The security features of the device, if any; What security features might be reasonable given the nature and function of the device and the nature of the data collected or used; Possibilities for alternative, or additional security measures for the specific device; and Engineering resources and timeline required to implement additional features. Many connected devices on the market today already have authentication and security features, but even those that do may benefit from an evaluation of their sufficiency in preparation for this new law.  Because the law may require actual product changes, rather than merely policy changes, addressing these issues early is important. Consultation with legal and information security professionals may be helpful. Amendments to CCPA Signed by Governor Brown on September 23, 2018 As anticipated, the California Legislature has begun to pass amendments to the CCPA, though the current changes are relatively modest.  
Governor Brown signed the latest amendments to the CCPA on September 23, 2018, which included[16]: Extending the deadline for the California Attorney General (“AG”) to develop and publish rules implementing the CCPA until July 1, 2020; Prohibiting the AG from enforcing the Act until either July 1, 2020, or six months after the publication of the regulations, whichever comes first; Limiting the civil penalties that the AG can impose to $2,500 for each violation of the CCPA or up to $7,500 per each intentional violation; Removing the requirement for a consumer to notify the AG within 30 days of filing a civil action in the event of a data breach and to then wait six months to see if the AG elects to pursue the case; Clarifying that consumers only have a right of action related to a business’ alleged failure to “implement and maintain reasonable security procedures and practices” that results in a breach and not for any other violations of the Act; Updating the definition of “personal information” to stress that certain identifiers (e.g., IP address, geolocation information and web browsing history) only constitute personal information if the data can be “reasonably linked, directly or indirectly, with a particular consumer or household”; and Explicitly exempting entities covered by HIPAA, GLBA and DPPA, as well as California’s Confidentiality of Medical Information Act and its Financial Information Privacy Act. The foregoing amendments may not have been of major significance—they were passed on the last day of the most recent legislative session.  The California Legislature is expected to consider more substantive changes to the law when it reconvenes for the 2019 – 2020 session in January 2019, including addressing additional concerns regarding enforcement mechanisms, the law’s broad scope, and the sweeping disclosure obligations. Companies that may be impacted by the CCPA should continue to monitor legislative and regulatory developments relating to the CCPA, and should begin planning for the implementation of this broad statute.    [1]   Assembly Bill 1906 and Senate Bill 327 contain identical language.    [2]   The California Consumer Privacy Act was the subject of a detailed analysis in a client alert issued by Gibson Dunn on July 12, 2018.  That publication is available here.    [3]   The law will be enacted as California Civil Code Sections 1798.91.04 to 1798.91.06.    [4]   Cal. Civil Code § 1798.91.04(a)(1) and (a)(2).    [5]   Cal. Civil Code § 1798.91.05(c) and § 1798.91.06.    [6]   Cal. Civil Code § 1798.91.05(c).    [7]   Cal. Civil Code § 1798.91.05(c).    [8]   Cal. Civil Code § 1798.91.05(b).    [9]   Cal. Civil Code § 1798.91.04(a)(1), (a)(2), and (a)(3).    [10]   Cal. Civil Code § 1798.91.04(b) (emphasis added).    [11]   Authentication is simply defined as a “method of verifying the authority” of a user accessing the information or device. Cal. Civil Code § 1798.91.05(a).    [12]   Cal. Civil Code § 1798.91.06.    [13]   That said, those laws generally require stricter provisions for security measures.    [14]   Cal. Civil Code § 1798.91.06(e).    [15]   See Cal. Bus. & Prof. Code § 17204.    [16]   S.B. 1121. S. Reg. Sess. 2017-2018. (CA 2018) The following Gibson Dunn lawyers assisted in the preparation of this client alert: Joshua A. Jessen, Benjamin B. Wagner, and Cassandra L. Gaedt-Sheckter. Gibson Dunn’s lawyers are available to assist with any questions you may have regarding these issues.  
For further information, please contact the Gibson Dunn lawyer with whom you usually work or the following leaders and members of the firm’s Privacy, Cybersecurity and Consumer Protection practice group: United States Alexander H. Southwell – Co-Chair, New York (+1 212-351-3981, asouthwell@gibsondunn.com) M. Sean Royall – Dallas (+1 214-698-3256, sroyall@gibsondunn.com) Debra Wong Yang – Los Angeles (+1 213-229-7472, dwongyang@gibsondunn.com) Christopher Chorba – Los Angeles (+1 213-229-7396, cchorba@gibsondunn.com) Richard H. Cunningham – Denver (+1 303-298-5752, rhcunningham@gibsondunn.com) Howard S. Hogan – Washington, D.C. (+1 202-887-3640, hhogan@gibsondunn.com) Joshua A. Jessen – Orange County/Palo Alto (+1 949-451-4114/+1 650-849-5375, jjessen@gibsondunn.com) Kristin A. Linsley – San Francisco (+1 415-393-8395, klinsley@gibsondunn.com) H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Shaalu Mehra – Palo Alto (+1 650-849-5282, smehra@gibsondunn.com) Karl G. Nelson – Dallas (+1 214-698-3203, knelson@gibsondunn.com) Eric D. Vandevelde – Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com) Benjamin B. Wagner – Palo Alto (+1 650-849-5395, bwagner@gibsondunn.com) Michael Li-Ming Wong – San Francisco/Palo Alto (+1 415-393-8333/+1 650-849-5393, mwong@gibsondunn.com) Ryan T. Bergsieker – Denver (+1 303-298-5774, rbergsieker@gibsondunn.com) Europe Ahmed Baladi – Co-Chair, Paris (+33 (0)1 56 43 13 00, abaladi@gibsondunn.com) James A. Cox – London (+44 (0)207071 4250, jacox@gibsondunn.com) Patrick Doris – London (+44 (0)20 7071 4276, pdoris@gibsondunn.com) Bernard Grinspan – Paris (+33 (0)1 56 43 13 00, bgrinspan@gibsondunn.com) Penny Madden – London (+44 (0)20 7071 4226, pmadden@gibsondunn.com) Jean-Philippe Robé – Paris (+33 (0)1 56 43 13 00, jrobe@gibsondunn.com) Michael Walther – Munich (+49 89 189 33-180, mwalther@gibsondunn.com) Nicolas Autet – Paris (+33 (0)1 56 43 13 00, nautet@gibsondunn.com) Kai Gesing – Munich (+49 89 189 33-180, kgesing@gibsondunn.com) Sarah Wazen – London (+44 (0)20 7071 4203, swazen@gibsondunn.com) Alejandro Guerrero – Brussels (+32 2 554 7218, aguerrero@gibsondunn.com) Asia Kelly Austin – Hong Kong (+852 2214 3788, kaustin@gibsondunn.com) Jai S. Pathak – Singapore (+65 6507 3683, jpathak@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

September 14, 2018 |
Kristin Linsley, Christina Greenberg and Jennifer Rho Named Among Women Leaders in Tech Law

The Recorder named San Francisco partner Kristin Linsley to its annual Women Leaders in Tech Law list. Additionally, Palo Alto associate Christina Greenberg and Los Angeles associate Jennifer Rho were named among The Recorder’s Next Generation Leaders. The list of 60 winners (30 Women Leaders in Tech Law and 30 Next Generation Leaders) recognized attorneys who “are helping the law and the legal profession address novel issues raised by technological advances.” The honorees were announced on September 14, 2018.

September 1, 2018 |
Timothy Loose Named Among Global Data Review’s 40 Under 40

Global Data Review named Los Angeles partner Timothy Loose to its 2018 40 Under 40 [PDF] list, which profiles “the 40 individuals who represent the best and the brightest of the data law bar around the world.” The list was published on September 1, 2018.

July 12, 2018 |
California Consumer Privacy Act of 2018

Click for PDF On June 28, 2018, Governor Jerry Brown signed the California Consumer Privacy Act of 2018 (“CCPA”), which has been described as a landmark privacy bill that aims to give California consumers increased transparency and control over how companies use and share their personal information.  The law will be enacted as several new sections of the California Civil Code (sections 1798.100 to 1798.198).  While lawmakers and others are already discussing amending the law prior to its January 1, 2020 effective date, as passed the law would require businesses collecting information about California consumers to: disclose what personal information is collected about a consumer and the purposes for which that personal information is used; delete a consumer’s personal information if requested to do so, unless it is necessary for the business to maintain that information for certain purposes; disclose what personal information is sold or shared for a business purpose, and to whom; stop selling a consumer’s information if requested to do so (the “right to opt out”), unless the consumer is under 16 years of age, in which case the business is required to obtain affirmative authorization to sell the consumer’s data (the “right to opt in”); and not discriminate against a consumer for exercising any of the aforementioned rights, including by denying goods or services, charging different prices, or providing a different level or quality of goods or services, subject to certain exceptions. The CCPA also empowers the California Attorney General to adopt regulations to further the statute’s purposes, and to solicit “broad public participation” before the law goes into effect.[1]  In addition, the law permits businesses to seek the opinion of the Attorney General for guidance on how to comply with its provisions. The CCPA does not appear to create any private rights of action, with one notable exception:  the CCPA expands California’s data security laws by providing, in certain cases, a private right of action to consumers “whose nonencrypted or nonredacted personal information” is subject to a breach “as a result of the business’ violation of the duty to implement and maintain reasonable security procedures,” which permits consumers to seek statutory damages of $100 to $750 per incident.[2]  The other rights embodied in the CCPA may be enforced only by the Attorney General—who may seek civil penalties up to $7,500 per violation. In the eighteen months ahead, businesses that collect personal information about California consumers will need to carefully assess their data privacy and disclosure practices and procedures to ensure they are in compliance when the law goes into effect on January 1, 2020.  Businesses may also want to consider whether to submit information to the Attorney General regarding the development of implementing regulations prior to the effective date. I.     Background and Context The CCPA was passed quickly in order to block a similar privacy initiative from appearing on election ballots in November.  The ballot initiative had obtained enough signatures to be presented to voters, but its backers agreed to abandon it if lawmakers passed a comparable bill.  The ballot initiative, if enacted, could not easily be amended by the legislature,[3] so legislators quickly drafted and unanimously passed AB 375 before the June 28 deadline to withdraw items from the ballot.  
While not as strict as the EU’s new General Data Protection Regulation (GDPR), the CCPA is more stringent than most existing privacy laws in the United States. II.     Who Must Comply With The CCPA? The CCPA applies to any “business,” including any for-profit entity that collects consumers’ personal information, which does business in California, and which satisfies one or more of the following thresholds: has annual gross revenues in excess of twenty-five million dollars ($25,000,000); possesses the personal information of 50,000 or more consumers, households, or devices; or earns more than half of its annual revenue from selling consumers’ personal information.[4] The CCPA also applies to any entity that controls or is controlled by such a business and shares common branding with the business.[5] The definition of “Personal Information” under the CCPA is extremely broad and includes things not considered “Personal Information” under other U.S. privacy laws, like location data, purchasing or consuming histories, browsing history, and inferences drawn from any of the consumer information.[6]  As a result of the breadth of these definitions, the CCPA likely will apply to hundreds of thousands of companies, both inside and outside of California. III.     CCPA’s Key Rights And Provisions The stated goal of the CCPA is to ensure the following rights of Californians: (1) to know what personal information is being collected about them; (2) to know whether their personal information is sold or disclosed and to whom; (3) to say no to the sale of personal information; (4) to access their personal information; and (5) to equal service and price, even if they exercise their privacy rights.[7]  The CCPA purports to enforce these rights by imposing several obligations on covered businesses, as discussed in more detail below.            A.     Transparency In The Collection Of Personal Information The CCPA requires disclosure of information about how a business collects and uses personal information, and also gives consumers the right to request certain additional information about what data is collected about them.[8]  Specifically, a consumer has the right to request that a business disclose: the categories of personal information it has collected about that consumer; the categories of sources from which the personal information is collected; the business or commercial purpose for collecting or selling personal information; the categories of third parties with whom the business shares personal information; and the specific pieces of personal information it has collected about that consumer.[9] While categories (1)-(4) are fairly general, category (5) requires very detailed information about a consumer, and businesses will need to develop a mechanism for providing this type of information. 
Under the CCPA, businesses also must affirmatively disclose certain information “at or before the point of collection,” and cannot collect additional categories of personal information or use personal information collected for additional purposes without providing the consumer with notice.[10]  Specifically, businesses must disclose in their online privacy policies and in any California-specific description of a consumer’s rights a list of the categories of personal information they have collected about consumers in the preceding 12 months by reference to the enumerated categories (1)-(5), above.[11] Businesses must provide consumers with at least two methods for submitting requests for information, including, at a minimum, a toll-free telephone number, and if the business maintains an Internet Web site, a Web site address.[12]            B.     Deletion Of Personal Information The CCPA also gives consumers a right to request that businesses delete personal information about them.  Upon receipt of a “verifiable request” from a consumer, a business must delete the consumer’s personal information and direct any service providers to do the same.  There are exceptions to this deletion rule when “it is necessary for the business or service provider to maintain the consumer’s personal information” for one of nine enumerated reasons: Complete the transaction for which the personal information was collected, provide a good or service requested by the consumer, or reasonably anticipated within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract between the business and the consumer. Detect security incidents, protect against malicious, deceptive, fraudulent, or illegal activity; or prosecute those responsible for that activity. Debug to identify and repair errors that impair existing intended functionality. Exercise free speech, ensure the right of another consumer to exercise his or her right of free speech, or exercise another right provided for by law. Comply with the California Electronic Communications Privacy Act pursuant to Chapter 3.6 (commencing with Section 1546) of Title 12 of Part 2 of the Penal Code. Engage in public or peer-reviewed scientific, historical, or statistical research in the public interest that adheres to all other applicable ethics and privacy laws, when the businesses’ deletion of the information is likely to render impossible or seriously impair the achievement of such research, if the consumer has provided informed consent. To enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business. Comply with a legal obligation. Otherwise use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information.[13] Because these exceptions are so broad, especially given the catch-all provision in category (9), it is unclear whether the CCPA’s right to deletion will substantially alter a business’s obligations as a practical matter.            C.     
Disclosure Of Personal Information Sold Or Shared For A Business Purpose The CCPA also requires businesses to disclose what personal information is sold or disclosed for a business purpose, and to whom.[14]  The disclosure of certain information is only required upon receipt of a “verifiable consumer request.”[15]  Specifically, a consumer has the right to request that a business disclose: The categories of personal information that the business collected about the consumer; The categories of personal information that the business sold about the consumer and the categories of third parties to whom the personal information was sold, by category or categories of personal information for each third party to whom the personal information was sold; and The categories of personal information that the business disclosed about the consumer for a business purpose.[16] A business must also affirmatively disclose (including in its online privacy policy and in any California-specific description of consumer’s rights): The category or categories of consumers’ personal information it has sold, or if the business has not sold consumers’ personal information, it shall disclose that fact; and The category or categories of consumers’ personal information it has disclosed for a business purpose, or if the business has not disclosed the consumers’ personal information for a business purpose, it shall disclose that fact.[17] This information must be disclosed in two separate lists, each listing the categories of personal information it has sold about consumers in the preceding 12 months that fall into categories (1) and (2), above.[18]            D.     Right To Opt-Out Of Sale Of Personal Information The CCPA also requires businesses to stop selling a consumer’s personal information if requested to do so by the consumer (“opt-out”).  In addition, consumers under the age of 16 must affirmatively opt-in to allow selling of personal information, and parental consent is required for consumers under the age of 13.[19]  Businesses must provide notice to consumers that their information may be sold and that consumers have the right to opt out of the sale.  In order to comply with the notice requirement, businesses must include a link titled “Do Not Sell My Personal Information” on their homepage and in their privacy policy.[20]            E.     Prohibition Against Discrimination For Exercising Rights The CCPA prohibits a business from discriminating against a consumer for exercising any of their rights in the CCPA, including by denying goods or services, charging different prices, or providing a different level or quality of goods or services.  There are exceptions, however, if the difference in price or level or quality of goods or services “is reasonably related to the value provided to the consumer by the consumer’s data.”  For example, while the language of the statute is not entirely clear, a business may be allowed to charge those users who do not allow the sale of their data while providing the service for free to users who do allow the sale of their data—as long as the amount charged is reasonably related to the value to the business of that consumer’s data.  A business may also offer financial incentives for the collection of personal information, as long as the incentives are not “unjust, unreasonable, coercive, or usurious” and the business notifies the consumer of the incentives and the consumer gives prior opt-in consent.            F.     
Data Breach Provisions The CCPA provides a private right of action to consumers “whose nonencrypted or nonredacted personal information” is subject to a breach “as a result of the business’ violation of the duty to implement and maintain reasonable security procedures.”[21]  Under the CCPA, a consumer may seek statutory damages of $100 to $750 per incident or actual damages, whichever is greater.[22]  Notably, the meaning of “personal information” under this provision is the same as it is in California’s existing data breach law, rather than the broad definition used in the remainder of the CCPA.[23]  Consumers bringing a private action under this section must first provide written notice to the business of the alleged violations (and allow the business an opportunity to cure the violations), and must notify the Attorney General and give the Attorney General an opportunity to prosecute.[24]  Notice is not required for an “action solely for actual pecuniary damages suffered as a result of the alleged violations.”[25] IV.     Potential Liability Section 1798.150, regarding liability for data breaches, is the only provision in the CCPA expressly allowing a private right of action.  The damages available for such a civil suit are limited to the greater of (1) between $100 and $750 per consumer per incident, or (2) actual damages.  Individual consumers’ claims also can potentially be aggregated in a class action. The other rights embodied in the CCPA may be enforced only by the Attorney General—who may seek civil penalties not to exceed $2,500 for each violation, unless the violation was intentional, in which case the Attorney General can seek up to $7,500 per violation.[26] [1]   To be codified at Cal. Civ. Code § 1798.185(a) [2]      Cal. Civ. Code § 1798.150. [3]      By its own terms, the ballot initiative could be amended upon a statute passed by 70% of each house of the Legislature if the amendment furthered the purposes of the act, or by a majority for certain provisions to impose additional privacy restrictions.  See The Consumer Right to Privacy Act of 2018 No. 17-0039, Section 5. Otherwise, approved ballot initiatives in California can only be amended with voter approval. California Constitution, Article II, Section 10. [4]   Cal. Civ. Code § 1798.140(c)(1). [5]   Cal. Civ. Code § 1798.140(c)(2). [6]   Cal. Civ. Code § 1798.140(o). The definition of “personal information” does not include publicly available information, and the CCPA also does not generally restrict a business’s ability to collect or use deidentified aggregate consumer information. Cal. Civ. Code § 1798.145(a)(5). [7]   Assemb. Bill 375, 2017-2018 Reg. Sess., Ch. 55, Sec. 2 (Cal. 2018) [8]   Cal. Civ. Code § 1798.100 and 1798.110. [9]   Cal. Civ. Code § 1798.110(a). [10]     Cal. Civ. Code §§ 1798.100(b); 1798.110(c). [11]     Cal. Civ. Code §§ 1798.110(c); 1798.130(a)(5)(B). [12]   Cal. Civ. Code § 1798.130(a)(1). [13]   Cal. Civ. Code § 1798.105(d). [14]   Cal. Civ. Code § 1798.115. [15]   Cal. Civ. Code § 1798.115(a)-(b). [16]   Cal. Civ. Code § 1798.115(a). [17]   Cal. Civ. Code § 1798.115(c). [18]   Cal. Civ. Code § 1798.130(a)(5)(C). [19]   Cal. Civ. Code § 1798.120(d). [20]   Cal. Civ. Code § 1798.135. [21]   Cal. Civ. Code § 1798.150. [22]   Cal. Civ. Code § 1798.150. [23]   Cal. Civ. Code § 1798.81.5(d)(1)(A) [24]   Cal. Civ. Code § 1798.150(b). [25]   Cal. Civ. Code § 1798.150 (b)(1). [26]   Cal. Civ. Code § 1798.155. 
The following Gibson Dunn lawyers assisted in the preparation of this client alert: Joshua A. Jessen, Benjamin B. Wagner, Christina Chandler Kogan, Abbey A. Barrera, and Alison Watkins. Gibson Dunn’s lawyers are available to assist with any questions you may have regarding these issues.  For further information, please contact the Gibson Dunn lawyer with whom you usually work or the following leaders and members of the firm’s Privacy, Cybersecurity and Consumer Protection practice group: United States Alexander H. Southwell – Co-Chair, New York (+1 212-351-3981, asouthwell@gibsondunn.com) M. Sean Royall – Dallas (+1 214-698-3256, sroyall@gibsondunn.com) Debra Wong Yang – Los Angeles (+1 213-229-7472, dwongyang@gibsondunn.com) Christopher Chorba – Los Angeles (+1 213-229-7396, cchorba@gibsondunn.com) Richard H. Cunningham – Denver (+1 303-298-5752, rhcunningham@gibsondunn.com) Howard S. Hogan – Washington, D.C. (+1 202-887-3640, hhogan@gibsondunn.com) Joshua A. Jessen – Orange County/Palo Alto (+1 949-451-4114/+1 650-849-5375, jjessen@gibsondunn.com) Kristin A. Linsley – San Francisco (+1 415-393-8395, klinsley@gibsondunn.com) H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Shaalu Mehra – Palo Alto (+1 650-849-5282, smehra@gibsondunn.com) Karl G. Nelson – Dallas (+1 214-698-3203, knelson@gibsondunn.com) Eric D. Vandevelde – Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com) Benjamin B. Wagner – Palo Alto (+1 650-849-5395, bwagner@gibsondunn.com) Michael Li-Ming Wong – San Francisco/Palo Alto (+1 415-393-8333/+1 650-849-5393, mwong@gibsondunn.com) Ryan T. Bergsieker – Denver (+1 303-298-5774, rbergsieker@gibsondunn.com) Europe Ahmed Baladi – Co-Chair, Paris (+33 (0)1 56 43 13 00, abaladi@gibsondunn.com) James A. Cox – London (+44 (0)207071 4250, jacox@gibsondunn.com) Patrick Doris – London (+44 (0)20 7071 4276, pdoris@gibsondunn.com) Bernard Grinspan – Paris (+33 (0)1 56 43 13 00, bgrinspan@gibsondunn.com) Penny Madden – London (+44 (0)20 7071 4226, pmadden@gibsondunn.com) Jean-Philippe Robé – Paris (+33 (0)1 56 43 13 00, jrobe@gibsondunn.com) Michael Walther – Munich (+49 89 189 33-180, mwalther@gibsondunn.com) Nicolas Autet – Paris (+33 (0)1 56 43 13 00, nautet@gibsondunn.com) Kai Gesing – Munich (+49 89 189 33-180, kgesing@gibsondunn.com) Sarah Wazen – London (+44 (0)20 7071 4203, swazen@gibsondunn.com) Alejandro Guerrero Perez – Brussels (+32 2 554 7218, aguerreroperez@gibsondunn.com) Asia Kelly Austin – Hong Kong (+852 2214 3788, kaustin@gibsondunn.com) Jai S. Pathak – Singapore (+65 6507 3683, jpathak@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

July 5, 2018 |
Supreme Court Finds Failure to Prove a Sherman Act Section 1 Violation in Credit Card Market

Click for PDF On June 25, 2018, the Supreme Court of the United States assuaged the concerns of many that antitrust enforcement would hobble new and creative ways of conducting business, particularly businesses that have relied on technology to bring consumers and sellers together by offering a “platform” that creates a highly convenient way for them to interact and consummate sales. In Ohio v. American Express, the Court held that plaintiffs failed to prove a Sherman Act Section 1 violation in the credit card market because they presented evidence of alleged anticompetitive effects only on the merchant side of the relevant market. Without evidence of the impact of the challenged practices on the cardholder side of the market, the Court concluded that plaintiffs failed to carry their burden to prove anticompetitive effects. The Court’s opinion has several important elements beyond its holding that certain two-sided platform markets must be evaluated as a single relevant market: Significantly, the Supreme Court discussed a framework for analyzing alleged restraints under the rule of reason for the first time.  Both the majority and dissent adopted the parties’ agreed-upon, three-step framework for analyzing restraints under the rule of reason.  Under this framework, the plaintiff bears the initial burden of proving anticompetitive effects, which shifts the burden to the defendant to show a procompetitive justification.  If the defendant meets its burden of proving procompetitive efficiencies, then the burden shifts back to the plaintiff to show that those efficiencies could have been achieved through less restrictive means.  Notably, the Court did not mention any balancing of anticompetitive effects against procompetitive justifications. The third step in the above rule of reason framework may be the focus of scrutiny as plaintiffs look to find “less restrictive alternatives” to overcome defendants’ evidence of a procompetitive rationale for a challenged practice.  DOJ-FTC Competitor Collaboration Guidelines provide, however, that the agencies “do not search for a theoretically less restrictive alternative that is not realistic given business realities.”  Section 3.36(b). The Court also found that evidence that output of transactions in the relevant market had increased during the relevant period undercut plaintiffs’ reliance solely on evidence of price increases by Amex.  The Court’s reliance on the failure to prove output restriction reinforces the continued vitality of the Court’s prior decision in Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209 (1993). The Court rejected the argument that market definition could be dispensed with based on evidence of purported actual anticompetitive effects in the form of merchant fee increases by Amex.  The Court in this regard distinguished horizontal restraints, which in some cases may be analyzed without “precisely defin[ing] the relevant market,” and vertical restraints, stating that vertical restraints frequently do not pose any threat to competition absent the defendant possessing market power. Therefore, it is critical to precisely define the relevant market when evaluating vertical restraints. The case arose out of a decades-old practice.  For more than fifty years, American Express Company and American Express Travel Services Company (together, “Amex”) have included “anti-steering” provisions in contracts with merchants who agree to accept American Express cards as a means of payment. 
These provisions prohibited merchants from trying to persuade customers to use cards other than American Express cards or imposing special conditions on customers using American Express cards. Absent the challenged provisions, merchants had a strong incentive to encourage customers to use other credit cards because other credit card providers charged merchants lower fees than Amex.  Amex uses the money received from its higher merchant fees to fund investments in its customer rewards program, which offers cardholders better rewards than those offered by rival credit card companies. The United States and several States (“plaintiffs”) sued Amex in October 2010, alleging that the anti-steering provisions violated Section 1 of the Sherman Act. The United States District Court for the Southern District of New York entered judgment for plaintiffs, finding that the provisions violated Section 1 because they caused merchants to pay higher fees by precluding merchants from encouraging cardholders to use an alternative card with a lower fee at the point of sale. The district court sided with plaintiffs in finding that the credit card market was really two separate markets: a merchant market and a cardholder market. The United States Court of Appeals for the Second Circuit reversed, holding that the district court erroneously considered only the dealings between Amex and merchants.  As a result, it failed to recognize that the credit card market was a single, “two-sided” market, not two separate markets.  Therefore, the impact of the anti-steering provisions on the cardholder side of the market had to be analyzed in order to determine if those provisions had a substantial anticompetitive effect in the relevant market.  The Supreme Court affirmed in a 5-4 decision. The majority, in an opinion authored by Justice Thomas, agreed with the Second Circuit that the credit card market should be considered as a single market because credit card providers compete to provide credit card transactions, but can create and sell those services only if both the cardholder and the merchant simultaneously choose to use the credit card network as a means of payment. The market is “two-sided” in that it involves the simultaneous provision of services to both cardholders and merchants; in any transaction, a credit card network cannot sell its payment services individually to only the cardholder or only the merchant. The majority observed that the credit card market exhibited strong “indirect” network effects because prices to cardholders affected demand by merchants and prices to merchants affected demand by cardholders.  Higher prices to cardholders would tend to decrease the number of cardholders, which would decrease the attractiveness of that card to merchants, which in turn would decrease the attractiveness of the card to cardholders.  Conversely, higher prices to merchants would decrease the number of merchants accepting the card, which would decrease the utility of the card to cardholders, decreasing the number of cardholders. In either case, the provider increasing prices faced the risk of “a feedback loop of declining demand.”  Providers therefore had to strike a balance between the prices charged on one side of the platform and the prices charged on the other side. In the credit card market, different cardholders might attribute different value to broad acceptance of their card by numerous merchants or to generosity of “cash back” or other loyalty or usage rewards. 
Similarly, merchants might assign different values to the level of fees by a credit card provider versus the card’s ability to present the merchant with a higher proportion of “big spenders.” Significantly for future cases, the majority observed that not every “platform” business bringing together buyers and sellers should be considered to be a single market. The majority focused on the strength of the indirect network effects—that is, the potential for increased prices on one side to reduce demand on the other side, prompting a feedback loop of declining demand.  The majority discussed a newspaper selling advertisements to advertisers as an example of a “platform” that should not be considered a single market. According to the majority, the indirect network effects operated only in one direction. Advertisers might well care if high subscription prices reduced the number of readers. But because readers are largely indifferent to the amount of advertising in a newspaper, a reduction in advertisements caused by higher advertising rates would not lead to a reduced number of readers. The Court emphasized the importance of market definition in analyzing alleged anticompetitive effects caused by vertical restraints. Unlike horizontal restraints among competitors, the majority wrote, “[v]ertical restraints often pose no risk to competition unless the entity imposing them has market power, which cannot be evaluated unless the Court first defines the relevant market.” Thus, the Court disagreed with plaintiffs’ assertion that under FTC v. Indiana Federation of Dentists, 476 U.S. 447 (1986), evidence of actual adverse effects in the form of increased merchant fees was sufficient proof.  The Court distinguished Indiana Federation of Dentists by noting that it involved a horizontal restraint, and therefore the Court concluded it did not need to precisely define the relevant market to evaluate the restraint’s competitive impact. The dissent, authored by Justice Breyer, accused the majority of “abandoning traditional market-definition approaches” by declining to define the relevant market by assessing the substitutability of other products or services for the product or service at issue. As the dissent noted, because consumers’ ability to shift to substitutes constrains the ability of a seller to raise prices, it is necessary to include reasonable substitutes within the relevant market. The dissent argued that the card providers’ services to merchants and services to cardholders were complements, not substitutes, in the sense that, like gasoline and tires for a car, both must be purchased to have value. But this analogy is inapt in at least two respects. First, there is no need for simultaneity in the purchase of gasoline and tires. Few, if any, consumers buy new tires each time they purchase gasoline. Second, the two complementary products are both purchased by the owner or operator of the vehicle. The seller of gasoline and tires does not have to purchase a service from anyone in order to sell the gasoline or tires (unless the buyer wishes to use a credit card, in which case both the buyer and the merchant must simultaneously choose to use the payment services offered by the credit card provider). This is unlike the credit card context where both the cardholder and the merchant must simultaneously choose to use the payment services offered by the credit card provider. 
The Court’s acceptance that some businesses operate in a single, two-sided market has implications for antitrust cases involving technology-based “platform” businesses, such as ride-sharing and short-term home rentals, that have become a substantial and growing component of the economy. The outcomes in future cases are likely to turn on the strength of the evidence showing that network effects constrain pricing decisions. Makan Delrahim, the head of the DOJ’s Antitrust Division, said this past week that he had feared the Supreme Court would cause “harm to our economy” by creating a rule for evaluating two-sided markets that would harm new “platform” business models like Uber, AirBnB and eBay. He described DOJ’s philosophy with respect to the case as “it’s one interrelated market, it’s a new business model, and you can’t stick your head in the sand and say, ‘If you’re raising the prices – whether on the consumer or driver – it can’t have an effect.’ And it could be a positive effect, because a Lyft can do the same thing and now be able to compete better with an Uber or whatever the next one would be.”  While Mr. Delrahim acknowledged that the Amex ruling likely would apply to companies like Uber and AirBnB, he does not believe Google will benefit from it, noting that consumers do not use Google Search just to see advertisements. Although the Amex decision is notable for its focus on commercial realities and acceptance of the existence of two-sided markets, there are other significant aspects of the decision.  Most notably, the Court discussed a three-step, burden-shifting framework for analyzing restraints under the rule of reason. This provides welcome guidance, as the Court had not previously discussed any framework or methodology for evaluating claims under the rule of reason.  While the framework was agreed-upon among the parties below, its adoption by the majority (and acceptance by the dissent) nevertheless provides important instruction regarding the steps to be conducted by courts in weighing rule of reason claims under either Section 1 or Section 2.  In the first step of the decision’s framework, the plaintiff bears the burden to prove anticompetitive effects in the relevant market. If the plaintiff carries that burden, in the second step the burden shifts to the defendant to demonstrate a procompetitive rationale for the challenged restraint. If the defendant makes that showing, then in the third step the burden shifts back to the plaintiff to “demonstrate that the procompetitive efficiencies could reasonably be achieved through less restrictive means.” The Court held that plaintiffs had not satisfied the first step of the rule of reason framework. As with many cases, the Court’s definition of the relevant market determined the outcome. To prove anticompetitive effects, plaintiffs relied solely on direct evidence of Amex’s increases in merchant fees during 2005-2010. However, the Court concluded that because the market was two-sided, such evidence was incomplete and did not demonstrate anticompetitive effects in the form of either higher prices for credit card transactions or a reduction in the number of such transactions. Indeed, the Court found that certain evidence in the record cut against plaintiffs’ claim that the anti-steering provisions were the cause of any increases in merchant fees by Amex—for example, rival card companies had also increased merchant fees. 
The Court also noted that credit card transaction output had increased substantially during the relevant period, further undermining any claim of anticompetitive effects. Quoting from Brooke Group, 509 U.S. at 237, the majority wrote that it will “not infer competitive injury from price and output data absent some evidence that tends to prove that output was restricted or prices were above a competitive level.”  The Court’s focus on output restriction under Brooke Group demonstrates the Court’s continued insistence on the application of sound economic principles in evaluating antitrust claims. While it noted Amex’s rationale for the anti-steering provisions, the Court did not address the second or third step of the rule of reason framework given its finding that the plaintiffs had failed to satisfy the first step. The Court’s recognition in the third step that proven procompetitive efficiencies may be overcome by a showing of less restrictive means of achieving those efficiencies will likely cause private plaintiffs and enforcement agencies to increase their focus on potential alternatives. Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please feel free to contact any member of the firm’s Antitrust and Competition practice group or the following authors: Trey Nicoud – San Francisco (+1 415-393-8308, tnicoud@gibsondunn.com) Rod J. Stone – Los Angeles (+1 213-229-7256, rstone@gibsondunn.com) Daniel G. Swanson – Los Angeles (+1 213-229-7430, dswanson@gibsondunn.com) Richard G. Parker – Washington, D.C. (+1 202-955-8503, rparker@gibsondunn.com) M. Sean Royall – Dallas (+1 214-698-3256, sroyall@gibsondunn.com) Chelsea G. Glover – Dallas (+1 214-698-3357, cglover@gibsondunn.com)

June 22, 2018 |
Supreme Court Holds That Individuals Have Fourth Amendment Privacy Rights In Cell Phone Location Records

Click for PDF Carpenter v. United States, No. 16-402  Decided June 22, 2018 The Supreme Court held 5-4 that law enforcement officials must generally obtain a warrant when seeking historical cell phone location records from a telecommunications provider. Background: Wireless carriers regularly collect and store information reflecting the location of cell phones when those phones connect to cell sites to transmit and receive information.  Prosecutors collected a suspect’s cell-site location data from wireless carriers following the procedure in the Stored Communications Act, 18 U.S.C. §§ 2701-12, but without obtaining a warrant.  The suspect argued that the Government’s acquisition of this data without a warrant was an unconstitutional search that violated the Fourth Amendment.  This argument set up a conflict between two lines of Supreme Court precedent: the longstanding third-party doctrine, which holds that information a person voluntarily reveals to others is not protected by the Fourth Amendment; and several recent cases holding that cell phones implicate significant privacy concerns because so many people store large amounts of information on them. Issue: Whether an individual has a protected privacy interest under the Fourth Amendment in historical cell phone location records. Court’s Holding: Yes.  The Fourth Amendment protects cell phone location records because of their comprehensive and private nature, even though they are collected and held by the phone company.  The Government must ordinarily obtain a warrant before acquiring the records. “In light of the deeply revealing nature of [cell site location data], its depth, breadth, and comprehensive reach, and the inescapable and automatic nature of its collection, the fact that such information is gathered by a third party does not make it any less deserving of Fourth Amendment protection.” Chief Justice Roberts, writing for the 5-4 majority What It Means: The decision continues a trend of recent Supreme Court decisions limiting Government access to personal information stored electronically.  In United States v. Jones (2012), the Court unanimously rejected the Government’s argument that it could place a GPS tracker on a suspect’s car without a warrant, although it divided as to the reason.  Likewise, in Riley v. California (2014), the Court unanimously declined to allow police officers to routinely search cell phones incident to arrest, based in part on the volume and importance of personal information stored on them. The Court emphasized that its decision was limited to the collection of historical cell phone location records covering an extended period of time.  The Court declined to consider whether the Fourth Amendment protected real-time cell phone location information or historical location data covering a shorter period of time than the Government collected here (seven days).  The Court also emphasized that it was not calling into question conventional surveillance tools such as security cameras, or collection techniques involving foreign affairs or national security. The Court expressly declined to overrule the third-party doctrine.  Instead, it stated that the doctrine should not be extended to historical cell site location data because the breadth and depth of the information available made that data “qualitatively different” from other information that the Court had previously allowed the Government to obtain from third parties without a warrant. 
Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding developments at the Supreme Court.  Please feel free to contact the following practice leaders: Appellate and Constitutional Law Practice Caitlin J. Halligan +1 212.351.3909 challigan@gibsondunn.com Mark A. Perry +1 202.887.3667 mperry@gibsondunn.com Nicole A. Saharsky +1 202.887.3669 nsaharsky@gibsondunn.com   Related Practice: Privacy, Cybersecurity and Consumer Protection Ahmed Baladi +33 (0) 1 56 43 13 00 abaladi@gibsondunn.com Alexander H. Southwell +1 212.351.3981 asouthwell@gibsondunn.com   Related Practice: White Collar Defense and Investigations Joel M. Cohen +1 212.351.2664 jcohen@gibsondunn.com Charles J. Stevens +1 415.393.8391 cstevens@gibsondunn.com F. Joseph Warin +1 202.887.3609 fwarin@gibsondunn.com   © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

May 7, 2018 |
A Closer Look At Barnes & Noble Data Breach Ruling

Orange County partner Joshua Jessen and associate Ashley Van Zelst are the authors of “A Closer Look At Barnes & Noble Data Breach Ruling,” [PDF] published by Law360 on May 7, 2018.

April 17, 2018 |
Supreme Court Holds That Recent Legislation Moots Dispute Over Emails Stored Overseas

Click for PDF

United States v. Microsoft Corp., No. 17-2
Decided April 17, 2018

Today, the Supreme Court held that Microsoft’s dispute with the federal government over the government’s attempts to access email stored overseas is moot.

Background: The Stored Communications Act, 18 U.S.C. § 2701 et seq., authorizes the government to require an email provider to disclose the contents of emails (and certain other electronic data) within its control if the government obtains a warrant based on probable cause. In this case, the federal government obtained a warrant to obtain emails from an email account used in drug trafficking. The drug trafficking allegedly occurred in the United States, but the emails were stored on a data server in Ireland. Microsoft refused to provide the emails on the ground that the Stored Communications Act does not apply to emails stored overseas.

Issue: Whether the Stored Communications Act requires an email provider to disclose to the government emails stored abroad.

Court’s Holding: The case is moot. On March 23, 2018, the President signed the Clarifying Lawful Overseas Use of Data Act (CLOUD Act), which amended the Stored Communications Act so that it now applies to emails stored abroad. The parties’ dispute under the old version of the law is therefore moot.

“No live dispute remains between the parties over the issue with respect to which certiorari was granted.” Per Curiam

What It Means: Given passage of the CLOUD Act, there was no longer any need for the Supreme Court to interpret the prior version of the Stored Communications Act. The CLOUD Act requires an email provider to disclose emails, so long as the statute’s procedures have been followed, regardless of whether those emails are “located within or outside of the United States.” CLOUD Act § 103(a)(1) (to be codified at 18 U.S.C. § 2713). But the CLOUD Act permits courts to exempt providers from disclosing emails of customers who are not U.S. citizens or residents, if disclosure would risk violating the laws of certain foreign governments. CLOUD Act § 103(b) (to be codified at 18 U.S.C. § 2703(h)).

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding developments at the Supreme Court.  Please feel free to contact the following practice leaders: Appellate and Constitutional Law Practice Caitlin J. Halligan +1 212.351.3909 challigan@gibsondunn.com Mark A. Perry +1 202.887.3667 mperry@gibsondunn.com Nicole A. Saharsky +1 202.887.3669 nsaharsky@gibsondunn.com Related Practice: White Collar Defense and Investigations Joel M. Cohen +1 212.351.2664 jcohen@gibsondunn.com Charles J. Stevens +1 415.393.8391 cstevens@gibsondunn.com F. Joseph Warin +1 202.887.3609 fwarin@gibsondunn.com Related Practice: Privacy, Cybersecurity and Consumer Protection Alexander H. Southwell +1 212.351.3981 asouthwell@gibsondunn.com   © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

April 12, 2018 |
Trump Administration Imposes Unprecedented Russia Sanctions

Click for PDF

On April 6, 2018, the U.S. Department of the Treasury’s Office of Foreign Assets Control (“OFAC”) significantly enhanced the impact of sanctions against Russia by blacklisting almost 40 Russian oligarchs, officials, and their affiliated companies pursuant to Obama-era sanctions, as modified by the Countering America’s Adversaries Through Sanctions Act (“CAATSA”) of 2017.  In announcing the sanctions, Treasury Secretary Steven Mnuchin cited Russia’s involvement in “a range of malign activity around the globe,” including the continued occupation of Crimea, instigation of violence in Ukraine, support of the Bashar al-Assad regime in Syria, attempts to subvert Western democracies, and malicious cyber activities.[1]  Russian stocks fell sharply in response to the new measures, and the ruble depreciated almost 5 percent against the dollar.[2]

Although this is not the first time that the Trump administration has imposed sanctions against Russia, it is the most significant action taken to date.  In June 2017, OFAC added 38 individuals and entities involved in the Ukraine conflict to OFAC’s list of Specially Designated Nationals (“SDNs”).[3]  The April 6 sanctions added to the SDN List seven Russian oligarchs and 12 companies they own or control, 17 senior Russian government officials, and the primary state-owned Russian weapons trading company and its subsidiary, a Russian bank.[4]  These designations include major, publicly traded companies that have been listed on the London and Hong Kong exchanges and that have thousands of customers and tens of thousands of investors throughout the world.

OFAC has never designated similar companies, and the potential challenges for global companies seeking to comply with OFAC measures are substantial.  An SDN designation prohibits U.S. persons—including U.S. companies, U.S. financial institutions, and their foreign branches—from engaging in any transactions with the designees or with entities in which they hold an aggregate ownership of 50 percent or more.  The designation of a small company in a regional market can be devastating for the company, but rarely would it impose meaningful collateral consequences on global markets or investors.  In this case, sanctions on companies such as EN+ and RUSAL (amongst others) have already impacted a substantial portion of a core global commodity (the aluminum market) while also preventing further trades in their shares, a move that could harm pension funds, mutual funds, and other investors that have long held stakes worth billions of dollars.

To minimize the immediate disruptions, OFAC issued two time-limited general licenses (regulatory exemptions) permitting companies and individuals to undertake certain transactions to “wind down” business dealings related to the designated parties.[5]  However, our assessment is that disruptions are inevitable, and the size of the sanctions targets in this case means that the general licenses will have potentially limited effect in reducing dislocations.

Background

OFAC’s April 6 designations mark a clear change in tone from the Trump administration, which had initially resisted imposing the full force of CAATSA’s sanctions.
For example, as we wrote in our 2017 Year-End Sanctions Update, CAATSA required the imposition of secondary sanctions on any person the President determined to have been engaging in “a significant transaction with a person that is part of, or operates for or on behalf of, the defense or intelligence sectors of the Government of the Russian Federation.”[6]  On the day such sanctions were to be imposed, State Department representatives provided classified briefings to Congressional leaders to explain their decision not to impose any such sanctions under CAATSA, namely because the Trump administration felt that CAATSA was already having a deterrent effect, which removed any immediate need to impose sanctions.[7]

Section 241 of CAATSA also required OFAC to publish a report by January 29, 2018 identifying “the most significant senior foreign political figures and oligarchs in the Russian Federation”[8] (the “Section 241 List”).  The Treasury Department issued the report shortly before midnight on the due date, publicly naming 114 senior Russian political figures and 96 oligarchs.[9]  Although the report did not result in any sanctions or legal repercussions, the public naming of such persons did cause confusion for those who sought to engage with them in compliance with U.S. law.[10]  However, most observers were highly critical of the list, claiming that it demonstrated that the Trump administration was failing to adequately address Congressional intent to punish Moscow.  Interestingly, almost all of the oligarchs designated on April 6 originally appeared on the Section 241 List.[11]

Designations

Included among the list of sanctioned parties were seven Russian oligarchs designated for being Russian government officials or for operating in the energy sector of the Russian Federation economy, and 12 companies they own or control.  In its press release, OFAC warned that the 12 companies identified as owned or controlled by the designated Russian oligarchs “should not be viewed as exhaustive, and the regulated community remains responsible for compliance with OFAC’s 50 percent rule.”  This rule extends U.S. sanctions prohibitions to entities owned 50 percent or more by blocked persons, even if those companies are not themselves listed by OFAC.  The opacity of ownership in the Russian economy makes the 50 percent rule very difficult to operationalize.

In addition, OFAC designated 17 senior Russian government officials, as well as a state-owned company and its subsidiary.  The sanctioned individuals and entities, as described by OFAC, are set out below.

Designated Russian Oligarchs

1. Vladimir Bogdanov: Bogdanov is the Director General and Vice Chairman of the Board of Directors of Surgutneftegaz, a vertically integrated oil company operating in Russia. OFAC imposed sectoral sanctions on Surgutneftegaz pursuant to Directive 4 issued under E.O. 13662 in September 2014.

2. Oleg Deripaska: Deripaska has said that he does not separate himself from the Russian state.  He has also acknowledged possessing a Russian diplomatic passport, and claims to have represented the Russian government in other countries.  Deripaska has been investigated for money laundering, and has been accused of threatening the lives of business rivals, illegally wiretapping a government official, and taking part in extortion and racketeering.  There are also allegations that Deripaska bribed a government official, ordered the murder of a businessman, and had links to a Russian organized crime group.
3. Suleiman Kerimov: Kerimov is a member of the Russian Federation Council.  On November 20, 2017, Kerimov was detained in France and held for two days. He is alleged to have brought hundreds of millions of euros into France – transporting as much as 20 million euros at a time in suitcases, in addition to conducting more conventional funds transfers – without reporting the money to French tax authorities.  Kerimov allegedly launders the funds through the purchase of villas.  Kerimov was also accused of failing to pay 400 million euros in taxes.

4. Kirill Shamalov: Shamalov married Putin’s daughter Katerina Tikhonova in February 2013 and his fortunes drastically improved following the marriage; within 18 months, he acquired a large portion of shares of Sibur, a Russia-based company involved in oil and gas exploration, production, processing, and refining.  A year later, he was able to borrow more than $1 billion through a loan from Gazprombank, a state-owned entity subject to sectoral sanctions pursuant to E.O. 13662.  That same year, long-time Putin associate Gennady Timchenko, who is himself designated pursuant to E.O. 13661, sold an additional 17 percent of Sibur’s shares to Shamalov.  Shortly thereafter, Kirill Shamalov joined the ranks of the billionaire elite around Putin.

5. Andrei Skoch: Skoch is a deputy of the Russian Federation’s State Duma.  Skoch has longstanding ties to Russian organized criminal groups, including time spent leading one such enterprise.

6. Viktor Vekselberg: Vekselberg is the founder and Chairman of the Board of Directors of the Renova Group.  The Renova Group is comprised of asset management companies and investment funds that own and manage assets in several sectors of the Russian economy, including energy.  In 2016, Russian prosecutors raided Renova’s offices and arrested two associates of Vekselberg, including the company’s chief managing director and another top executive, for bribing officials connected to a power generation project in Russia.

Designated Oligarch-Owned Companies

7. B-Finance Ltd.: British Virgin Islands company owned or controlled by, directly or indirectly, Oleg Deripaska.

8. Basic Element Limited: Basic Element Limited is based in Jersey and is the private investment and management company for Deripaska’s various business interests.

9. EN+ Group: Owned or controlled by, directly or indirectly, Oleg Deripaska, B-Finance Ltd., and Basic Element Limited.  EN+ Group is located in Jersey and is a leading international vertically integrated aluminum and power producer.  This is a publicly traded company that has been listed, inter alia, on the London Stock Exchange.

10. EuroSibEnergo: Owned or controlled by, directly or indirectly, Oleg Deripaska and EN+ Group.  EuroSibEnergo is one of the largest independent power companies in Russia, operating power plants across Russia and producing around nine percent of Russia’s total electricity.

11. United Company RUSAL PLC: Owned or controlled by, directly or indirectly, EN+ Group.  United Company RUSAL PLC is based in Jersey and is one of the world’s largest aluminum producers, responsible for seven percent of global aluminum production.  This is a publicly traded company that has been listed, inter alia, on the Hong Kong Stock Exchange.

12. Russian Machines: Owned or controlled by, directly or indirectly, Oleg Deripaska and Basic Element Limited.  Russian Machines was established to manage the machinery assets of Basic Element Limited.
13. GAZ Group: Owned or controlled by, directly or indirectly, Oleg Deripaska and Russian Machines.  GAZ Group is Russia’s leading manufacturer of commercial vehicles.

14. Agroholding Kuban: Owned or controlled by, directly or indirectly, Oleg Deripaska and Basic Element Limited.

15. Gazprom Burenie, OOO: Owned or controlled by Igor Rotenberg.  Gazprom Burenie, OOO provides oil and gas exploration services in Russia.

16. NPV Engineering Open Joint Stock Company: Owned or controlled by Igor Rotenberg.  NPV Engineering Open Joint Stock Company provides management and consulting services in Russia.

17. Ladoga Menedzhment, OOO: Owned or controlled by Kirill Shamalov.  Ladoga Menedzhment, OOO is located in Russia and engaged in deposit banking.

18. Renova Group: Owned or controlled by Viktor Vekselberg.  Renova Group, based in Russia, is comprised of investment funds and management companies operating in the energy sector, among others, of Russia’s economy.

Designated Russian State-Owned Firms

19. Rosoboroneksport: State-owned Russian weapons trading company with longstanding and ongoing ties to the Government of Syria, with billions of dollars’ worth of weapons sales over more than a decade.  Rosoboroneksport is being designated under E.O. 13582 for having materially assisted, sponsored, or provided financial, material, or technological support for, or goods or services in support of, the Government of Syria.

20. Russian Financial Corporation Bank (RFC Bank): Owned by Rosoboroneksport.  RFC Bank is incorporated in Moscow, Russia, and its operations include deposit banking activities.

Designated Russian Government Officials

21. Andrey Akimov: Chairman of the Management Board of state-owned Gazprombank.

22. Mikhail Fradkov: President of the Russian Institute for Strategic Studies (RISS), a major research and analytical center established by the President of the Russian Federation, which provides information support to the Presidential Administration, Federation Council, State Duma, and Security Council.

23. Sergey Fursenko: Member of the board of directors of Gazprom Neft, a subsidiary of state-owned Gazprom.

24. Oleg Govorun: Head of the Presidential Directorate for Social and Economic Cooperation with the Commonwealth of Independent States Member Countries.  Govorun is being designated pursuant to E.O. 13661 for being an official of the Government of the Russian Federation.

25. Alexey Dyumin: Governor of the Tula region of Russia.  He previously headed the Special Operations Forces, which played a key role in Russia’s purported annexation of Crimea.

26. Vladimir Kolokoltsev: Minister of Internal Affairs and General Police of the Russian Federation.

27. Konstantin Kosachev: Chairperson of the Council of the Federation Committee on Foreign Affairs.

28. Andrey Kostin: President, Chairman of the Management Board, and Member of the Supervisory Council of state-owned VTB Bank.

29. Alexey Miller: Chairman of the Management Committee and Deputy Chairman of the Board of Directors of state-owned company Gazprom.

30. Nikolai Patrushev: Secretary of the Russian Federation Security Council.

31. Vladislav Reznik: Member of the Russian State Duma.

32. Evgeniy Shkolov: Aide to the President of the Russian Federation.

33. Alexander Torshin: State Secretary – Deputy Governor of the Central Bank of the Russian Federation.

34. Vladimir Ustinov: Plenipotentiary Envoy to Russia’s Southern Federal District.

35. Timur Valiulin: Head of the General Administration for Combatting Extremism within Russia’s Ministry of Interior.
36. Alexander Zharov: Head of Roskomnadzor (the Federal Service for the Supervision of Communications, Information Technology, and Mass Media).

37. Viktor Zolotov: Director of the Federal Service of National Guard Troops and Commander of the National Guard Troops of the Russian Federation.

All assets of the designated individuals and entities that are subject to U.S. jurisdiction, and of any other entities blocked by operation of law as a result of their ownership by a sanctioned party, are frozen, and U.S. persons are generally prohibited from dealing with them.  OFAC’s Frequently Asked Questions (“FAQs”) make clear that if a blocked person owns less than 50 percent of a U.S. company, the U.S. company will not be blocked.  However, the U.S. company (1) must block all property and interests in property in which the blocked person has an interest and (2) cannot make any payments, dividends, or disbursements of profits to the blocked person, and must place them in a blocked account at a U.S. financial institution.[12]

Non-U.S. persons could face secondary sanctions for knowingly facilitating significant transactions for or on behalf of the designated individuals or entities.  CAATSA strengthened the secondary sanctions measures that could be used to target such persons, although such measures typically carry less risk because, as a matter of implementation, OFAC traditionally warns those who may be transacting with parties that could subject them to secondary sanctions and provides them with an opportunity to cure.  While this outreach and deterrence model of imposing secondary sanctions was developed under the Obama administration (and resulted in very few impositions of secondary sanctions), the Trump administration could theoretically change it and impose secondary sanctions without the traditional warning.  However, that appears unlikely, and the Trump administration has indicated that it will continue to provide warnings before imposing secondary sanctions.

Two CAATSA provisions bear particular note as they are implicated by Friday’s actions:  section 226, which authorizes sanctions on foreign financial institutions for facilitating a transaction on behalf of a Russian person on the SDN List, and section 228, which seeks to impose sanctions on a person who “facilitates a significant transaction…for or on behalf of any person subject to sanctions imposed by the United States with respect to the Russian Federation.”[13]  OFAC has clarified that the section 228 provision extends to persons listed on either the SDN or the Sectoral Sanctions Identifications (“SSI”) List, as well as persons they may own or control pursuant to OFAC’s 50 percent rule.[14]  As we noted when CAATSA was passed, despite the mandatory nature of these sections, the President appears to retain the discretion to impose restrictions based upon whether he finds certain transactions significant or for other reasons.  With the expansion of the SDN List to include major players in global commodities such as EN+ or RUSAL, more companies around the world that rely on these companies could find themselves at least theoretically at risk of being sanctioned themselves.  Companies should also consider this risk where there is reliance on material produced by any company in the Russian military establishment and sold by the Russian state arms company Rosoboroneksport, which was also sanctioned.

General Licenses

In an effort to minimize the immediate disruptions to U.S.
persons and global markets (especially given the sanctioning of major publicly traded corporations that have thousands of clients and investors throughout the world), OFAC issued General Licenses 12 and 13, permitting companies to undertake certain transactions and activities to “wind down” certain business dealings related to certain listed designated parties.  These General Licenses only cover U.S. persons, which has led some non-U.S. companies to inquire whether their ability to wind down operations with respect to the SDN companies would place them at risk for secondary sanctions (as they would be engaging with sanctioned parties and perhaps trigger the CAATSA provisions above).  OFAC has noted in its FAQs that the U.S. Government would not find a transaction “significant” if a U.S. person would not need a specific license to undertake it.[15]  That is, it would seem that at least for the duration of the General Licenses a non-U.S. party can engage in similar wind down operations without risking secondary sanctions.

General License 12, which expires June 5, 2018, authorizes U.S. persons to engage in transactions and activities with the 12 oligarch-owned designated entities that are “ordinarily incident and necessary to the maintenance or wind down of operations, contracts, or other agreements” related to these 12 entities (as well as those entities impacted by operation of OFAC’s 50 percent rule).  This is a broader wind down provision than OFAC has issued in the past in that it allows not just “wind down” activities but also non-defined “maintenance” activities.  Despite this breadth, it is already uncertain how this General License will actually work in practice.  Permissible transactions and activities include importation from blocked entities and broader dealings with them.  However, no payments may be made directly to blocked entities; rather, payments owed to the blocked entities listed in General License 12 must be made into blocked, interest-bearing accounts and reported to OFAC by June 18, 2018 (10 business days after the expiration of the license).[16]  It is not clear why a sanctioned party would wish to deliver goods and services to parties if the sanctioned party cannot be paid.  In line with the FAQ noted above, for non-U.S. companies it would seem that, in order to avoid secondary sanctions implications, the same restrictions would apply: that is, continued transactions are permitted on a wind down basis, but transfers of funds to the SDN companies could be viewed as “significant” or otherwise sanctionable.

Recognizing how broad the sanctions are and how far they may implicate subsidiaries of SDN companies inside the United States, OFAC’s FAQs clarify that General License 12 generally permits the blocked entities listed to pay U.S. persons their salaries, pension payments, or other benefits due during the wind down period.  U.S. persons employed by entities that are not explicitly listed in General License 12—principally the designated Russian state-owned entities—do not have the benefit of this wind down period.  OFAC’s FAQs note that such U.S. persons may seek authorization from OFAC to maintain or wind down their relationships with any such blocked entity, but make clear that continued employment or board membership related to these entities is prohibited.[17]  The implications of these restrictions are significant where, as is the case with the blocked entities listed in General License 12, U.S. subsidiaries exist and U.S.
persons are involved throughout company operations.

General License 13, which expires May 7, 2018, similarly allows transactions and activities otherwise prohibited under the April 6 sanctions.  This license allows transactions and activities necessary to “divest or transfer debt, equity, or other holdings” in three designated Russian entities:  EN+ Group PLC, GAZ Group, and United Company RUSAL PLC.  Permitted transactions include facilitating, clearing, and settling transactions.  General License 13, however, does not permit any divestment or transfer to a blocked person, including the three entities listed in General License 13.[18]  As with General License 12, transactions permitted under General License 13 must be reported to OFAC within 10 business days after the expiration of the license.  Once again, it is uncertain how the General License will work in practice.  Given the designations, which have depressed the share prices of the sanctioned parties, it is unknown who might be willing to purchase the shares even if U.S. holders are permitted to sell them.

Other Ramifications for Investors, Supply Chains, and Customers

The April 6 sanctions raise other significant questions and practical challenges for U.S. and non-U.S. companies, with particular risks for investors as well as the manufacturers, suppliers, and customers of the SDN companies.

Investors and fund managers will need to conduct significant diligence into the participants and ownership structures of their funds, including fund limited partners, to determine whether sanctioned persons or entities are involved.  Moreover, those who have seen the value of any assets tied to these companies decline significantly are still permitted to try to sell those assets to non-U.S. persons.  However, given the challenge in finding buyers and evidence that certain financial institutions and brokers are already refusing to engage in any trades (even during the wind down period), the investment community needs to potentially prepare for long-term holding of blocked assets (by setting up sequestered accounts).

For those within the supply chains of sanctioned companies, from suppliers of commodities to finished goods, as well as customers of sanctioned companies, the concern will be the potential need to replace key commercial relationships that will become increasingly difficult (if not prohibited) to maintain.  Companies that have relied on RUSAL, for example, as a source of aluminum or as a customer for their goods will potentially need to find replacements.  While aluminum is not in short supply globally, in certain jurisdictions RUSAL has a commanding position and even a monopoly.  It is unclear how companies that seek to be compliant with OFAC regulations will navigate a world in which RUSAL has been a primary or secondary supplier (and there is no clear way to avoid such engagement so long as the company seeks to be active in that jurisdiction and in need of aluminum).  Moreover, it is not just U.S. person counterparties that are likely to be affected by prohibitions on dealing with sanctioned parties.  In line with the FAQ noted above, if non-U.S. companies were to make payments to the sanctioned companies for deliveries, these could be deemed “significant transactions” and could make the non-U.S. companies, themselves, the target of OFAC designations and/or secondary sanctions.  One option—reportedly pursued by one major trading company—is to declare force majeure on contracts with RUSAL.
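For compliance teams that screen counterparties programmatically, the arithmetic behind OFAC’s 50 percent rule discussed above can be illustrated with a short sketch.  The example below is purely illustrative and is not drawn from OFAC guidance: the entity names, ownership percentages, and the helper function is_blocked_by_ownership are hypothetical, and a real screening program would also have to trace indirect ownership chains, account for control in the absence of majority ownership, and track changes to the SDN List over time.

# Illustrative sketch only: an entity is treated as blocked if blocked persons
# together hold an aggregate stake of 50 percent or more in it.
# All names and percentages below are hypothetical, not real designations.
from typing import Dict, Set

def is_blocked_by_ownership(ownership: Dict[str, float], blocked_persons: Set[str]) -> bool:
    # Sum the direct stakes held by blocked persons; this sketch ignores
    # indirect ownership chains, which matter in practice.
    blocked_stake = sum(stake for owner, stake in ownership.items() if owner in blocked_persons)
    return blocked_stake >= 50.0

# Hypothetical example: two designated persons each hold 30 percent of a trading company.
blocked_persons = {"Designated Person A", "Designated Person B"}
ownership = {
    "Designated Person A": 30.0,
    "Designated Person B": 30.0,
    "Unrelated Investor": 40.0,
}
print(is_blocked_by_ownership(ownership, blocked_persons))  # True: 60 percent aggregate blocked ownership

As the discussion above suggests, the hard part in practice is rarely this arithmetic; it is establishing reliable ownership data for the entities being screened.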
As noted above, relief contemplated by General Licenses 12 and 13 may be operationally difficult to implement.  The sanctions apply to companies 50 percent or more owned or controlled by blocked parties.  Companies will need to undertake, under a short timeline, significant due diligence to determine whether any such companies are involved in their operations.  The wind down process may be further complicated by any Russian response to the U.S. sanctions.

What Happens Next?

The April 6 sanctions are likely not the end of the story.  The next steps to watch include:

1.)    Potential Russian Retaliation:  During an address to the State Duma on April 11, Prime Minister Dmitry Medvedev said, for example, that Russia should consider targeting U.S. goods or goods produced in Russia by U.S. companies when considering a possible response.[19]  Any such measures could implicate further U.S. business dealings with Russian entities, including the blocked entities.

2.)    Changing Ownership and Structure of Sanctioned Parties:  Given that the sanctioned companies were listed due to their ownership/control by sanctioned persons (pursuant to the 50 percent rule), there have already been moves to dilute their ownership and thus potentially have the companies de-listed.  While possible, it is important to note that because the companies were explicitly listed by OFAC (and now appear on the SDN List), any reduction in ownership or control will not result in an automatic de-listing.  Rather, OFAC will need to process these changes and formally de-list the entities before they can be treated as non-sanctioned.  OFAC could opt not to de-list, or could decide to list the companies on other bases.  Regardless, the process will undoubtedly take some time.  We note that at least one engineering firm whose stock was held by a designated entity has already obtained a license to complete the transfer of these shares; this is helpful precedent for any company that is impacted by, but only tangentially related to, the designated entities.  Sanctioned entities have also changed their board membership in response to the U.S. sanctions.  On April 11, for example, the entire board at Renova Management AG, the Swiss subsidiary of the Renova Group, was dismissed after Renova Group’s designation.[20]

3.)    European Follow-On Restrictions:  The shock of many of Europe’s major powers following the poisoning of Sergei and Yulia Skripal in Salisbury in early March and the resulting mass expulsion of Russian diplomats from European capitals suggest that sanctions may be next.  Core European U.S. allies were likely notified in advance of the April 6 measures.  In the run-up to sanctions in 2014, Washington and Brussels worked very closely to institute parallel measures against Moscow.  While that unity has broken down under the Trump administration, especially since CAATSA was passed in August, it would appear as though some European sanctions are likely in the offing.

4.)    OFAC FAQs/Licenses and Potentially New Measures:  Due to the complexity of the April 6 measures, we expect that OFAC will issue additional FAQs and potentially revisions to General Licenses 12 and 13 (or new General Licenses) in the near term to clear up questions and further calibrate its response.  Depending upon next steps from Russia and Europe, we may see additional sanctions as well.
Secretary of State-designate Mike Pompeo’s statement that the United States’ “soft” policy toward Russia is over suggests as much.[21]

Unfortunately, there is no clear path towards a de-escalation in Washington-Moscow tensions.  When the U.S. first issued sanctions against Russia in response to the Crimea incursion in 2014, the sanctions “off-ramp” was very clearly defined: if Russia altered its behavior in Crimea/Ukraine, there was a way that sanctions could be removed.  Since 2014, as Secretary Mnuchin noted, Russia’s activities have expanded in scope and territory to include support for the Bashar al-Assad regime in Syria, election meddling, cyber-attacks, and the nerve agent attack in the United Kingdom.  The breadth and boldness of this activity make it even more unlikely that Russia will comply with the West’s wishes and thus even less likely that the sanctions would be removed or even reduced at any point in the near term.  For its part, bipartisan Congressional leadership expressed broad support for the Trump administration’s actions—however, Congress will likely demand more from the President in the near term.  Perhaps eager to placate Congress and dispel any notion that he is “soft” on Russia, and buffeted by external circumstances ranging from a potential attack in Syria to the investigation by Robert Mueller, the President may impose still harsher measures on Moscow.

[1]      Press Release, U.S. Department of the Treasury, Treasury Designates Russian Oligarchs, Officials, and Entities in Response to Worldwide Malign Activity (Apr. 6, 2018), available at https://home.treasury.gov/news/featured-stories/treasury-designates-russian-oligarchs-officials-and-entities-in-response-to.

[2]      Natasha Turak, US sanctions are finally proving a ‘major game changer’ for Russia, CNBC (Apr. 10, 2018), available at https://www.cnbc.com/2018/04/10/us-moscow-sanctions-finally-proving-a-major-game-changer-for-russia.html.

[3]      Press Release, U.S. Dep’t of the Treasury, Treasury Designates Individuals and Entities Involved in the Ongoing Conflict in Ukraine (June 20, 2017), available at https://www.treasury.gov/press-center/press-releases/Pages/sm0114.aspx.  Designated persons and entities included separatists and their supporters; entities operating in and connected to the Russian annexation of Crimea; entities owned or controlled by, or which have provided support to, persons operating in the Russian arms or materiel sector; and Russian government officials.

[4]      U.S. Department of the Treasury, supra, n. 1.

[5]      Id.

[6]      CAATSA, Title II, § 231(a).  Specifically, CAATSA Section 231(a) specified that the President shall impose five or more of the secondary sanctions described in Section 235 with respect to a person the President determines knowingly “engages in a significant transaction with a person that is part of, or operates for or on behalf of, the defense or intelligence sectors of the Government of the Russian Federation, including the Main Intelligence Agency of the General Staff of the Armed Forces of the Russian Federation or the Federal Security Service of the Russian Federation.”  The measures that could be imposed under Section 231 are discretionary in nature.  The language of the legislation is somewhat misleading in this regard.  Section 231 is written as a mandatory requirement—providing that the President “shall impose” various restrictions.
However, the legislation itself—and the October 27, 2017 guidance provided by the State Department—makes clear that secondary sanctions are only imposed after the President makes a determination that a party “knowingly” engaged in “significant” transactions with a listed party.  The terms “knowingly” and “significant” have imprecise meanings, even under the State Department guidance.  OFAC Ukraine-/Russia-related Sanctions FAQs (“OFAC FAQs”), OFAC FAQ No. 545, available at https://www.treasury.gov/resource-center/faqs/Sanctions/Pages/faq_other.aspx#567.

[7]      Press Release, U.S. Dep’t of State, Background Briefing on the Countering America’s Adversaries Through Sanctions Act (CAATSA) Section 231 (Jan. 30, 2018), available at https://www.state.gov/r/pa/prs/ps/2018/01/277775.htm.

[8]      CAATSA, Title II, § 241.

[9]      See U.S. Dep’t of the Treasury, Report to Congress Pursuant to Section 241 of the Countering America’s Adversaries Through Sanctions Act of 2017 Regarding Senior Foreign Political Figures and Oligarchs in the Russian Federation and Russian Parastatal Entities (Unclassified) (Jan. 29, 2018), available at https://www.scribd.com/document/370313106/2018-01-29-Treasury-Caatsa-241-Final.

[10]     See, e.g., Press Release, U.S. Dep’t of the Treasury, Treasury Releases CAATSA Reports, Including on Senior Foreign Political Figures and Oligarchs in the Russian Federation (Jan. 29, 2018), available at https://home.treasury.gov/news/press-releases/sm0271.

[11]     The one exception is Igor Rotenberg.  Although Igor Rotenberg did not appear on the Section 241 List, his father and uncle were included.  According to the April 6 OFAC announcement, Igor Rotenberg acquired significant assets from his father, Arkady Rotenberg, after OFAC designated the latter in March 2014.  Specifically, Arkady Rotenberg sold Igor Rotenberg 79 percent of the Russian oil and gas drilling company Gazprom Burenie.  Igor Rotenberg’s uncle, Boris Rotenberg, owns 16 percent of the company.  Like his brother Arkady Rotenberg, Boris Rotenberg was designated in March 2014.

[12]     OFAC FAQ No. 573.

[13]     CAATSA, Title II, § 228.

[14]     OFAC FAQ No. 546.  In its implementing guidance, OFAC confirmed that Section 228 extends to SDNs and SSI entities but clarified that it would not deem a transaction “significant” if U.S. persons could engage in the transaction without the need for a specific license from OFAC.  In other words, only transactions prohibited by OFAC—specifically, transactions with SDNs and/or transactions with SSI entities that are prohibited by the sectoral sanctions—will “count” as significant for purposes of Section 228.  OFAC also noted that even a transaction with an SSI that involves prohibited debt or equity would not automatically be deemed “significant”—it would need to also involve “deceptive practices,” and OFAC would assess this criterion on a “totality of the circumstances” basis.

[15]     OFAC FAQ No. 574.

[16]     General License 12; OFAC FAQ No. 569.

[17]     See also OFAC FAQ Nos. 567-568.

[18]     See also OFAC FAQ Nos. 570-571.

[19]     Russia’s Renova says board at its Swiss subsidiary dismissed due to sanctions, Reuters (Apr. 11, 2018), available at https://uk.reuters.com/article/usa-russia-sanctions-renova/russias-renova-says-board-at-its-swiss-subsidiary-dismissed-due-to-sanctions-idUKR4N1NE02P.

[20]     Russia ready to prop up Deripaska’s Rusal as US sanctions bite, Financial Times (Apr. 11, 2018), available at https://www.ft.com/content/4904f6d4-3d97-11e8-b7e0-52972418fec4.
[21]     Patricia Zengerle, Lesley Wroughton, As Pompeo signals hard Russia line, lawmakers want him to stand on his own, Reuters (Apr. 12, 2018), available at https://www.reuters.com/article/us-usa-trump-pompeo/as-pompeo-signals-hard-russia-line-lawmakers-want-him-to-stand-on-his-own-idUSKBN1HJ0HO. The following Gibson Dunn lawyers assisted in preparing this client update: Adam Smith, Judith Alison Lee, Christopher Timura, Stephanie Connor, and Courtney Brown. Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding the above developments.  Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following leaders and members of the firm’s International Trade Group: United States: Judith Alison Lee – Co-Chair, International Trade Practice, Washington, D.C. (+1 202-887-3591, jalee@gibsondunn.com) Ronald Kirk – Co-Chair, International Trade Practice, Dallas (+1 214-698-3295, rkirk@gibsondunn.com) Jose W. Fernandez – New York (+1 212-351-2376, jfernandez@gibsondunn.com) Marcellus A. McRae – Los Angeles (+1 213-229-7675, mmcrae@gibsondunn.com) Daniel P. Chung – Washington, D.C. (+1 202-887-3729, dchung@gibsondunn.com) Adam M. Smith – Washington, D.C. (+1 202-887-3547, asmith@gibsondunn.com) Christopher T. Timura – Washington, D.C. (+1 202-887-3690, ctimura@gibsondunn.com) Stephanie L. Connor – Washington, D.C. (+1 202-955-8586, sconnor@gibsondunn.com) Kamola Kobildjanova – Palo Alto (+1 650-849-5291, kkobildjanova@gibsondunn.com) Courtney M. Brown – Washington, D.C. (+1 202-955-8685, cmbrown@gibsondunn.com) Laura R. Cole – Washington, D.C. (+1 202-887-3787, lcole@gibsondunn.com) Europe: Peter Alexiadis – Brussels (+32 2 554 72 00, palexiadis@gibsondunn.com) Attila Borsos – Brussels (+32 2 554 72 10, aborsos@gibsondunn.com) Patrick Doris – London (+44 (0)207 071 4276, pdoris@gibsondunn.com) Penny Madden – London (+44 (0)20 7071 4226, pmadden@gibsondunn.com) Mark Handley – London (+44 (0)207 071 4277, mhandley@gibsondunn.com) Benno Schwarz – Munich (+49 89 189 33 110, bschwarz@gibsondunn.com) Richard Roeder – Munich (+49 89 189 33-160, rroeder@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

March 7, 2018 |
The Convergence of Law and Cybersecurity

Washington, D.C. associate Melinda Biancuzzo is the co-author of “The Convergence of Law and Cybersecurity,” [PDF] published by Nuix on March 7, 2018.

January 30, 2018 |
Law360 Names Gibson Dunn Among Its 2017 Privacy Practice Groups of the Year

Law360 named Gibson Dunn one of its five Privacy Practice Groups of the Year [PDF] for 2017. Gibson Dunn was selected for being “a go-to firm for tech giants in behind-the-scenes cybersecurity matters.” The firm’s profile was published on January 30, 2018.

January 29, 2018 |
International Cybersecurity and Data Privacy Outlook and Review – 2018

Click for PDF In honor of Data Privacy Day—an international effort to raise awareness and promote privacy and data protection best practices—we recently offered Gibson Dunn’s sixth annual Cybersecurity and Data Privacy Outlook and Review.  This year again, in addition to that U.S.-focused report, we offer this separate International Outlook and Review. Like many recent years, 2017 saw significant developments in the evolution of the data protection and cybersecurity landscape outside the United States: Following the adoption of a General Data Protection Regulation governing the collection, processing and transfer of personal data in 2016 (“GDPR”),[1] several Member States of the European Union started to adapt their national legal frameworks in light of the future entry into application of the GDPR on 25 May 2018, and the Article 29 Working Party (“WP29”) provided details regarding the implementation thereof. The first proposals for an upcoming European regulation with respect to private life and the protection of personal data in electronic communications, intended to repeal the currently applicable legal framework, were made public (“ePrivacy Regulation”). The Member States of the European Union started working on the transposition into national law of the directive on the security of network and information systems (“NIS Directive”). The framework for international data transfers between the U.S. and the European Union—the Privacy Shield—was subjected to various legal challenges. We cover these topics and many more in this year’s International Cybersecurity and Data Privacy Outlook and Review. Table of Contents __________________________________________ I.     European Union A.   Privacy Shield 1.    Reviews of the European Commission and the WP29 2.    Challenges to Privacy Shield B.   EU Data Protection Regulation and Reform 1.    GDPR 2.    Principal Elements of the GDPR 3.    National Data Protection Reforms Implementing the GDPR C.   EU Cyber Security Directive 1.    Digital Service Providers 2.    Member State Obligations 3.    Minimum Harmonization and Coordination Among EU Member States D.   Other EU Developments 1.    Reform of the ePrivacy Directive – the Draft EU ePrivacy Regulation 2.    CJEU Case Law 3.    Article 29 Working Party (WP29) Opinions II.   Asia-Pacific and Other Notable International Developments __________________________________________ I.     European Union A.     Privacy Shield On 12 July 2016, the European Commission formally approved the EU-U.S. Privacy Shield (“Privacy Shield”), a framework for navigating the transatlantic transfer of data from the EU to the United States.  The Privacy Shield replaces the EU-U.S. Safe Harbor framework, which was invalidated by the European Court of Justice (“ECJ”) on 6 October 2015 in Maximilian Schrems v. Data Protection Commissioner (the “Schrems” decision).[2]  We provided an in-depth discussion of the Schrems decision in a previous Outlook and Review.[3] 1.     Reviews of the European Commission and the WP29 Following the adoption of the Privacy Shield, the WP29—an advisory body that includes representatives from the data protection authorities of each EU Member State—stated that “the national representatives of the WP29 will not only assess if the remaining issues have been solved but also if the safeguards provided under the EU-U.S. 
Privacy Shield are workable and effective” during a joint annual review of the Privacy Shield mechanism.[4] The first review was conducted in mid-September 2017 by the European Commission and U.S. authorities.  The European Commission published its report on 18 October 2017.[5]  It concluded that the Privacy Shield continues to ensure an adequate level of protection, noting that various important structures and procedures have been put in place by U.S. authorities—namely, new redress possibilities for EU nationals, a complaint-handling and enforcement procedure, an increased level of cooperation with EU data protection authorities, and necessary safeguards for government access to personal data.  Overall, the European Commission determined that the framework, including the self-certification process, is functioning well, and the European Commission continues to support the Privacy Shield.  The European Commission did, however, make several recommendations to further improve the Privacy Shield’s functioning: More proactive and regular monitoring of companies’ compliance with their obligations under the Privacy Shield by the U.S. Department of Commerce, including the use of review questionnaires or annual compliance reports. Increased searches for and enforcement against companies that falsely claim to participate in the Privacy Shield by U.S. authorities. Raising awareness of how EU individuals can exercise their rights under the Privacy Shield, particularly how they can submit complaints. Closer cooperation between EU and U.S. authorities to achieve a consistent interpretation and to develop guidance for companies and enforcers. The appointment of a permanent Privacy Shield Ombudsman and the appointment of additional members to the Privacy and Civil Liberties Oversight Board (“PCLOB”). A codification of Presidential Policy Directive 28 (“PPD-28”), as part of the reauthorization and reform of Section 702 of the Foreign Intelligence Surveillance Act (“FISA”). It should be noted on this last point that on 19 January 2018 the United States renewed FISA Section 702 without enshrining the protections set forth in the PPD-28.[6]  It remains to be seen how this, and the success of efforts to follow up on the other recommendations, will affect the next annual review of the Privacy Shield in fall 2018. On 28 November 2017, the WP29 released its own opinion on the first annual joint review of the Privacy Shield mechanism.[7]  The WP29’s findings are quite different from the Commission’s, as the WP29 identified “significant concerns” with the Privacy Shield’s mechanisms as currently operated.  While the WP29 recognized the Privacy Shield as an improvement compared to the invalidated Safe Harbor mechanism, and welcomed the increased transparency of the U.S. government and legislator regarding the use of their surveillance powers, the WP29 set forth several recommendations, namely: U.S. authorities should provide more guidance on the principles of the Privacy Shield, particularly regarding transfers, available rights, and recourses and remedies, to make it easier for companies to interpret their obligations and individuals to exercise their rights. More oversight by U.S. authorities concerning compliance with Privacy Shield principles—for instance, compliance with limits on monitoring—and more proactive supervision of the participating organizations. 
Distinguishing the status of processors and controllers established in the U.S., as the opinion notes there is currently no differentiation made during the application process between the two. Increasing the level of protection concerning profiling data or automated decision-making by creating specific rules to provide sufficient safeguards. Avoiding exceptions for the processing of Human Resources (“HR”) data, as according to the WP29 the U.S. Department of Commerce considers HR data too narrowly, allowing for the transfer of some HR data as commercial data. Shoring up safeguards against the access of data by U.S. public authorities. Addressing the lack of a permanent and independent Ombudsman and the several vacancies on the PCLOB. The WP29 warned that should their concerns fail to be addressed, the group would then take appropriate actions, including challenging the Privacy Shield before national courts.  The WP29 therefore called on the European Commission and U.S. authorities to resume discussions, and to set up an action plan to demonstrate that these concerns will be addressed. 2.     Challenges to Privacy Shield Advocacy groups have already filed challenges to the Privacy Shield.  Specifically, in October 2016 Digital Rights Ireland (“DRI”) filed a challenge with a Luxembourg-based General Court, a lower court of the ECJ, to annul the European Commission’s 12 July 2016 Adequacy Decision, which approved and adopted the Privacy Shield.[8]  However, this action was dismissed by the General Court of the European Union on 22 November 2017.[9]  The European judges held that DRI neither had an interest in bringing proceedings in its own name nor had standing to act in the name of its members and supporters or on behalf of the general public. This is not the only challenge to the Privacy Shield, however:  In 2016, a French privacy advocacy group also challenged the Adequacy Decision in a legal action to the ECJ, claiming that the U.S. Ombudsman redress mechanism is not sufficiently independent and effective and therefore the Adequacy Decision must be annulled.[10]  This case remains ongoing.[11] B.     EU Data Protection Regulation and Reform 1.     GDPR On 15 December 2015, the European Commission, the European Parliament, and the European Council agreed to an EU data protection reform to boost the EU Digital Single Market.  The bill was adopted by the European Council and the European Parliament in early April 2016 and came into force on 24 May 2016 as the GDPR.  However, the GDPR provides for a two-year “grace period,” such that it will not become fully applicable until 25 May 2018.  The GDPR replaces the EU Data Protection Directive[12] and constitutes a set of data protection rules that are directly applicable to the processing of personal data across EU Member States (for additional details regarding the main requirements of the GDPR, please refer to Section 2 below). 2.     
Principal Elements of the GDPR The core substantive elements of the GDPR, which will become fully applicable in May 2018, include the following: Extraterritorial Scope:  The GDPR will cover not only data controllers established in the EU, but will also apply to organizations that offer goods or services to residents in the EU, even if these organizations are not established in the EU and do not process data using servers in the EU.[13] Transparency Principle:  Under the GDPR, transparency is a general requirement applicable to three central areas: (i) the provision of information to data subjects; (ii) the way data controllers communicate with data subjects in relation to their rights under the GDPR; and (iii) how data controllers allow and facilitate the exercise of their rights by data subjects.  In late 2017, the WP29 made draft Guidelines on transparency public.[14]  Even though the final version of this document is not available yet, the purpose of such Guidelines is to provide practical guidance and interpretative assistance on the new transparency obligations as resulting from the GDPR. Consent of the Data Subjects:  The GDPR put emphasis on the notion of consent of data subjects by providing further clarification and specification of the requirements for obtaining and demonstrating valid consent.  In November 2017, the WP29 adopted Guidelines specifically dedicated to the concept of consent and focusing on the changes in this respect resulting from the GDPR.[15] “Right to Be Forgotten”:  The GDPR further develops the “right to be forgotten” (formally called the “right to erasure”) whereby personal data must be deleted when an individual no longer wants his or her data to be processed by a company and there are no legitimate reasons for retaining the data.[16]  This right was already introduced in the EU Data Protection Directive, and was the object of the litigation before the CJEU in Google Spain SL and Google Inc. v. AEPD and Mario Costeja González.[17] Among other points, the GDPR clarifies that this right is not absolute and will always be subject to the legitimate interests of the public, including the freedom of expression and historical and scientific research.  The GDPR also obliges controllers who have received a request for erasure to inform other controllers of such request in order to achieve the erasure of any links to or copy of the personal data involved.  This part of the GDPR may impose significant burdens on affected companies, as the creation of selective data destruction procedures often leads to significant costs. Data Breach Notification Obligation:  The GDPR requires data controllers to provide notice of serious security breaches to the competent Data Protection Authority/ies (“DPA(s)”) without undue delay and, in any event, within 72 hours after having become aware of any such breach.  The WP29 has issued Guidelines in order to explain the mandatory breach notification and communication requirements of the GDPR as well as some of the steps data controllers and data processors can take to meet these new obligations.[18] Profiling Activities:  The GDPR specifically addresses the use of profiling and other automated individual decision-making. In 2017, the WP29 made Guidelines public in this respect.[19]  These clarify the provisions of the GDPR regarding profiling, in particular by defining in more detail what profiling is. 
Data Protection Impact Assessment (“DPIA”):  Where processing activities are deemed likely to result in high risk to the rights and freedoms of data subjects, the GDPR requires that data controllers carry out, prior to the contemplated processing, an assessment of the impact thereof on the protection of personal data.[20]  However, the GDPR does not specifically detail the criteria to be taken into account for determining whether given processing activities represent “high risk.”  Instead, the GDPR provides a non-exhaustive list of examples falling within this scope.  Similarly, no process for performing DPIAs is detailed as part of the GDPR.  Considering the need for additional information in this respect, the WP29 issued Guidelines in 2017 intended to clarify which processing operations must be subject to DPIAs and how they should be carried out.[21]  These Guidelines were subsequently revised throughout the year.[22] Privacy-Friendly Techniques and Practices:  “Privacy by design” is the idea that a product or service should be conceived from the outset to ensure a certain level of privacy for an individual’s data.  “Privacy by default” is the idea that a product or service’s default settings should help ensure privacy of individual data.  The GDPR establishes privacy by design and privacy by default as essential principles.  Accordingly, businesses should only process personal data to the extent necessary for their intended purposes and should not store it for longer than is necessary for those purposes.  These principles will require data controllers to design data protection safeguards into their products and services from the inception of the product development process. Data Portability:  The GDPR establishes a right to data portability, which is intended to make it easier for individuals to transfer personal data from one service provider to another.According to the WP29, as a matter of good practice, companies should develop the means that will contribute to answering data portability requests, such as download tools and Application Programming Interfaces.  Companies should guarantee that personal data is transmitted in a structured, commonly used and machine-readable format, and they should be encouraged to ensure the interoperability of the data format provided in the exercise of a data portability request.  The WP29 has also called industry stakeholders and trade associations to work together on a common set of interoperable standards and formats to deliver the requirements of the right to data portability.[23]  In 2017, the WP29 issued revised Guidelines on the right to data portability providing guidance on the way to interpret and implement the right to data portability introduced by the GDPR.[24] Competent Supervisory Authority:  To date, in the EU the monitoring of the application of data protection rules has fallen almost exclusively under the jurisdiction of national DPAs.  Subject to the EU Data Protection Directive and the case law of the CJEU, DPAs only had jurisdiction to find a violation of their data protection laws and impose fines where, inter alia, their respective national laws were applicable.[25]With the adoption of the GDPR, a complex set of rules has been established to govern the applicability of the rules to data controllers that have cross-border processing practices.  
Competent Supervisory Authority:  To date, the monitoring of the application of data protection rules in the EU has fallen almost exclusively under the jurisdiction of national DPAs.  Under the EU Data Protection Directive and the case law of the CJEU, DPAs only had jurisdiction to find a violation of their data protection laws and impose fines where, inter alia, their respective national laws were applicable.[25]  With the adoption of the GDPR, a complex set of rules has been established to govern the applicability of the rules to data controllers that have cross-border processing practices.  First, where a case relates only to an establishment of a data controller or processor in a Member State or substantially affects residents only in a Member State, the DPA of that Member State will have jurisdiction to deal with the case.[26]  Second, in other cases concerning cross-border data processing, the DPA of the main establishment of the controller or processor within the EU will have jurisdiction to act as lead DPA for the cross-border processing of that controller or processor.[27]  Articles 61 and 62 provide for mutual assistance and joint operations mechanisms, respectively, to ensure compliance with the GDPR.  Furthermore, the lead DPA will need to follow the cooperation mechanism provided in Article 60 with the other DPAs “concerned.”  Ultimately, the European Data Protection Board (“EDPB,” in which all EU DPAs and the European Commission are represented) will have decision-making powers in case of disagreement among DPAs as to the outcome of specific investigations.[28]  Third, the GDPR establishes an urgency procedure that any DPA can use to adopt provisional measures, with a limited period of validity, regarding data processing in case of urgency.  These measures will only be applicable in the DPA’s own territory, pending a final decision by the EDPB.[29]  In 2016, the WP29 issued Guidelines that aim to assist controllers and processors in identifying their lead DPA.[30]  These Guidelines were updated in 2017, in particular to address circumstances involving joint data controllers.[31]
Governance:  Data controllers and processors may be required to designate a Data Protection Officer (“DPO”) in certain circumstances.  Small and medium-sized enterprises will be exempt from the obligation to appoint a DPO insofar as data processing is not their core business activity.  The WP29 has issued Guidelines that clarify the conditions for the designation, position and tasks of the DPO to ensure compliance with the GDPR; these Guidelines were revised in 2017.[32]
These requirements will be supplemented by a considerably stricter regime of fines for violations.  DPAs will be able to fine companies that do not comply with EU rules up to 4% of their global annual turnover.
3.     National Data Protection Reforms Implementing the GDPR
Because the GDPR is a regulation, there is no need for Member States of the European Union to transpose its provisions in order to render them applicable within their national legal systems.  However, some Member States have nonetheless adapted their legal frameworks regarding data protection in light of the GDPR.  The GDPR contains provisions granting flexibility to the Member States to implement such adaptations.  For example, Article 8 of the GDPR provides specific rules regarding the processing of personal data of children below the age of 16.  Nevertheless, Member States may provide by law for a lower age, provided it is not below 13 years.  Another example is found in Article 58 of the GDPR, as Member States may provide by law that their supervisory authorities have additional powers beyond those already specified under the GDPR.
Below is an overview of the status of the national data protection reforms implemented throughout the European Union during 2017, by Member State:
Austria:  The Datenschutz-Anpassungsgesetz 2018 was published in July 2017.  This act is expected to support the application of the GDPR and will enter into effect by 25 May 2018.  The Datenschutzgesetz 2000 will be replaced accordingly.
Belgium:  Belgium is currently adapting its national data protection legal framework by: (i) reforming the Belgian Privacy Commission (the draft bill in this respect was adopted by the Parliament on 16 November 2017 and was submitted for the King’s approval); and (ii) preparing a framework law addressing the national considerations resulting from the GDPR (although no draft has been disclosed yet).
Bulgaria:  In 2017, Bulgaria did not enact or propose a bill concerning GDPR-related privacy issues.
Croatia:  In 2017, Croatia did not enact or propose a bill concerning GDPR-related privacy issues.
Cyprus:  In 2017, Cyprus did not enact or propose a bill concerning GDPR-related privacy issues.
Czech Republic:  A draft Data Protection Act, intended to adapt the current national legal framework to the GDPR, was discussed by the government.  The upcoming Data Protection Act is expected to replace the current act on data protection.
Denmark:  On 25 October 2017, a proposal for a new Data Protection Act implementing the GDPR was made public.  This proposal was discussed by the Danish Parliament in late 2017 and is expected to pass in the first months of 2018.
Estonia:  The Ministry of Justice made public a first draft of the legislation intended to implement the GDPR.  However, the draft was not submitted to Parliament for review in 2017.
Finland:  A working group set up by the Ministry of Justice issued a report in June 2017 proposing to replace the current Finnish Data Protection Act with a new act intended to supplement the GDPR when the GDPR enters into application.
France:  A draft data protection law intended to modify the current French Data Protection Act was made public in December 2017.  This initial draft is likely to go through subsequent modifications before the final law is eventually passed.
Germany:  In June 2017, Germany adapted its Data Protection Act to the GDPR.  The previous version of the German Data Protection Act will remain in force until 25 May 2018.
Greece:  In 2017, Greece did not enact or propose a bill concerning GDPR-related privacy issues.
Hungary:  In 2017, Hungary launched a public consultation on a proposal to amend the current Hungarian Data Protection Act.  This proposal is expected to become final in early 2018.
Ireland:  In May 2017, Ireland issued a General Scheme of Data Protection Bill, providing a general scheme for the act intended to give effect to and complement the GDPR.
Italy:  On 6 November 2017, the Italian Parliament passed a law (Law No. 163) adopting specific provisions with respect to the GDPR.  The currently applicable Italian Data Protection Code is to be modified within six months of the passage of Law No. 163.
Latvia:  Latvia made public a draft Personal Data Processing Law in October 2017.
Lithuania:  The law applicable in Lithuania (i.e., the Lithuanian Law on Legal Protection of Personal Data) is currently being amended to integrate the requirements of the GDPR.
Luxembourg:  The government of Luxembourg proposed a bill specifically addressing data protection in order to adapt the local law to the requirements of the GDPR.
Malta:  In 2017, Malta did not enact or propose a bill concerning GDPR-related privacy issues.
Netherlands:  The data protection law currently applicable in the Netherlands results from the Dutch Personal Data Protection Act (Wet bescherming persoonsgegevens).  This Act will no longer be applicable after the GDPR enters into effect in May 2018.
Poland:  In September 2017, Poland published a draft Personal Data Protection Act, intended to provide a legal framework for the GDPR.  This draft was made subject to public consultations and is expected to be enacted in 2018, prior to the entry into application of the GDPR.
Portugal:  In 2017, Portugal did not enact or propose a bill concerning GDPR-related privacy issues.
Romania:  Draft legislation for implementing the GDPR was disclosed and submitted for public debate in 2017.
Slovakia:  On 29 November 2017, the Slovakian Data Protection Act was adopted by the Slovak Parliament, with entry into force on the same date as the GDPR.
Slovenia:  The currently applicable Slovenian Data Protection Act is expected to be repealed by a new data protection act (“ZVOP-2”) intended to ensure the proper implementation of data protection requirements following the entry into application of the GDPR.  ZVOP-2 was subject to the legislative process in 2017 and is likely to be adopted in early 2018.
Spain:  A bill regarding data protection intended to amend the current legal framework was published and made subject to debate, with an eye toward eventual approval by the Spanish Parliament.
Sweden:  A report of the Swedish government proposing provisions intended to complement the GDPR was issued in May 2017, but no government bill was passed in this respect during 2017.
United Kingdom:  On 14 September 2017, the Data Protection Bill was published with the aim of modernizing data protection law.  Even though the Data Protection Bill has a wider scope than the mere adaptation of national law to the GDPR, one of its core features is to detail how the UK uses the flexibility granted by the GDPR to Member States with respect to specific data protection issues.
C.     EU Cyber Security Directive
On 6 July 2016, the European Parliament officially adopted the Network and Information Security (“NIS”) Directive,[33] which is expected to be fully applicable (via national regulations) as of May 2018.  The NIS Directive is the first set of cybersecurity rules to be adopted at the EU level, adding to an already complex array of laws with which companies must comply when implementing security and breach response plans.  It aims to set a minimum level of cybersecurity standards and to streamline cooperation between EU Member States at a time of a growing number of cybersecurity breaches.
In February 2017, the European Agency for Network and Information Security (“ENISA”) issued guidelines on incident notification for digital service providers in the context of the NIS Directive, in order to provide practical information on the cases covered by the NIS Directive and the actions to be taken in such cases.[34]
More details as to how the NIS Directive will be implemented at the local level were also disclosed in 2017 as Member States started to adopt national legislation to transpose the NIS Directive.  For example, in France, a national bill transposing the NIS Directive was adopted by the French Senate on 19 December 2017.  This bill provides for fines of up to EUR 100,000 if officers of essential services providers do not comply with the security requirements specified by the French Prime Minister, and fines of up to EUR 75,000 if such officers fail to comply with the obligation to notify data breaches.  For legal persons, the fines for non-compliance with the security requirements specified by the French Prime Minister can reach EUR 500,000, and EUR 375,000 where data breaches are not duly notified.
The final text of the NIS Directive sets out separate cybersecurity obligations for essential service providers and digital service providers:
Essential service providers include actors in the energy, transport, banking and financial markets sectors, as well as the health, water and digital infrastructure[35] sectors.
Digital service providers will include online marketplaces, search engines and cloud services (with an exemption for companies with fewer than 50 employees), but not social networks, app stores or payment service providers.
In terms of geographic scope, the NIS Directive aims to address potential incidents taking place “within the [European] Union”[36] and will apply to all entities providing the above services[37] within the EU territory or to EU residents, regardless of their physical location.  In particular, all digital service providers that are not established in the EU, but offer services covered by the NIS Directive within the EU, are required to designate an EU-based representative.[38]  Companies covered by the NIS Directive will have to ensure that their digital infrastructure is robust enough to withstand cyber-attacks and may need to report major security incidents to the national authorities.  Businesses will also be required to apply procedures demonstrating the effective use of security policies and measures.
1.     Digital Service Providers
Digital service providers will be obliged to report all incidents that have a “substantial impact” on their services (in terms of the duration, geographic spread and number of users affected by the incident).[39]  It will be up to regulators to decide whether to inform the public about these incidents after consulting the company involved.  As a practical matter, the NIS Directive states that jurisdiction over a digital service provider should be attributed to the Member State in which it has its main EU establishment, which in principle corresponds to the place where the provider has its head office in the EU.[40]  Digital service providers not established in the EU will be deemed to be under the primary jurisdiction of the Member State where their EU representative has been appointed.[41]
Notably, where an incident involves personal data, there may be an additional requirement to report to DPAs under the GDPR, which will come into effect on 25 May 2018.  As indicated above, the GDPR will also have a reporting provision for data breaches, although that notification obligation focuses on the protection of personal information, in contrast to the NIS Directive’s reporting requirement, which is aimed at improving computer and information technology systems overall.  Thus, it is possible that a single cybersecurity breach will need to be notified to more than one authority in each EU Member State affected.
2.     Member State Obligations
The NIS Directive itself is not directly applicable.  It will first have to be transposed and implemented into national law by the Member States by May 2018.
Member States will need to, for example, designate the competent national authorities, identify operators of essential services, indicate which types of incidents they must report and establish sanctions for failure to notify.[42]  National procedural rules (for both administrative and court proceedings) will govern the application of the NIS Directive and the relevant national laws to affected entities.[43]
In addition, each Member State is to adopt a national strategy to maintain the security of network and information systems and will designate one or more national competent authorities to monitor the application of the NIS Directive.  Member States are also to designate one or more Computer Security Incident Response Teams (“CSIRTs”) responsible for monitoring and responding to incidents and providing early warnings about risks.
3.     Minimum Harmonization and Coordination Among EU Member States
The clear aim of the NIS Directive is to harmonize the EU Member State rules applicable to the security levels of network and information systems across the EU.  However, given the strategic character of certain services covered by the NIS Directive, it gives some powers and a margin of discretion to Member States.  For example, the NIS Directive requires each EU Member State to adopt a national strategy on the security of network and information systems, defining the objectives, policies and measures envisaged with a view to achieving the aims of the NIS Directive.[44]  Thus, despite the ability of Member States to seek the assistance of the ENISA, the development of a strategy will remain a national competence.  Furthermore, as far as operators of essential services are concerned, EU Member States will identify the relevant operators subject to the NIS Directive and may impose stricter requirements than those laid down in the NIS Directive (in particular with regard to matters affecting national security).[45]  In contrast, Member States should not identify digital service providers (as the NIS Directive applies to all digital service providers within its scope) and, in principle, may not impose any further obligations on such entities.[46]  The European Commission retains powers to adopt implementing rules regarding the application of the security and notification requirements applicable to digital service providers.[47]  It is expected that these rules will be developed in cooperation with the ENISA and stakeholders, and will enable uniform treatment of digital service providers across the EU.  In addition, the competent authorities will be able to exercise supervisory activities only when provided with evidence that a digital service provider is not complying with its obligations under the NIS Directive.
Another tool for coordination among authorities will be the envisaged “Cooperation Group,” similar to the WP29 currently operating under the 1995 EU Data Protection Directive.  The Cooperation Group will bring together the regulators of all EU Member States, who have different legal cultures and different approaches to IT and security matters (e.g., those affecting national security).  It is therefore expected that the European Commission will play an active role in building trust and consensus among the Cooperation Group’s members with a view to providing meaningful and clear guidance to businesses.
D.     Other EU Developments
1.     Reform of the ePrivacy Directive – the Draft EU ePrivacy Regulation
2016 saw the initiation of procedures for the reform of the EU’s main set of rules on ePrivacy, the ePrivacy Directive.  In this context, further to a public consultation held by the European Commission, a draft of the future EU ePrivacy Regulation (the “draft ePrivacy Regulation”) was leaked in December 2016.[48]  That draft was followed by the release of the European Commission’s final proposal on 10 January 2017,[49] which, despite some changes, is largely similar to the leaked version.  Later in 2017, the European Commission’s proposal was followed by an Opinion of the WP29 released on 4 April 2017.[50]  The European Parliament also proposed an amended version thereof on 20 October 2017,[51] and discussions to adopt a final proposal are still ongoing at the Council of the European Union, even though a first redraft has already been published.[52]
a.     The European Commission’s ePrivacy Regulation proposal
The Commission’s ePrivacy Regulation proposal released in January 2017 seeks to align the reform of the ePrivacy regime with the feedback received from stakeholders and the WP29.  In summary, the draft ePrivacy Regulation prepared by the European Commission constitutes a more comprehensive piece of legislation that aims to address and close certain open issues identified in the application of the ePrivacy Directive:
Regulation versus Directive:  The draft instrument intended to replace the ePrivacy Directive is a Regulation.  Under EU law, a Directive is an instrument that only binds EU Member States as to its content and objectives; it cannot be directly applied against individuals and needs to be transposed into national laws and regulations for its terms to be fully effective.  The ePrivacy Directive has been incorporated into numerous different acts and regulations at the national level, which are subject to uneven enforcement by the respective national authorities.  The European Commission’s proposal to replace the ePrivacy Directive with a Regulation means that its terms will in principle apply directly across all EU Member States.  This decision is consistent with the approach adopted with regard to the GDPR.  Although Member States will still be given some freedom to deviate from the ePrivacy Regulation (particularly in the area of national security), the choice to adopt a Regulation will promote its homogeneous application across all EU Member States.
Alignment with the GDPR:  A number of provisions in the draft ePrivacy Regulation demonstrate alignment with the GDPR.  For example, as with the GDPR, the draft ePrivacy Regulation has a broad territorial scope and applies to the provision of electronic communication services (e.g., voice telephony, SMS services) from outside the EU to residents in the EU.  As indicated below, the draft ePrivacy Regulation also aims to close the gap with the GDPR from an enforcement perspective, by empowering DPAs to monitor the application of the privacy-related provisions of the draft ePrivacy Regulation under the conditions established in the GDPR.  The regime for sanctions is also aligned with the GDPR, providing for fines of up to EUR 20 million or 4% of an organization’s worldwide annual turnover for certain infringements (e.g., breaches of the secrecy requirements, the cookies requirements and the rules on the use of metadata).
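By way of illustration only, the sketch below computes a GDPR-style fine ceiling, i.e., a fixed amount or a percentage of worldwide annual turnover; the assumption that the higher of the two figures applies mirrors the approach of Article 83 of the GDPR and is used here purely as a working hypothesis, since the final text of the ePrivacy Regulation may differ.

def maximum_fine(worldwide_annual_turnover_eur: float,
                 fixed_cap_eur: float = 20_000_000,
                 turnover_share: float = 0.04) -> float:
    """Illustrative GDPR-style ceiling: the greater of a fixed cap
    (EUR 20 million) and a share (4%) of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover:
print(maximum_fine(2_000_000_000))  # 80000000.0, i.e., up to EUR 80 million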
From a substantive perspective, the definition of a number of legal concepts used in both the GDPR and the draft ePrivacy Regulation has also been aligned (e.g., the conditions for “consent,” the “appropriate technical and organization measures to ensure a level of security appropriate to the risks”).
Inclusion of OTT Service Providers:  In response to the feedback of stakeholders, the draft ePrivacy Regulation indicates that the new Regulation will apply to providers of services that run over the Internet (referred to as “over-the-top” or “OTT” service providers), such as instant messaging services, video call service providers and other interpersonal communications services.[53]  This expansion in scope is achieved by the draft’s broad definition of “electronic communications services,” and is consistent with the regulatory overhaul currently ongoing in the field of electronic communications.[54]
Cookies and Other Connection Data:  Like the ePrivacy Directive, the draft ePrivacy Regulation contains a provision that addresses the circumstances under which the storage and collection of data on users’ devices is lawful.  These practices can continue to be based on the prior consent obtained from users.  Absent users’ consent, according to the draft ePrivacy Regulation, it will still be possible to carry out these practices provided that:[55] (a) they serve the purpose of carrying out (not facilitating) the transmission of a communication over an electronic communications network; or (b) they are necessary (albeit not strictly necessary) for providing (i) a service requested by the end user or (ii) first-party web audience measuring.  The recitals of the draft ePrivacy Regulation suggest that the circumstances in which consent is not required can be interpreted more broadly than under the current ePrivacy Directive.[56]  For example, first-party analytics cookies, cookies used to give effect to users’ website preferences and cookies required to fill out online forms could be understood to be exempt from the consent requirement.[57]  The draft ePrivacy Regulation also contains a new set of seemingly more stringent rules applicable to the “collection of information emitted by terminal equipment to enable it to connect to another device and, or to network equipment.”  Under the current draft, this collection may only occur “if it is done exclusively in order to, for the time necessary for, and for the purpose of establishing a connection,” and is subject to significant information and consent requirements.[58]
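A greatly simplified sketch of the consent logic summarized above follows, for illustration only and not as a statement of the law; the function name, parameters and the reduction of the draft’s conditions to boolean flags are assumptions made for this example.

def storage_or_collection_permitted(has_prior_consent: bool,
                                    carries_out_transmission: bool,
                                    necessary_for_requested_service: bool,
                                    first_party_audience_measuring: bool) -> bool:
    """Simplified reading of the draft ePrivacy Regulation as summarized
    above: storage of or access to data on a user's device is permitted
    with prior consent, or where it carries out the transmission of a
    communication, or where it is necessary for a service requested by
    the end user or for first-party web audience measuring."""
    return (has_prior_consent
            or carries_out_transmission
            or necessary_for_requested_service
            or first_party_audience_measuring)

# Hypothetical example: a first-party analytics cookie set without consent.
print(storage_or_collection_permitted(False, False, False, True))  # True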
Marketing Communications:  The draft ePrivacy Regulation requires all end users (including corporate and individual subscribers) to consent to direct marketing communications undertaken via electronic communications services.  While telephone marketing continues to be permitted on an opt-out basis, the draft ePrivacy Regulation requires entities placing marketing calls to use a specific code or prefix identifying the call as a marketing call.[59]
Supervisory Authorities and EDPB:  One of the novelties introduced by the draft ePrivacy Regulation is a section devoted to the appointment and powers of national supervisory authorities.[60]  The relevant provisions clarify that the DPAs responsible for monitoring the application of the GDPR shall also be responsible for monitoring the application of the provisions of the draft ePrivacy Regulation related to privacy in electronic communications, and that the rules on competence, cooperation and powers of action of DPAs foreseen in the GDPR also apply to the draft ePrivacy Regulation.  Finally, the EDPB is empowered to ensure the consistent application of the relevant provisions of the draft ePrivacy Regulation.
Implementation:  The draft provides for the ePrivacy Regulation to enter into force on 25 May 2018, at the same time as the GDPR.  However, it is highly unlikely to come into force on that date, or even any time later in 2018.
b.     The WP29 Opinion on the European Commission Proposal
Following the release of the European Commission’s proposal, the WP29 released its opinion on the proposed regulation in April 2017.[61]  The WP29 stated that it “welcomes the proposal” and “the choice for a regulation as the regulatory instrument.”  More broadly, it supported the approach of the regulation and its broad scope, along with its principle of “broad prohibitions and narrow exceptions.”  However, it highlighted four points of “grave concern” that would “lower the level of protection enjoyed under the GDPR” if adopted, and made recommendations in this respect concerning:
The rules concerning the tracking of the location of terminal equipment, for instance WiFi tracking, which are inconsistent with the rules of the GDPR.  The WP29 advised the European Commission to “promote a technical standard for mobile devices to automatically signal an objection against such tracking.”
The conditions under which content and metadata can be analyzed, which should be limited:  consent of all end users (senders and recipients) should be the principle, with limited exceptions for “purely personal purposes.”
Barriers used by some websites to completely block access to the service unless visitors agree to third-party tracking, known as “tracking walls,” which should be explicitly prohibited so as to give individuals the choice to refuse such tracking while still being able to access the website.
The requirement that terminal equipment and software offer “privacy protective settings” by default, in addition to allowing the user to adjust these settings.  Notably, this requirement appeared in the Commission’s leaked draft but not in its final proposal.
The WP29 expects that its concerns will be addressed during the ongoing legislative process.
c.     The European Parliament’s amended proposal
In October 2017, the European Parliament proposed an amended version of the European Commission’s proposal.[62]  It draws on some of the propositions made by the WP29.  For example, the Parliament’s version is more stringent regarding the use of personal data and the protection of users’ privacy.  Some of the notable changes include:
A prohibition on blocking access to a service solely because the user has refused the processing of personal data that is not necessary for the functioning of the service.
A requirement for providers of electronic communications services to ensure the confidentiality of data, for instance through end-to-end encryption, and a prohibition of backdoors.
A requirement for browsers to block third-party cookies by default until the user has adjusted his or her cookie settings.
A prohibition of “cookie walls” and cookie banners that prevent the use of the service unless users agree to all cookies.
In addition to the Parliament’s version, the Council of the European Union has also published a working proposal.[63]  However, it is merely a draft prepared by the presidency of the Council, which has yet to adopt a final proposal.  Bulgaria, which holds the presidency of the Council of the European Union during the first half of 2018, has indicated that it intends to focus on moving negotiations on the ePrivacy Regulation forward.[64]  Tripartite negotiations will then need to begin in order to agree upon a common text to be adopted.  In any case, the ePrivacy Regulation most likely will not be adopted by May 2018 as initially planned.
2.     CJEU Case Law
2017 also witnessed important cases before the Court of Justice of the European Union (“CJEU”).
a.     The Determination of the Data Controller and Applicable Law
Under the EU Data Protection Directive, the applicability of the data protection laws of a Member State depends primarily on the existence of a relevant “establishment” in that Member State.  In recent years, the concept of “establishment” has given rise to considerable debate.  (See, for example, the 2016 ruling in the Verein für Konsumenteninformation v. Amazon EU Sàrl case,[65] repeating the CJEU’s findings in the Weltimmo judgment of 1 October 2015,[66] in which the Court broadly defined the concept of “establishment” contained in Article 4(1)(a) of the EU Data Protection Directive.)  While the CJEU has indicated that the absence of “a branch or subsidiary in a Member State does not preclude [the controller] from having an establishment there within the meaning of Article 4(1)(a)” (e.g., through the existence of other stable arrangements, like an office), such an establishment cannot be presumed to exist “merely […] because the undertaking’s website is accessible there.”
Additional guidance on the interpretation of the notion of “establishment” emerged in the course of 2017.  Indeed, on 24 October 2017, Advocate General Bot made public his opinion regarding the determination of the applicable law in a case where data processing activities were performed through a social media page.[67]  A German company set up a fan page through a U.S.-based social network, which provided statistics based on the personal data of visitors (such as their preferences and habits) to the company administering the fan page.  The data protection authority of Schleswig-Holstein required the German company to shut down its fan page because neither the social media site nor the company itself had allegedly informed visitors that their personal data was being used for this particular purpose.  The German Federal Administrative Court sought a preliminary ruling from the CJEU, requesting clarification.  In his opinion, Advocate General Bot first determined that the company administering the fan page was a joint controller with the social media company regarding the collection of personal data.
Second, Advocate General Bot held that data processing is carried out in the context of the activities of an establishment of the controller on the territory of a Member State when an undertaking operating a social network sets up in that Member State a subsidiary that is intended to promote and sell advertising space offered by that undertaking and that directs its activities toward residents of that Member State.[68]  It is worth noting, however, that the opinion of Advocate General Bot in this respect is controversial.  A ruling from the CJEU, which could either follow the opinion or depart from it, is expected in 2018.
b.     Claims Assignment
On 14 November 2017, Advocate General Bobek delivered his opinion in the Maximilian Schrems v. Facebook Ireland Limited case pending before the CJEU.[69]  Mr. Schrems had brought legal proceedings against Facebook Ireland Limited before a court in Austria, which raised the question of whether jurisdiction was established in the domicile of a consumer claimant who had been assigned claims by other consumers, thus opening up the possibility of collecting consumer claims from around the world.  Advocate General Bobek held that a consumer cannot invoke, at the same time as his own claims, claims on the same subject assigned by other consumers domiciled in other places in the same Member State, in other Member States, or in non-member States.
c.     Outlook
On 3 October 2017, the Irish High Court referred the issue of the validity of the standard contractual clauses decisions to the CJEU for a preliminary ruling.[70]  If the CJEU were to decide to invalidate the standard contractual clauses, this ruling would in all likelihood have a tremendous impact on businesses around the world, many of which rely on these clauses to ensure an adequate level of data protection for data transfers outside the European Union.
3.     Article 29 Working Party (WP29) Opinions
As indicated above, during 2017 the WP29 issued several Guidelines concerning the application of the GDPR to the right to data portability, the appointment and duties of DPOs, the identification of lead DPAs, the concepts of consent and transparency, and other issues.  In parallel, the WP29 also adopted Guidelines on the application and setting of administrative fines under the GDPR, intended for use by the supervisory authorities to ensure better application and enforcement of the GDPR.[71]  In addition to the abovementioned Guidelines, the WP29 issued various opinions regarding key issues of the Law Enforcement Directive No. 2016/680,[72] data processing in the context of Cooperative Intelligent Transport Systems (C-ITS),[73] and data processing at work,[74] as well as the draft ePrivacy Regulation proposal.[75]  The WP29 also made public working documents on the adequacy referential in the context of data transfers to third countries[76] and on the elements and principles to be found in Binding Corporate Rules.[77]
II.     Asia-Pacific and Other Notable International Developments
In an increasingly connected world, 2017 also saw many other countries try to get ahead of the challenges within the cybersecurity and data protection landscape.  Several international developments bear brief mention here:
On 1 June 2017, China’s Cybersecurity Law went into effect, becoming the first comprehensive Chinese law to regulate how companies manage and protect digital information.
The law also imposes significant restrictions on the transfer of certain data outside of the mainland (data localization), enabling government access to such data before it is exported.[78]  Despite protests and petitions by governments and multinational companies, the implementation of the Cybersecurity Law continues to progress, with the aim of regulating how many companies protect digital information.[79]  While the stated objective is to protect personal information and individual privacy, and, according to a government statement in China Daily, a state media outlet, to “effectively safeguard national cyberspace sovereignty and security,” the law in effect gives the Chinese government unprecedented access to network data for essentially all companies in the business of information technology.[80]  Notably, key components of the law disproportionately affect multinationals because the data localization requirement obligates international companies to store data domestically and to undergo a security assessment by supervisory authorities for important data that needs to be exported out of China.  Though the law imposes more stringent rules on critical information infrastructure operators (whose information could compromise national security or public welfare) than on network operators (a category broad enough to include virtually all businesses using modern technology), the law effectively subjects a majority of companies to government oversight.  As a consequence, the reality for many foreign companies is that these requirements are likely to be onerous, to increase the costs of doing business in China, and to heighten the risk of exposure to industrial espionage.[81]  Despite the release of additional draft guidelines meant to clarify certain provisions of the law, the general view is that the law remains a work in progress, with its scope and definitions still vague and uncertain.[82]  Nonetheless, companies should assess their data and information management operations to evaluate the risks posed by the expanding scope of the law, as well as their risk appetite regarding the Chinese government’s access to their network data.
With the growing threat of hacking and identity theft, the Personal Data Protection Commission of Singapore issued proposed advisory guidelines on 7 November 2017 for the collection and use of national registration identification numbers.  The guidance, which covers a great deal of personal and biometric data, emphasized companies’ obligations to ensure that policies and practices are in place to meet the data protection obligations under the Personal Data Protection Act of 2012.  The Commission is giving businesses and organizations 12 months from publication to review their processes and implement the necessary changes to ensure compliance.[83]
Several other countries, such as Australia and Turkey, also sought to address privacy issues and published important guidelines regarding procedures for deleting, destroying, and anonymizing personal data.  Other countries, such as Argentina, forged ahead with an overhaul of their data protection regimes; Argentina published a draft data protection bill that would align the country’s privacy laws with the GDPR requirements of the European Union.[84]
There has also been civic engagement with the public, as a number of countries solicited public comments on certain proposed regulations.
For example, Canada opened up for comments a proposed regulation that would mandate reporting of privacy breaches under its Personal Information Protection and Electronic Documents Act of 2015, while India recently issued a white paper inviting comments that would inform the legal framework for drafting a data protection bill to “ensure growth of the digital economy while keeping personal data of citizens secure and protected.”[85]
[1]   See Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, pp. 1-88, available at http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679.
[2]   Case C-362/14, Maximillian Schrems v. Data Protection Commissioner (Oct. 6, 2015), European Court of Justice.
[3]   For a detailed analysis of the Schrems decision, please see Gibson Dunn Client Alert: Cybersecurity and Data Privacy Outlook and Review: 2016 (Jan. 28, 2016), available at http://www.gibsondunn.com/publications/Pages/Cybersecurity-and-Data-Privacy-Outlook-and-Review–2016.aspx.
[4]   http://ec.europa.eu/justice/data-protection/article-29/press-material/press-release/art29_press_material/2016/20160726_wp29_wp_statement_eu_us_privacy_shield_en.pdf.
[5]   http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=605619.
[6]   https://www.whitehouse.gov/briefings-statements/statement-president-fisa-amendments-reauthorization-act-2017/.
[7]   http://ec.europa.eu/newsroom/just/document.cfm?doc_id=48782.
[8]   http://curia.europa.eu/juris/document/document.jsf?text=&docid=185146&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=320298.
[9]   Order of the General Court of the European Union, Digital Rights Ireland v. Commission, 22 November 2017, T-670/16.
[10]  http://curia.europa.eu.
[11]  http://curia.europa.eu.
[12]  See Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281, 23.11.1995, pp. 31-50.
[13]  See GDPR, at Article 3.
[14]  See WP29, Guidelines on Transparency under Regulation 2016/679 (WP260; draft not adopted yet), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[15]  See WP29, Guidelines on Consent under Regulation 2016/679 (WP259; 28 November 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[16]  See GDPR, at Article 17.
[17]  See EU Data Protection Directive, at Articles 12 and 14; and Case C-131/12 Google Spain SL and Google Inc. v. AEPD and Mario Costeja González ECLI:EU:C:2014:317.
[18]  See WP29, Guidelines on Personal Data Breach Notification under Regulation 2016/679 (WP250; 3 October 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[19]  See WP29, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (WP251; 3 October 2017).
[20]  See GDPR, at Article 35.
[21]  See WP29, Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (WP248; 4 April 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[22]  See WP29, Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (WP248; 4 October 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[23]  See WP29, Guidelines on the right to data portability (WP 242; 13 December 2016), available at http://ec.europa.eu/information_society/newsroom/image/document/2016-51/wp242_en_40852.pdf.
[24]  See WP29, Guidelines on the right to data portability (WP242 rev.01; 5 April 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[25]  See EU Data Protection Directive, at Articles 4(1) and 28; and Case C-230/14 Weltimmo s.r.o v. Nemzeti Adatvédelmi és Információszabadság Hatóság ECLI:EU:C:2015:639.
[26]  See GDPR, at Article 56(2).
[27]  See GDPR, at Article 56(1).
[28]  See GDPR, at Article 63.
[29]  See GDPR, at Article 66.
[30]  See WP29, Guidelines for Identifying a Controller or Processor’s Lead Supervisory Authority (WP 244; 13 December 2016), available at http://ec.europa.eu/information_society/newsroom/image/document/2016-51/wp244_en_40857.pdf.
[31]  See WP29, Guidelines for Identifying a Controller or Processor’s Lead Supervisory Authority (WP244 rev.01; 5 April 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[32]  See WP29, Guidelines on Data Protection Officers (‘DPOs’) (WP243 rev.01; 5 April 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[33]  See Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union, OJ L 194, 19.7.2016, pp. 1-30, available at http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.194.01.0001.01.ENG&toc=OJ:L:2016:194:TOC.
[34]  See ENISA, Incident Notification for DSPs in the Context of the NIS Directive: A Comprehensive Guideline on How to Implement Incident Notification for Digital Service Providers, in the Context of the NIS Directive, February 2017, available at https://www.enisa.europa.eu/publications/incident-notification-for-dsps-in-the-context-of-the-nis-directive/.
[35]  E.g., domain name systems (DNS) providers and top level domain (TLD) registries; see Article 4, NIS Directive.
[36]  See NIS Directive, at Article 1(1).
[37]  With regard to essential services, the NIS Directive will apply to all entities identified by the respective national authorities as “essential” providers of such services in that Member State, see NIS Directive, at Article 5(2).
[38]  See NIS Directive, at Article 18(2).
[39]  See NIS Directive, at Article 16(3).
[40]  See NIS Directive, at Article 18(1).  This criterion will not depend on whether the network and information systems are physically located in a given place.  See NIS Directive, at Recital 64.
[41]  See NIS Directive, at Article 18(2).
[42]  Member States will have an additional six months after the transposition into national law to identify operators of essential services (i.e., a total of 27 months).  See NIS Directive, at Article 5(1).
[43]  These should respect the fundamental rights of the effective remedy and the right to be heard.  See NIS Directive, at Recital 75.
[44]  See NIS Directive, at Article 7.
[45]  See NIS Directive, at Recital (57) and Article 3.
[46]  See NIS Directive, at Article 16(10).
[47]  See NIS Directive, at Articles 16(8) and (9).
[48]  See Proposal for a Regulation of the European Parliament and of the Council concerning the respect for private life and personal data in electronic communications and repealing Directive 2002/58/EC (‘Privacy and Electronic Communications Regulation’), available at http://www.politico.eu/wp-content/uploads/2016/12/POLITICO-e-privacy-directive-review-draft-december.pdf.
[49]  https://ec.europa.eu/digital-single-market/en/proposal-eprivacy-regulation.
[50]  http://ec.europa.eu/newsroom/document.cfm?doc_id=44103.
[51]  http://www.europarl.europa.eu/sides/getDoc.do?type=REPORT&reference=A8-2017-0324&language=EN.
[52]  https://iapp.org/resources/article/council-of-the-eu-eprivacy-regulation-proposal-december-2017/.
[53]  See draft ePrivacy Regulation, at Recital (13).  See Explanatory Memorandum, at Section 3.2.
[54]  See, e.g., Proposal for a Directive of the European Parliament and of the Council establishing the European Electronic Communications Code (Recast), COM/2016/0590, available at http://eur-lex.europa.eu/legal-content/EN/ALL/?uri=comnat:COM_2016_0590_FIN.
[55]  See draft ePrivacy Regulation, at Article 8(1).
[56]  However, in practice, the WP29 had already expressed the possibility that operators do not obtain consent for the setting and receipt of cookies in some of the circumstances now covered in the draft ePrivacy Regulation, provided that certain conditions are met.  See WP29, Opinion 04/2012 on Cookie Consent Exemption (WP 194; 7 June 2012), available at http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2012/wp194_en.pdf.
[57]  See draft ePrivacy Regulation, at Recital (25).
[58]  See draft ePrivacy Regulation, at Article 8(2).
[59]  See draft ePrivacy Regulation, at Article 16.
[60]  See draft ePrivacy Regulation, at Articles 18 ff.
[61]  See WP29, Opinion 01/2017 on the Proposed Regulation for the ePrivacy Regulation (2002/58/EC) (WP247; 4 April 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[62]  See European Parliament’s proposal, available at http://www.europarl.europa.eu/sides/getDoc.do?type=REPORT&reference=A8-2017-0324&language=EN.
[63]  See Council of the European Union’s working proposal, available at http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CONSIL:ST_11995_2017_INIT&from=EN.
[64]  https://www.euractiv.com/section/digital/news/bulgaria-makes-telecoms-overhaul-a-focus-during-council-presidency/.
[65]  See Case C-191/15 Verein für Konsumenteninformation v. Amazon EU Sàrl, available at http://curia.europa.eu/juris/document/document.jsf?text=&docid=182286&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1126849.
[66]  See Case C-230/14 Weltimmo s.r.o v. Nemzeti Adatvédelmi és Információszabadság Hatóság ECLI:EU:C:2015:639.
[67]  See Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v. Wirtschaftsakademie Schleswig-Holstein GmbH.
[68]  See Opinion of Advocate General Bot delivered on 24 October 2017, Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v. Wirtschaftsakademie Schleswig-Holstein GmbH.
[69]  See Opinion of Advocate General Bobek on Case C-498/16 Maximilian Schrems v. Facebook Ireland Limited.
[70]  See Irish High Court Commercial, The Data Protection Commissioner v. Facebook Ireland Limited and Maximilian Schrems, 2016 No. 4809 P.
[71]  See WP29, Guidelines on the Application and Setting of Administrative Fines for the Purposes of the Regulation 2016/679 (WP253; 3 October 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[72]  See WP29, Opinion on Some Key Issues of the Law Enforcement Directive (EU 2016/680) (WP258; 29 November 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[73]  See WP29, Opinion 03/2017 on Processing Personal Data in the Context of Cooperative Intelligent Transport Systems (C-ITS) (WP252; 4 October 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[74]  See WP29, Opinion 2/2017 on Data Processing at Work (WP249; 8 June 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[75]  See WP29, Opinion 01/2017 on the Proposed Regulation for the ePrivacy Regulation (2002/58/EC) (WP247; 4 April 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[76]  See WP29, Adequacy Referential (updated) (WP254; 28 November 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[77]  See WP29, Working Document Setting up a Table with the Elements and Principles to be Found in Binding Corporate Rules (WP256 and WP257; 29 November 2017), available at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.
[78]  See FT Cyber Security, “China’s cyber security law rattles multinationals,” Financial Times (30 May 2017), available at https://www.ft.com/content/b302269c-44ff-11e7-8519-9f94ee97d996.
[79]  Alex Lawson, “US Asks China Not To Implement Cybersecurity Law,” Law360 (Sept. 27, 2017), available at https://www.law360.com/articles/968132/us-asks-china-not-to-implement-cybersecurity-law.
[80]  Sophie Yan, “China’s new cybersecurity law takes effect today, and many are confused,” CNBC.com (1 June 2017), available at https://www.cnbc.com/2017/05/31/chinas-new-cybersecurity-law-takes-effect-today.html.
[81]  Christina Larson, Keith Zhai, and Lulu Yilun Chen, “Foreign Firms Fret as China Implements New Cybersecurity Law,” Bloomberg News (24 May 2017), available at https://www.bloomberg.com/news/articles/2017-05-24/foreign-firms-fret-as-china-implements-new-cybersecurity-law.
[82]  Clarice Yue, Michelle Chan, Sven-Michael Werner and John Shi, “China Cybersecurity Law update: Draft Guidelines on Security Assessment for Data Export Revised!,” Lexology (Sept. 26, 2017), available at https://www.lexology.com/library/detail.aspx?g=94d24110-4487-4b28-bfa5-4fa98d78a105.
[83]  Singapore Personal Data Protection Commission, Proposed Advisory Guidelines on the Personal Data Protection Act For NRIC Numbers, published 7 November 2017, available at https://www.pdpc.gov.sg/docs/default-source/public-consultation-6—nric/proposed-nric-advisory-guidelines—071117.pdf?sfvrsn=4.
[84]  Office of the Australian Information Commissioner, “De-identification Decision-Making Framework,” Australian Government (Sept. 18, 2017), available at https://www.oaic.gov.au/agencies-and-organisations/guides/de-identification-decision-making-framework; Lyn Nicholson, “Regulator issues new guidance on de-identification and implications for big data usage,” Lexology (Sept. 26, 2017), available at https://www.lexology.com/library/detail.aspx?g=f6c055f4-cc82-462a-9b25-ec7edc947354; “New Regulation on the Deletion, Destruction or Anonymization of Personal Data,” British Chamber of Commerce of Turkey (Sept. 28, 2017), available at https://www.bcct.org.tr/news/new-regulation-deletion-destruction-anonymization-personal-data-2/64027; Jena M. Valdetero and David Chen, “Big Changes May Be Coming to Argentina’s Data Protection Laws,” Lexology (5 June 2017), available at https://www.lexology.com/library/detail.aspx?g=6a4799ec-2f55-4d51-96bd-3d6d8c04abd2.
[85]  Naïm Alexandre Antaki and Wendy J. Wagner, “No escaping notification: Government releases proposed regulations for federal data breach reporting & notification,” Lexology (Sept. 6, 2017), available at https://www.lexology.com/library/detail.aspx?g=0a98fd33-1f2c-4a52-98c0-cf1feeaf0b90; Ministry of Electronics & Information Technology, “White Paper of the Committee of Experts on a Data Protection Framework for India,” Government of India (Nov. 27, 2017), available at http://meity.gov.in/white-paper-data-protection-framework-india-public-comments-invited.
The following Gibson Dunn lawyers assisted in the preparation of this client alert: Ahmed Baladi, Alexander Southwell, Ryan Bergsieker and Bastien Husson.
Gibson Dunn’s lawyers are available to assist with any questions you may have regarding these issues.  For further information, please contact the Gibson Dunn lawyer with whom you usually work or any of the following leaders and members of the firm’s Privacy, Cybersecurity and Consumer Protection practice group:
Europe
Ahmed Baladi – Paris (+33 (0)1 56 43 13 00, abaladi@gibsondunn.com)
James A. Cox – London (+44 (0)207071 4250, jacox@gibsondunn.com)
Patrick Doris – London (+44 (0)20 7071 4276, pdoris@gibsondunn.com)
Bernard Grinspan – Paris (+33 (0)1 56 43 13 00, bgrinspan@gibsondunn.com)
Penny Madden – London (+44 (0)20 7071 4226, pmadden@gibsondunn.com)
Jean-Philippe Robé – Paris (+33 (0)1 56 43 13 00, jrobe@gibsondunn.com)
Michael Walther – Munich (+49 89 189 33-180, mwalther@gibsondunn.com)
Nicolas Autet – Paris (+33 (0)1 56 43 13 00, nautet@gibsondunn.com)
Kai Gesing – Munich (+49 89 189 33-180, kgesing@gibsondunn.com)
Sarah Wazen – London (+44 (0)20 7071 4203, swazen@gibsondunn.com)
Emmanuelle Bartoli – Paris (+33 (0)1 56 43 13 57, ebartoli@gibsondunn.com)
Alejandro Guerrero Perez – Brussels (+32 2 554 7218, aguerreroperez@gibsondunn.com)
Asia
Kelly Austin – Hong Kong (+852 2214 3788, kaustin@gibsondunn.com)
Jai S. Pathak – Singapore (+65 6507 3683, jpathak@gibsondunn.com)
United States
Alexander H. Southwell – Chair, PCCP Practice, New York (+1 212-351-3981, asouthwell@gibsondunn.com)
Caroline Krass – Chair, National Security Practice, Washington, D.C. (+1 202-887-3784, ckrass@gibsondunn.com)
M. Sean Royall – Dallas (+1 214-698-3256, sroyall@gibsondunn.com)
Debra Wong Yang – Los Angeles (+1 213-229-7472, dwongyang@gibsondunn.com)
Richard H. Cunningham – Denver (+1 303-298-5752, rhcunningham@gibsondunn.com)
Howard S. Hogan – Washington, D.C. (+1 202-887-3640, hhogan@gibsondunn.com)
Joshua A. Jessen – Orange County/Palo Alto (+1 949-451-4114/+1 650-849-5375, jjessen@gibsondunn.com)
Kristin A. Linsley – San Francisco (+1 415-393-8395, klinsley@gibsondunn.com)
Shaalu Mehra – Palo Alto (+1 650-849-5282, smehra@gibsondunn.com)
Karl G. Nelson – Dallas (+1 214-698-3203, knelson@gibsondunn.com)
Eric D. Vandevelde – Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com)
Benjamin B. Wagner – Palo Alto (+1 650-849-5395, bwagner@gibsondunn.com)
Michael Li-Ming Wong – San Francisco/Palo Alto (+1 415-393-8333/+1 650-849-5393, mwong@gibsondunn.com)
Ryan T. Bergsieker – Denver (+1 303-298-5774, rbergsieker@gibsondunn.com)
Questions about SEC disclosure issues concerning data privacy and cybersecurity can also be addressed to the following leaders and members of the Securities Regulation and Corporate Disclosure Group:
James J. Moloney – Orange County, CA (+1 949-451-4343, jmoloney@gibsondunn.com)
Elizabeth Ising – Washington, D.C. (+1 202-955-8287, eising@gibsondunn.com)
Lori Zyskowski – New York (+1 212-351-2309, lzyskowski@gibsondunn.com)
© 2018 Gibson, Dunn & Crutcher LLP
Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

January 1, 2018 |
WTR1000 Recognizes Gibson Dunn’s Trademark Work

The 2018 edition of the World Trademark Review 1000 recognized Gibson Dunn’s work in the area of trademarks, noting that the firm “deftly serves global brand leaders and makes light work of even the most complicated suits.”  Washington, D.C. partner Howard Hogan is also recognized as “a leader in helping to shape policy initiatives that benefit trademark practice in the United States and elsewhere.”  The WTR 1000, published January 2018, recommends individual practitioners and their firms exclusively in the trademark field, and identifies the leading players in 70 key jurisdictions globally.