New York partner Joel Cohen, London partner Sacha Harber-Kelly and London associate Steve Melrose are the authors of “Why Corporations Should Rethink How They Evaluate Deferred Prosecution Agreements,” [PDF] published by the New York Law Journal on May 6, 2021.

Washington, D.C. partner Mark Perry and Los Angeles partner Perlette Michèle Jura are the authors of the “United States” [PDF] chapter in Appeals 2021 published by Lexology in May 2021.

London partners Doug Watson and Patrick Doris and associate Daniel Barnett are the authors of the “United Kingdom” [PDF] chapter in Appeals 2021 published by Lexology in May 2021.

Join our panelists for a discussion of cap and trade programs in the United States and Europe, and a forecast of what to expect for a U.S. carbon market under the Biden administration. The panel will cover the potential for federal action by the new administration as well as the existing cap and trade systems of California, the Regional Greenhouse Gas Initiative, and the EU Emissions Trading System, including lessons learned and key takeaways from these existing systems.

View Slides (PDF)



PANELISTS:

Lena Sandberg is a partner in the Brussels office where she is a member of the firm's Energy Group. Ms. Sandberg's practice covers all aspects of competition law, including subsidies (State aid), and she has extensive regulatory experience in the energy and environmental sectors, including gas, renewables, electricity production and transmission, carbon emission reduction schemes and hydrogen. Prior to joining Gibson Dunn, Ms. Sandberg served as Senior Officer in the Competition and State Aid Directorate at the EFTA Surveillance Authority, where she covered complex questions in the energy and environmental area, particularly in the field of the EU Emissions Trading Scheme, renewable energy, energy taxes, electricity supply, carbon capture, and a range of related issues.

Jeffrey L. Steiner is a partner in the Washington, D.C. office, where he co-leads the firm's Derivatives practice and is co-leader of the firm's Digital Currencies and Blockchain Technology practice. Mr. Steiner advises financial institutions, energy companies, private funds, corporations and others on compliance and implementation issues relating to derivatives and commodities trading, including compliance with CFTC and SEC rules, the Dodd-Frank Act, and other rules and regulations. He also helps clients navigate cross-border issues resulting from global derivatives and commodities requirements. Before joining the firm, Mr. Steiner was special counsel in the Division of Market Oversight at the CFTC, where he handled a range of issues relating to the trading and execution of futures and swaps.

Abbey Hudson is a partner in the Los Angeles office where she is a member of the Environmental Litigation and Mass Tort Practice Group. Ms. Hudson devotes a significant portion of her time to helping clients navigate environmental and emerging regulations and related governmental investigations. She has handled all aspects of environmental and mass tort litigation and regulatory compliance. She also provides counseling and advice on environmental and regulatory compliance to clients on a wide range of issues, including supply chain transparency requirements, comments on pending regulatory developments, and enforcement.

Jennifer C. Mansh is a senior associate in the Washington, D.C. office and a member of the firm’s Energy, Regulation and Litigation Practice Group. Ms. Mansh advises clients on energy litigation, regulatory, and transactional matters before the FERC, CFTC, the Department of Energy, and state public utility commissions. Ms. Mansh has represented a wide variety of electric utilities, merchant transmission companies, power marketers, and natural gas and oil pipeline companies in rate, licensing, and enforcement proceedings, and she assists clients on a variety of transactional matters and compliance issues.

Mark Tomaier is an associate in the Orange County office where he practices general litigation in the firm's Litigation Department. Mr. Tomaier earned his law degree cum laude in 2017 from Harvard Law School, where he was an Articles Editor on the Harvard Environmental Law Review. In 2012, he graduated with highest honors from the University of California, Berkeley with a Bachelor of Arts degree, double majoring in English and Rhetoric. Prior to joining the firm, Mr. Tomaier served as a law clerk to The Honorable Marilyn L. Huff in the United States District Court for the Southern District of California and as a law clerk to The Honorable Michael D. Wilson in the Supreme Court of Hawaii.


MCLE CREDIT INFORMATION:

This program has been approved for credit in accordance with the requirements of the New York State Continuing Legal Education Board for a maximum of 1.0 credit hour, of which 1.0 credit hour may be applied toward the areas of professional practice requirement.

This course is approved for transitional/non-transitional credit. Attorneys seeking New York credit must obtain an Affirmation Form prior to watching the archived version of this webcast. Please contact CLE@gibsondunn.com to request the MCLE form.

Gibson, Dunn & Crutcher LLP certifies that this activity has been approved for MCLE credit by the State Bar of California in the amount of 1.0 hour.

California attorneys may claim “self-study” credit for viewing the archived version of this webcast. No certificate of attendance is required for California “self-study” credit.

Los Angeles partner Theane Evangelis is the author of “Don’t turn classrooms into courtrooms and retraumatize victims,” [PDF] published by the Daily Journal on April 28, 2021.

Los Angeles partner Michael Dore is the author of “Legal issues to watch in navigating the secondary market for NFTs,” [PDF] published by the Daily Journal on April 27, 2021.

On April 27, 2021, a federal court in the Northern District of California dismissed federal and state law claims brought derivatively on behalf of The Gap, Inc., holding that the California proceedings were foreclosed by a forum selection bylaw designating the Delaware Court of Chancery as the exclusive forum for derivative suits (the "Forum Bylaw"). See Lee v. Fisher, Case No. 20-cv-06163-SK, ECF No. 59. This decision strikes a blow against what has become a new tactic of the plaintiffs' bar: asserting violations of the federal securities laws in the guise of shareholder derivative claims. This ruling furthers the purpose of exclusive forum bylaws to prevent duplicative litigation in multiple forums, and highlights the benefits these bylaws may achieve for companies.

The plaintiff in Fisher brought derivative claims purportedly on behalf of Gap against certain directors and officers based on their alleged failure to promote diversity at Gap and for allegedly making misleading statements about Gap’s commitment to diversity. The plaintiff asserted both state law claims (like breach of fiduciary duty) and a federal securities law claim for violation of Section 14(a) of the Securities Exchange Act.

Defendants moved to dismiss on forum non conveniens grounds pursuant to the Forum Bylaw. Plaintiff argued that the court could not enforce the Forum Bylaw as to the federal Section 14(a) claim because (1) that claim was subject to exclusive federal jurisdiction and could not be asserted in the Delaware Court of Chancery, and (2) enforcing the Forum Bylaw would violate the Exchange Act provision that prohibits waiving compliance with the Exchange Act (the “anti-waiver” provision).

The court rejected plaintiff’s arguments and enforced the Forum Bylaw, effectively precluding the plaintiff from asserting a Section 14(a) claim in any forum. First, the court noted the strong policy in favor of enforcing forum selection clauses, which the Ninth Circuit has held supersedes anti-waiver provisions like those in the Exchange Act. See Yei A. Sun v. Advanced China Healthcare, Inc., 901 F.3d 1081 (9th Cir. 2018). Second, relying on the Ninth Circuit’s holding in Sun that a forum selection clause should be enforced unless the forum “affords the plaintiffs no remedies whatsoever,” the court held that the Forum Bylaw was enforceable because the plaintiff could file a separate state law derivative action in Delaware, even if that action could not include federal securities law claims.

This ruling is notable because other federal courts confronted with a similar argument have decided to enforce these bylaws only as to state law claims, and to keep the federal claims in federal court. The result of those rulings, though, is that derivative actions involving the same alleged misconduct could proceed in two forums—actions in federal court involving federal law claims, and actions in state court involving state law claims. This result undermines the purpose of exclusive forum bylaws to prevent duplicative litigation in multiple forums.

The Fisher decision, as well as a similar ruling reached in Seafarers Pension Plan v. Bradway, 2020 WL 3246326 (N.D. Ill. June 8, 2020), should help establish that exclusive forum bylaws require all derivative actions to proceed in a single forum. When drafting and (later) enforcing exclusive forum bylaws, companies should have these recent decisions top of mind to make sure that these bylaws achieve their goal of efficiently litigating disputes in one forum only.


Gibson Dunn lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, any member of the Securities Litigation or Securities Regulation and Corporate Governance practice groups, or the following authors:

Brian M. Lutz – San Francisco/New York (+1 415-393-8379/+1 212-351-3881, blutz@gibsondunn.com)
Jason J. Mendro – Washington, D.C. (+1 202-887-3726, jmendro@gibsondunn.com)
Ronald O. Mueller – Washington, D.C. (+1 202-955-8671, rmueller@gibsondunn.com)
Michael J. Kahn – San Francisco (+1 415-393-8316, mjkahn@gibsondunn.com)

Please also feel free to contact any of the following practice leaders and members:

Securities Litigation Group:
Monica K. Loseman – Co-Chair, Denver (+1 303-298-5784, mloseman@gibsondunn.com)
Brian M. Lutz – Co-Chair, San Francisco/New York (+1 415-393-8379/+1 212-351-3881, blutz@gibsondunn.com)
Robert F. Serio – Co-Chair, New York (+1 212-351-3917, rserio@gibsondunn.com)
Craig Varnen – Co-Chair, Los Angeles (+1 213-229-7922, cvarnen@gibsondunn.com)
Jefferson Bell – New York (+1 212-351-2395, jbell@gibsondunn.com)
Matthew L. Biben – New York (+1 212-351-6300, mbiben@gibsondunn.com)
Michael D. Celio – Palo Alto (+1 650-849-5326, mcelio@gibsondunn.com)
Paul J. Collins – Palo Alto (+1 650-849-5309, pcollins@gibsondunn.com)
Jennifer L. Conn – New York (+1 212-351-4086, jconn@gibsondunn.com)
Thad A. Davis – San Francisco (+1 415-393-8251, tadavis@gibsondunn.com)
Mark A. Kirsch – New York (+1 212-351-2662, mkirsch@gibsondunn.com)
Jason J. Mendro – Washington, D.C. (+1 202-887-3726, jmendro@gibsondunn.com)
Alex Mircheff – Los Angeles (+1 213-229-7307, amircheff@gibsondunn.com)
Robert C. Walters – Dallas (+1 214-698-3114, rwalters@gibsondunn.com)

Securities Regulation and Corporate Governance Group:
Elizabeth Ising – Co-Chair, Washington, D.C. (+1 202-955-8287, eising@gibsondunn.com)
James J. Moloney – Co-Chair, Orange County, CA (+ 949-451-4343, jmoloney@gibsondunn.com)
Lori Zyskowski – Co-Chair, New York (+1 212-351-2309, lzyskowski@gibsondunn.com)
Brian J. Lane – Washington, D.C. (+1 202-887-3646, blane@gibsondunn.com)
Ronald O. Mueller – Washington, D.C. (+1 202-955-8671, rmueller@gibsondunn.com)
Thomas J. Kim – Washington, D.C. (+1 202-887-3550, tkim@gibsondunn.com)
Michael A. Titera – Orange County, CA (+1 949-451-4365, mtitera@gibsondunn.com)

© 2021 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

On April 28, 2021, the U.S. Senate approved a resolution to repeal EPA's 2020 policy amendments to regulations of upstream and midstream oil and gas operations. Under the 2020 policy amendments, the Trump Administration had declined to regulate oil and gas transmission and storage operations or to set methane emission limits under Section 111 of the Clean Air Act's ("CAA") New Source Performance Standards ("NSPS"). If the U.S. House of Representatives approves the resolution and the President signs it, the 2020 policy amendments will no longer be in effect, restoring key aspects of an earlier Obama Administration rule regulating methane from production and processing operations at upstream oil and gas facilities as well as from transmission and storage operations.

Key Takeaways:

  • The recent Senate resolution targets the last Administration’s rulemaking declining to regulate methane emissions from production and processing operations at oil and gas facilities. The soon-to-be repealed rule also declined to regulate associated transmission and storage operations.
  • Once the House of Representatives passes the same resolution and it is signed into law by President Biden, EPA will be able to move quickly to regulate methane emissions from this sector, as well as volatile organic compound ("VOC") and methane emissions from transmission and storage operations.
  • Impacted sources in the sector should begin evaluating compliance with the 2016 Obama Administration rules governing methane from production and processing operations as well as transmission and storage operations.
  • For production and processing operations, compliance with methane requirements should complement existing VOC compliance programs under NSPS Subpart OOOOa, although additional requirements could attach for operations in areas of ozone nonattainment.
  • The 2020 technical amendments to the NSPS Subpart OOOOa program governing production and processing operations remain unaffected.

Detailed Analysis: In 2012 and again in 2016, the Obama Administration promulgated new regulations for the oil and gas industry under the CAA's NSPS (the "2016 NSPS"). The 2016 NSPS added the transmission and storage segment of the oil and gas industry to the NSPS regulated source category,[1] applying the NSPS standards to storage tanks, compressors, equipment leaks, and pneumatic controllers, among other sources in that segment.[2] The 2016 NSPS also added methane emission limits for the same segment.[3]

In 2020, EPA repealed these changes, issuing final policy amendments that removed the transmission and storage segment sources from the NSPS source category.[4] Further, EPA rescinded the separate methane emission limits for the production and processing segments of the source category while retaining limits for VOCs, and EPA also interpreted the CAA to require a "significant contribution finding" for any particular air pollutant before setting performance standards for that pollutant, unless EPA had addressed the pollutant when first listing or regulating the source category.[5] This latter requirement was significant because, among other reasons, EPA did not consider methane emissions when it initially listed the oil and gas source category in 1979, and regulating methane would thus require a "significant contribution finding."[6]

In a separate rulemaking also finalized in 2020, EPA made a number of separate technical amendments to the 2016 NSPS.[7] This final rule was not cited in the resolution that passed the Senate.

This week, Congress began the process of reversing course. The Senate passed a resolution, S.J. Res. 14,[8] disapproving EPA's 2020 policy amendments. The Senate voted by a 52-42 margin, with three Republicans in the majority, to repeal the 2020 policy amendments pursuant to its authority under the Congressional Review Act ("CRA"). Under the CRA, certain agency rules must be reported to Congress and the Government Accountability Office.[9] After receiving the report, Congress is authorized to disapprove the promulgated rule within 60 session days.[10] Significantly, when certain criteria are met, a joint resolution of disapproval cannot be filibustered in the Senate.[11] Moreover, disapproval carries longer-term effects: the CRA prohibits a rule in "substantially the same form" as the disapproved rule from being subsequently promulgated (unless authorized by a subsequent law).

Although the Senate resolution is a significant step toward repeal of the 2020 policy amendments, the 2016 NSPS are not yet back in effect. To repeal the 2020 rule and reinstate the 2016 NSPS (subject to the technical changes finalized in 2020, which are unaffected by the CRA resolution), the House of Representatives will also need to pass the same resolution, which it is expected to vote on in the coming weeks.[12] Disapproval treats the 2020 rule "as though such rule had never taken effect."[13] Questions remain as to whether this repeal will reignite past litigation challenging the 2012 and 2016 NSPS rulemakings.

Affected facilities in the transmission and storage segments of the source category that will soon be subject to the NSPS should prepare for compliance. Furthermore, all facilities in the source category subject to NSPS, including in the production and processing segments, should ensure that they have adequate controls to meet the 2016 NSPS requirements for methane emissions. The practical impact of this reversion is uncertain, particularly given EPA’s findings in 2020 that separate methane limitations for these segments of the industry are redundant because controls used to reduce VOC emissions also reduce methane. Moreover, given the uncertainty created by the CRA’s language that a disapproved rule is rendered not only ineffective moving forward but also “as though such rule had never taken effect,” EPA likely will need to issue guidance to regulated entities in order to explain its expectations for compliance and the timing thereof. EPA likely also will need to promulgate a ministerial rule restoring the applicable regulatory text from the 2016 NSPS in the Code of Federal Regulations.

Litigation over the 2012 and 2016 rulemakings, currently held in abeyance, likely will resume in the wake of this resolution. In addition, EPA will once again be responsible for issuing an existing source rule for this source category. Because EPA rescinded methane limits for the source category, EPA was no longer required to issue emission guidelines to address existing sources. This will change after the CRA resolution is approved.

_____________________

  [1]  EPA Issues Final Policy Amendments to the 2012 and 2016 New Source Performance Standards for the Oil and Natural Gas Industry: Fact Sheet, epa.gov (Aug. 13, 2020), https://www.epa.gov/sites/production/files/2020-08/documents/og_policy_amendments.fact_sheet._final_8.13.2020_.pdf.

  [2]  EPA's Policy Amendments to the New Source Performance Standards for the Oil and Gas Industry, epa.gov (Aug. 2020).

  [3]  Supra note 1.  For additional analysis of the previous standard, see S. Fletcher and D. Schnitzer, "Inside EPA's Plan for Reducing Methane Emissions," Law360 (Aug. 20, 2015), available at https://www.gibsondunn.com/wp-content/uploads/documents/publications/Fletcher-Schnitzer-Inside-EPAs-Plan-For-Reducing-Methane-Emissions-Law360-08-20-2015.pdf; "Client Alert: EPA Announces Program Addressing Methane Emissions from Oil and Gas Production" (Jan. 15, 2015), available at https://www.gibsondunn.com/epa-announces-program-addressing-methane-emissions-from-oil-and-gas-production/.

  [4]  Supra note 1.

  [5]  Id.

  [6]  See id.

  [7]  Id.

  [8]  A joint resolution providing for congressional disapproval under chapter 8 of title 5, United States Code, of the rule submitted by the Environmental Protection Agency relating to “Oil and Natural Gas Sector: Emission Standards for New, Reconstructed, and Modified Sources Review”, S.J.Res.14, 117th Cong. (2021).

  [9]  5 U.S.C. §801(a)(1)(A).

[10]  See 5 U.S.C. §802.

[11]  See 5 U.S.C. §802(d).

[12]  Jeff Brady, Senate Votes To Restore Regulations On Climate-Warming Methane Emissions, NPR (Apr. 28, 2021), https://www.npr.org/2021/04/28/991635101/senate-votes-to-restore-regulations-on-climate-warming-methane-emissions.

[13]  5 U.S.C. §801(f).


Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Environmental Litigation and Mass Tort practice group, or the following authors:

Stacie B. Fletcher – Washington, D.C. (+1 202-887-3627, sfletcher@gibsondunn.com)
David Fotouhi – Washington, D.C. (+1 202-955-8502, dfotouhi@gibsondunn.com)
Mark Tomaier – Orange County, CA (+1 949-451-4034, mtomaier@gibsondunn.com)

Please also feel free to contact the following practice group leaders:

Environmental Litigation and Mass Tort Group:
Stacie B. Fletcher – Washington, D.C. (+1 202-887-3627, sfletcher@gibsondunn.com)
Daniel W. Nelson – Washington, D.C. (+1 202-887-3687, dnelson@gibsondunn.com)

© 2021 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

This edition of Gibson Dunn's Federal Circuit Update summarizes a new petition for certiorari in a case originating in the Federal Circuit concerning the Kessler preclusion doctrine, addresses the Federal Circuit's announcement that Judge Moore will become Chief Judge on May 22, 2021, and discusses recent Federal Circuit decisions concerning self-enabling references, Article III standing, patent eligibility, and more Western District of Texas venue issues.

Federal Circuit News

Supreme Court:

This month, the Supreme Court did not add any new cases originating at the Federal Circuit. As we summarized in our January and February updates, the Court has two such cases pending: United States v. Arthrex, Inc. (U.S. Nos. 19-1434, 19-1452, 19-1458) and Minerva Surgical Inc. v. Hologic Inc. (U.S. No. 20-440).

The Court heard argument on the doctrine of assignor estoppel on Wednesday, April 21, 2021, in Minerva v. Hologic.

Noteworthy Petitions for a Writ of Certiorari:

There is one new potentially impactful certiorari petition that is currently before the Supreme Court:

PersonalWeb Technologies, LLC v. Patreon, Inc. (U.S. No. 20-1394): 1. Whether the Federal Circuit correctly interpreted Kessler v. Eldred, 206 U.S. 285 (1907), to create a freestanding preclusion doctrine that may apply even when claim and issue preclusion do not. 2. Whether the Federal Circuit properly extended its Kessler doctrine to cases where the prior judgment was a voluntary dismissal.   

Other updates include:

On April 1, the Court requested a response in Warsaw Orthopedic v. Sasso (U.S. No. 20-1284) concerning state versus federal court jurisdiction.

As of April 26, the cert-stage briefing is complete in Sandoz v. Immunex (U.S. No. 20-1110), which concerns obviousness-type double patenting. The Association for Accessible Medicines has filed an amicus brief in support of Sandoz, and the Court has distributed the case for its May 13 conference.

On April 19, Illumina filed its brief in opposition in Ariosa Diagnostics, Inc. v. Illumina, Inc. (U.S. No. 20-892) concerning patent eligibility under 35 U.S.C. § 101.

American Axle & Manufacturing, Inc. v. Neapco Holdings LLC (U.S. No. 20-891), also concerning patent eligibility under 35 U.S.C. § 101, is scheduled for the Court’s April 30 conference.

Other Federal Circuit News:

The Federal Circuit announced that, on May 22, 2021, the Honorable Kimberly A. Moore will become Chief Judge. She will succeed the Honorable Sharon Prost, who has served as Chief Judge since May 31, 2014, and as Circuit Judge since September 24, 2001. Judge Moore has served as Circuit Judge on the Federal Circuit since September 8, 2006.

Upcoming Oral Argument Calendar

The list of upcoming arguments at the Federal Circuit is available on the court's website.

Live streaming audio is available on the Federal Circuit’s new YouTube channel. Connection information is posted on the court’s website.

Key Case Summaries (April 2021)

Raytheon Technologies Corp. v. General Electric Co. (Fed. Cir. No. 20-1755): Raytheon appealed a final inter partes review decision from the PTAB determining that certain claims of the asserted patent were unpatentable as obvious. Raytheon argued before the Board that the prior art reference, Knip, failed to enable a skilled artisan to make the claimed invention because Knip relied on "revolutionary" materials unavailable as of the priority date of the asserted patent. The Board found that Knip was "enabling" because it provided enough information to allow a skilled artisan to calculate the power density of Knip's advanced engine, which fell within the claimed density range. The Board thus concluded that Knip rendered the challenged claims obvious.

The Federal Circuit (Chen, J., joined by Lourie, J. and Hughes, J.) reversed. The court agreed with Raytheon that the Board legally erred in its prior art enablement analysis. To render a claim obvious, the prior art must enable a skilled artisan to make and use the claimed invention. Rather than determining whether Knip enabled a skilled artisan to make and use the claimed invention, the Board focused only on whether Knip provided a skilled artisan with sufficient parameters to determine the claimed power density without undue experimentation. The Board defended its position by noting that the claims did not require the advanced materials disclosed in Knip. However, Raytheon had presented unrebutted testimony that Knip fails to enable a skilled artisan to physically make Knip's engine given the unavailability of the revolutionary composite material contemplated by Knip. The court thus concluded that the Board's finding that Knip is "enabling" was legal error, because without a physical working engine, a skilled artisan could not achieve the claimed power density.

Apple Inc. v. Qualcomm Inc. (Fed. Cir. No. 20-1561): Apple appealed two PTAB inter partes review final written decisions holding that Apple had not proved several claims of two patents were obvious. These two patents were also asserted against Apple in district court. However, before Apple filed its appeal to the Federal Circuit, Apple and Qualcomm settled all litigation between the companies. Based on that settlement, the district court action was dismissed with prejudice at the parties' request.

The Federal Circuit (Moore, J., joined by Reyna, J. and Hughes, J.) dismissed Apple's appeal for lack of standing. As a preliminary matter, the court stated that Apple should have addressed arguments and evidence establishing its standing in its opening brief. The court declined to apply waiver, however, and addressed the merits of the standing issue. The court rejected Apple's argument that its ongoing payment obligations provide standing because Apple did not provide evidence that the validity of any single patent, including the two patents at issue, would impact those obligations. The court also found Apple's argument that Qualcomm could later sue for infringement after the settlement agreement expires was too speculative to confer standing. Finally, the court explained that any injury based on the inter partes review estoppel provision was also too speculative to provide standing, especially where Apple did not show that it will likely be practicing the patent claims.

In Re: Board of Trustees of the Leland Stanford Junior University (Fed. Cir. No. 20-1288): The PTAB affirmed an examiner’s final rejection of claims directed to “computerized statistical methods for determining haplotype phase,” on the basis that the claims were not patent-eligible under 35 U.S.C. § 101.  Haplotype phasing “is a process for determining the parent from whom alleles—i.e., versions of a gene—are inherited.” The PTAB held that the claims were directed to two abstract mental processes: (1) “the step of ‘imputing an initial haplotype phase for each individual in the plurality of individuals based on a statistical model’”; and (2) “the step of automatically replacing an imputed haplotype phase with a randomly modified haplotype phase when the latter is more likely correct than the former.” The PTAB also held that the claims lacked an inventive concept, as they “recited generic steps of receiving and storing genotype data in a computer memory, extracting the predicted haplotype phase from the data structure, and storing it in a computer memory.”

The Federal Circuit (Reyna, J., joined by Prost, C.J. and Lourie, J.) affirmed. At step one, the court held that the claims were directed to the abstract idea of "the use of mathematical calculations and statistical modeling." The court rejected the applicant's argument that the claims provided a technological improvement by allowing for "more accurate haplotype predictions." The court explained that "[t]he different use of a mathematical calculation, even one that yields different or better results, does not render patent eligible subject matter." At step two, the court held that the claims lacked an inventive concept because "the recited steps of receiving, extracting, and storing data amount to well-known, routine, and conventional steps taken when executing a mathematical algorithm on a regular computer."

In Re TracFone Wireless (Fed. Cir. No. 21-136) (nonprecedential): As discussed in our March update, the Federal Circuit granted TracFone’s first mandamus petition, ordering Judge Albright to “issue [his] ruling on the motion to transfer within 30 days from the issuance of this order, and to provide a reasoned basis for its ruling that is capable of meaningful appellate review.” On April 20, the court granted mandamus for a second time, holding that Judge Albright “clearly abused” his discretion in denying transfer under § 1404(a) by relying on a “rigid and formulaic” application of the Fifth Circuit’s 100-mile rule.


Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding developments at the Federal Circuit. Please contact the Gibson Dunn lawyer with whom you usually work or the authors of this alert:

Blaine H. Evanson – Orange County (+1 949-451-3805, bevanson@gibsondunn.com)
Jessica A. Hudak – Orange County (+1 949-451-3837, jhudak@gibsondunn.com)

Please also feel free to contact any of the following practice group co-chairs or any member of the firm’s Appellate and Constitutional Law or Intellectual Property practice groups:

Appellate and Constitutional Law Group:
Allyson N. Ho – Dallas (+1 214-698-3233, aho@gibsondunn.com)
Mark A. Perry – Washington, D.C. (+1 202-887-3667, mperry@gibsondunn.com)

Intellectual Property Group:
Wayne Barsky – Los Angeles (+1 310-552-8500, wbarsky@gibsondunn.com)
Josh Krevitt – New York (+1 212-351-4000, jkrevitt@gibsondunn.com)
Mark Reiter – Dallas (+1 214-698-3100, mreiter@gibsondunn.com)

© 2021 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

Los Angeles partner Maurice Suh and of counsel Daniel Weiss are the authors of “NCAA under scrutiny in grant-in-aid cap antitrust litigation,” [PDF] published by the Daily Journal on April 26, 2021.

On April 26, 2021, in McMorris v. Carlos Lopez & Associates, LLC,[1] Judges Calabresi, Katzmann, and Sullivan of the Second Circuit entered the muddy waters at the intersection of data privacy and constitutional law in addressing when a plaintiff in a data breach case has suffered a sufficient injury to establish standing to bring a lawsuit in federal court under Article III of the United States Constitution based on an increased risk of future identity theft. This question presented a matter of first impression for the Second Circuit, which sought to harmonize the divergent approaches taken by its sister circuits on this pressing—and oft-recurring—issue by articulating a non-exhaustive three-factor test to aid courts’ future adjudication of these highly fact-specific disputes. Applying this test, the Second Circuit affirmed the district court’s dismissal for lack of standing because the plaintiffs had failed to plead a sufficient risk of future identity fraud.

I.   Article III Standing and Data Privacy

Under Article III of the United States Constitution, “federal courts lack jurisdiction if no named plaintiff has standing.”[2] To establish standing, plaintiffs must demonstrate that they have (1) “suffered an injury in fact” (2) that “was caused by the defendant,” and which (3) “would likely be redressed by the requested judicial relief.”[3]   In turn, an injury in fact requires “‘an invasion of a legally protected interest’ that is ‘concrete and particularized’ and ‘actual or imminent, not conjectural or hypothetical.’”[4] While an alleged risk of future harm may suffice, a mere “possible future injury” or even an “objectively reasonable likelihood” of a future injury is not enough to meet the injury in fact requirement.[5] Instead, the future injury must be “certainly impending” or there must be “a substantial risk that the harm will occur.”[6]

Whether an injury in fact has been adequately pleaded is often a threshold issue raised at the motion to dismiss stage in litigation concerning data breaches. Despite the frequency with which this question arises, however, it is widely recognized that “courts have struggled” to answer it in a consistent manner[7] and the federal courts of appeals “are divided.”[8]

For instance, the D.C. Circuit has found it “at least plausible” that data breach victims “run a substantial risk of falling victim” to future identity theft, particularly where some plaintiffs “have already experienced some form of identity theft since the breaches.”[9] Similarly, the Ninth Circuit suggested that it was sufficient for standing purposes if hackers “accessed information that could be used to help commit identity fraud or identity theft” or had “the means” to access such information going forward in light of the data breach.[10]

On the other hand, the Third Circuit has long held that plaintiffs lack standing if “no misuse is alleged” and there is “no quantifiable risk of damage in the future.”[11] More recently, the Eighth Circuit similarly held that “a mere possibility” of future harm following hackers’ theft of financial information was not a constitutionally cognizable injury,[12] and earlier this year the Eleventh Circuit agreed that “a mere data breach does not, standing alone, satisfy the requirements of Article III standing.”[13]

II.   Facts and Procedural History of McMorris

In June 2018, an employee at Carlos Lopez & Associates, LLC ("CLA") accidentally sent a spreadsheet containing the Social Security numbers, home addresses, dates of birth, telephone numbers, hiring dates, and other personal information of approximately 130 current and former CLA employees to all of the company's then-current employees.[14] Three individuals whose personally identifiable information was disclosed filed a class-action complaint against CLA, asserting various state-law claims and alleging two distinct injuries.[15] First, they claimed that the disclosure put them "'at imminent risk of suffering identity theft' and becoming the victims of 'unknown but certainly impending future crimes.'"[16] Second, they alleged they were injured "in the form of the time and money spent monitoring or changing their financial information and accounts."[17] Notably, however, they never alleged that their personal information was actually shared outside of CLA or misused by anyone.

Although the parties reached a proposed class settlement, Judge Furman of the United States District Court for the Southern District of New York declined to approve the settlement and instead dismissed the matter sua sponte for lack of subject-matter jurisdiction.[18] In doing so, he held that the plaintiffs' alleged increased risk of future identity theft was not sufficiently concrete to support standing.[19] With no allegations that CLA's release of personal information was intentional, involved malicious third parties, or had caused any actual misuse of data, Judge Furman found the plaintiffs' injury too speculative and attenuated to qualify as an injury in fact.[20] He also rejected their theory of injury based on the actual costs they had incurred as a result of the disclosure of their personal information, reasoning that plaintiffs "cannot manufacture standing merely by inflicting harm on themselves based on their fears of hypothetical future harm that is not certainly impending."[21] Since the possibility of identity theft was speculative, any costs incurred to avoid it did not qualify as injuries in fact.

III.   The Second Circuit’s Legal Analysis

In an opinion written by Judge Sullivan, the Second Circuit affirmed the district court’s dismissal of the claims against CLA for lack of standing.

While it recognized that other circuits had wrestled with the question of “whether a plaintiff may establish standing based on a risk of future identity theft or fraud stemming from the unauthorized disclosure of that plaintiff’s data,”[22] the Second Circuit sought to bridge the apparent divide. Its reading of its sister circuits’ decisions was that none had “explicitly foreclosed” a future-harm theory.[23] Instead, Judge Sullivan reasoned that the Third, Eighth, and Eleventh Circuits had only “declined to find standing on the facts of a particular case.”[24] The Second Circuit therefore characterized itself as “join[ing] all of [its] sister circuits that have specifically addressed the issue in holding that plaintiffs may establish standing based on an increased risk of identity theft or fraud following the unauthorized disclosure of their data.”[25]

However, the Second Circuit did not hold that any such allegation was sufficient to plead an injury in fact. Instead, it endorsed three non-dispositive and non-exhaustive factors that, it said, other appellate courts have “consistently addressed in the context of data breaches and other data exposure incidents” as providing “helpful guidance” in assessing the presence or absence of constitutional standing: “(1) whether the plaintiffs’ data has been exposed as the result of a targeted attempt to obtain that data; (2) whether any portion of the dataset has already been misused, even if the plaintiffs themselves have not yet experienced identity theft or fraud; and (3) whether the type of data that has been exposed is sensitive such that there is a high risk of identity theft or fraud.”[26]

Applying these factors to CLA's data disclosure, the Second Circuit held that the plaintiffs had failed to plead a sufficient risk of future identity theft or fraud to establish Article III standing. The first two factors weighed in favor of dismissal in McMorris because the case "merely involve[d] the inadvertent disclosure of [personal information] due to an errant email,"[27] not a targeted or malicious attempt to obtain data, and the plaintiffs never alleged that any of "the exposed dataset was compromised."[28] Although the third factor weighed in favor of finding that the court had Article III jurisdiction because the disclosed data "included the sort of [personally identifiable information] that might put Plaintiffs at a substantial risk of identity theft or fraud, in the absence of any other facts suggesting that the [data] was intentionally taken by an unauthorized third party or otherwise misused," the Second Circuit held that "this factor alone does not establish an injury in fact."[29] As such, the first two factors proved fatal to plaintiffs' claimed standing based on a risk of future harm. And, as a result, the plaintiffs' claims based on their protective-measures theory also failed because absent "a substantial risk of future identity theft," any efforts "protecting . . . against [a] speculative threat cannot create an injury."[30]

IV.   Conclusion

Whether McMorris effectively synthesized the federal judiciary's "disarray about the applicability of [the] 'increased risk' theory in data privacy cases"[31] or only (inadvertently) highlighted the stark differences among the courts of appeals remains an open question. But, regardless, it is now binding law in the Second Circuit, and its adoption of guiding non-dispositive factors should provide a roadmap for the resolution of similar litigation going forward. Such future developments may also be influenced by the Supreme Court's highly anticipated upcoming decision in TransUnion LLC v. Ramirez,[32] argued on March 30, 2021, which addresses the closely related question of whether Article III or Federal Rule of Civil Procedure 23 permits a damages class action where the majority of the putative class did not suffer an actual injury. As always, Gibson Dunn remains available to help its clients navigate this evolving area of the law.

____________________

   [1]   --- F.3d ----, 2021 WL 1603808 (2d Cir. Apr. 26, 2021).

   [2]   Frank v. Gaos, 139 S. Ct. 1041, 1046 (2019).

   [3]   Thole v. U.S. Bank N.A., 140 S. Ct. 1615, 1618 (2020).

   [4]   Spokeo, Inc. v. Robins, 136 S. Ct. 1540, 1548 (2016) (quoting Lujan v. Defs. of Wildlife, 504 U.S. 555, 560 (1992)).

   [5]   Clapper v. Amnesty Int’l USA, 568 U.S. 398, 409–10 (2013).

   [6]   Susan B. Anthony List v. Driehaus, 573 U.S. 149, 158 (2014) (internal quotation marks omitted).

   [7]   Allison Grande, High Court FCRA Case Could Shake Up Class Action Standing, Law360.com (Mar. 26, 2021), available at https://www.law360.com/articles/1368905/high-court-fcra-case-could-shake-up-class-action-standing.

   [8]   Tsao v. Captiva MVP Rest. Partners, LLC, 986 F.3d 1332, 1340 (11th Cir. 2021); Beck v. McDonald, 848 F.3d 262, 273 (4th Cir. 2017).

   [9]   In re U.S. Off. of Pers. Mgmt. Data Sec. Breach Litig., 928 F.3d 42, 59 (D.C. Cir. 2019).

  [10]   In re Zappos.com, Inc., 888 F.3d 1020, 1027–28 (9th Cir. 2018) (emphasis added).

  [11]   Reilly v. Ceridian Corp., 664 F.3d 38, 45 (3d Cir. 2011).

  [12]   In re SuperValu, Inc., 870 F.3d 763, 771 (8th Cir. 2017).

  [13]   Tsao, 986 F.3d at 1344.

  [14]   Steven v. Carlos Lopez & Assocs., LLC, 422 F. Supp. 3d 801, 802 (S.D.N.Y. 2019).

  [15]   McMorris, 2021 WL 1603808, at *1.

  [16]   Id. at *1 (quoting Amended Complaint ¶¶ 6, 34).

  [17]   Steven, 422 F. Supp. 3d at 807.

  [18]   Id. at 803.

  [19]   Id. at 804.

  [20]   Id. at 804–07.

  [21]   Id. at 807 (quoting Clapper, 568 U.S. at 416).

  [22]   McMorris, 2021 WL 1603808, at *3.

  [23]   Id.

  [24]   Id. at *3 & n.2.

  [25]   Id.

  [26]   Id. at *5.

  [27]   Id.

  [28]   Id. at *6.

  [29]   Id.

  [30]   Id. at *6 n.7 (quoting SuperValu, 870 F.3d at 771).

  [31]   Katz v. Pershing, LLC, 672 F.3d 64, 80 (1st Cir. 2012).

  [32]   No. 20-297.


The following Gibson Dunn lawyers assisted in the preparation of this alert: Alexander H. Southwell, Akiva Shapiro, Jeremy S. Smith, Michael Nadler, and Eric Hornbeck.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, any of the following members of the firm’s Privacy, Cybersecurity and Data Innovation practice group, or the following authors:

Alexander H. Southwell – New York (+1 212-351-3981, asouthwell@gibsondunn.com)
Akiva Shapiro – New York (+1 212-351-3830, ashapiro@gibsondunn.com)

United States
Alexander H. Southwell – Co-Chair, PCDI Practice, New York (+1 212-351-3981, asouthwell@gibsondunn.com)
S. Ashlie Beringer – Co-Chair, PCDI Practice, Palo Alto (+1 650-849-5327, aberinger@gibsondunn.com)
Debra Wong Yang – Los Angeles (+1 213-229-7472, dwongyang@gibsondunn.com)
Matthew Benjamin – New York (+1 212-351-4079, mbenjamin@gibsondunn.com)
Ryan T. Bergsieker – Denver (+1 303-298-5774, rbergsieker@gibsondunn.com)
Howard S. Hogan – Washington, D.C. (+1 202-887-3640, hhogan@gibsondunn.com)
Joshua A. Jessen – Orange County/Palo Alto (+1 949-451-4114/+1 650-849-5375, jjessen@gibsondunn.com)
Kristin A. Linsley – San Francisco (+1 415-393-8395, klinsley@gibsondunn.com)
H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com)
Karl G. Nelson – Dallas (+1 214-698-3203, knelson@gibsondunn.com)
Ashley Rogers – Dallas (+1 214-698-3316, arogers@gibsondunn.com)
Deborah L. Stein – Los Angeles (+1 213-229-7164, dstein@gibsondunn.com)
Eric D. Vandevelde – Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com)
Benjamin B. Wagner – Palo Alto (+1 650-849-5395, bwagner@gibsondunn.com)
Michael Li-Ming Wong – San Francisco/Palo Alto (+1 415-393-8333/+1 650-849-5393, mwong@gibsondunn.com)
Cassandra L. Gaedt-Sheckter – Palo Alto (+1 650-849-5203, cgaedt-sheckter@gibsondunn.com)

Europe
Ahmed Baladi – Co-Chair, PCDI Practice, Paris (+33 (0)1 56 43 13 00, abaladi@gibsondunn.com)
James A. Cox – London (+44 (0) 20 7071 4250, jacox@gibsondunn.com)
Patrick Doris – London (+44 (0) 20 7071 4276, pdoris@gibsondunn.com)
Kai Gesing – Munich (+49 89 189 33-180, kgesing@gibsondunn.com)
Bernard Grinspan – Paris (+33 (0)1 56 43 13 00, bgrinspan@gibsondunn.com)
Penny Madden – London (+44 (0) 20 7071 4226, pmadden@gibsondunn.com)
Michael Walther – Munich (+49 89 189 33-180, mwalther@gibsondunn.com)
Alejandro Guerrero – Brussels (+32 2 554 7218, aguerrero@gibsondunn.com)
Vera Lukic – Paris (+33 (0)1 56 43 13 00, vlukic@gibsondunn.com)
Sarah Wazen – London (+44 (0) 20 7071 4203, swazen@gibsondunn.com)

Asia
Kelly Austin – Hong Kong (+852 2214 3788, kaustin@gibsondunn.com)
Connell O’Neill – Hong Kong (+852 2214 3812, coneill@gibsondunn.com)
Jai S. Pathak – Singapore (+65 6507 3683, jpathak@gibsondunn.com)

© 2021 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

Washington, D.C. partner David Fotouhi and associates Andrew Wilhelm and Amalia Reiss are the authors of “Water Rule Reinstatement Shows Specific Objections Are Key,” [PDF] published by Law360 on April 28, 2021.

Shareholder lawsuits are not only complicated to litigate; given the high financial stakes, they can also be among the most threatening actions a company and its directors and officers face. It has been twenty-six years since Congress enacted the Private Securities Litigation Reform Act of 1995, and since that time, private actions under the federal securities laws have continued to be filed at a steady pace. Over the last decade, the U.S. Supreme Court and state supreme courts have issued multiple decisions impacting the way shareholder actions are litigated and decided. This one-hour briefing will highlight recent developments and trends in this constantly evolving and complex area of the law.

We will discuss:

  • Shareholder actions filing and settlement trends, including COVID-19-related shareholder action trends
  • The potential impact on class certification in stockholder class actions from the U.S. Supreme Court’s pending decision in Goldman Sachs Group Inc. v. Arkansas Teacher Retirement System
  • The proliferation of parallel federal and state securities class action lawsuits since the U.S. Supreme Court’s 2018 ruling in Cyan v. Beaver County Employees Retirement Fund, and the effectiveness of companies’ response through the adoption of federal forum provisions in their corporate charters

View Slides (PDF)



PANELISTS:

Jennifer L. Conn is a partner in the New York office of Gibson, Dunn & Crutcher. Ms. Conn is a co-author of PLI’s Securities Litigation: A Practitioner’s Guide, Second Edition. She has extensive experience in a wide range of complex commercial litigation matters, including those involving securities, financial services, accounting malpractice, antitrust, contracts, insurance and information technology. She is also a member of Gibson Dunn’s General Commercial Litigation, Securities Litigation, Appellate, and Privacy, Cybersecurity and Data Innovation Practice Groups. In addition, Ms. Conn is an Adjunct Professor of Law at Columbia University School of Law, lecturing on securities litigation.

Alexander K. Mircheff is a partner in the Los Angeles office of Gibson, Dunn & Crutcher. Mr. Mircheff is a co-author of PLI’s Securities Litigation: A Practitioner’s Guide, Second Edition. His practice emphasizes securities and appellate litigation, and he has substantial experience representing issuers, officers, directors, and underwriters in class action and shareholder derivative matters. Mr. Mircheff has handled matters across a variety of industries, including biotech, financial services, accounting, real estate, entertainment, engineering, manufacturing, and consumer products. He is also a member of Gibson Dunn’s Securities Litigation, Appellate, Class Actions, Labor and Employment and Litigation Practice Groups.

Robert F. Serio is a partner in the New York office of Gibson, Dunn & Crutcher and a Co-Chair of Gibson Dunn’s Securities Litigation Practice Group. Mr. Serio is also a co-author of PLI’s Securities Litigation: A Practitioner’s Guide, Second Edition. His practice involves complex commercial and business litigation, with an emphasis on securities class actions, shareholder derivative litigation, SEC enforcement matters and corporate investigations. He is also a member of the Appellate, Class Actions, FCPA, and White Collar Defense and Investigations Practice Groups.


MCLE CREDIT INFORMATION:

This program has been approved for credit in accordance with the requirements of the New York State Continuing Legal Education Board for a maximum of 1.0 credit hour, of which 1.0 credit hour may be applied toward the areas of professional practice requirement.

This course is approved for transitional/non-transitional credit. Attorneys seeking New York credit must obtain an Affirmation Form prior to watching the archived version of this webcast. Please contact CLE@gibsondunn.com to request the MCLE form.

Gibson, Dunn & Crutcher LLP certifies that this activity has been approved for MCLE credit by the State Bar of California in the amount of 1.0 hour.

California attorneys may claim “self-study” credit for viewing the archived version of this webcast. No certificate of attendance is required for California “self-study” credit.

This update provides an overview and summary of key class action developments during the first quarter of 2021.

Part I discusses appellate decisions in the Ninth, Seventh, and Eleventh Circuits about predominance, numerosity, and administrative feasibility.

Part II covers two decisions, from the Fifth and Sixth Circuits, relating to the evidentiary standards applied at the class-certification stage.

Part III reports on a decision from the Eleventh Circuit discussing Article III standing and data breach class actions.

_____________________

I.   Class Certification Requirements: The Ninth Circuit on Predominance; the Seventh Circuit on Numerosity; and the Eleventh Circuit on Administrative Feasibility

The Ninth, Seventh, and Eleventh Circuits issued important opinions regarding predominance, numerosity, and administrative feasibility.

Predominance. In Olean Wholesale Grocery Cooperative, Inc. v. Bumble Bee Foods LLC, 993 F.3d 774, 2021 WL 1257845 (9th Cir. Apr. 6, 2021), a case involving an alleged tuna price-fixing conspiracy, the Ninth Circuit clarified a district court's obligations when assessing predominance, particularly where there is a dispute over whether all class members have suffered injury. Olean contains three key holdings:

  • First, it held for the first time that a district court must find that plaintiffs have established predominance by "a preponderance of the evidence," joining the rule followed by the First, Second, Third, Fifth, and Seventh Circuits. Id. at *4. The Ninth Circuit explicitly rejected the use of a "no reasonable juror" test outside of the wage-and-hour context. Id. at *5 & n.4.
  • Second, it held that plaintiffs in an antitrust price-fixing action can establish predominance using statistical evidence, but a district court must scrutinize—"with care and vigor"—the reliability of that evidence before certifying a class. Id. at *5. Specifically, if the parties offer competing expert evidence regarding the number of uninjured class members, the district court must "resolve the competing expert claims" in order to determine whether predominance has been established. Id. at *10.
  • Finally, the court held that although there is no "threshold" percentage of uninjured class members that would defeat predominance, "it must be de minimis," suggesting "that 5% to 6% constitutes the outer limits of a de minimis number"—and that, at the very least, 28% "would be out-of-bounds." Id. at *11. It also noted that the presence of uninjured class members presents "serious standing implications under Article III," but did not reach the issue because class certification failed under Rule 23(b)(3).  Id. at *10 n.7.

Numerosity. The Seventh Circuit affirmed a district court's determination that a proposed class of 37 seasonal employees failed to satisfy Rule 23's numerosity requirement in Anderson v. Weinert Enterprises, Inc., 986 F.3d 773 (7th Cir. 2021). In its decision, the court acknowledged that its cases "have recognized that 'a forty-member class is often regarded as sufficient to meet the numerosity requirement.'" Id. at 777 (citation omitted). But the court also noted that "[t]he key numerosity inquiry under Rule 23(a)(1) is not the number of class members alone but the practicability of joinder." Id. To that end, the Seventh Circuit determined that the district court did not abuse its discretion in finding that the factors it considered—including the class's geographic dispersion, overall size, and the small dollar amounts involved with each individual claim—all "weighed against certifying the class." Id.

Administrative feasibility. In Cherry v. Dometic Corp., 986 F.3d 1296 (11th Cir. 2021), the Eleventh Circuit held that an “administratively feasible” method to identify absent class members is not required to certify a class. It further held that denying certification does not divest a federal court of CAFA jurisdiction.

In the district court, plaintiffs had proposed a class of individuals who purchased allegedly defective refrigerators between 1997 and 2016. Defendant contended that the class representatives failed to show that the class was “ascertainable” because they “provided no evidence that their proposed method of identification would be workable.” Id. at 1300. The district court agreed with defendant and denied certification.

The Eleventh Circuit reversed. First, it held that administrative feasibility is not required for class certification, though it remains relevant to whether a proposed class may proceed under Rule 23(b)(3). Recognizing a circuit split on this issue (in fact, calling it “[o]ne of the most hotly contested issues in class action practice today”), the court reasoned that Rule 23(a) says nothing about administrative feasibility, which bears only on “how the district court can locate the remainder of the class after certification.” Id. at 1301, 1303. As such, it concluded, “administrative difficulties—whether in class-member identification or otherwise—do not alone doom a motion for certification.” Id. at 1304. Second, the court held that because CAFA jurisdiction does not depend on certification, a district court retains jurisdiction even after it denies certification in a CAFA action.  Id. at 1305.

II.   Several Circuits Clarify the Standards for Assessing the Admissibility of Evidence at the Class-Certification Stage

The Supreme Court has made clear that Rule 23 “does not set forth a mere pleading standard” and that a plaintiff “must be prepared to prove” that Rule 23’s requirements are “in fact” satisfied. Wal-Mart Stores, Inc. v. Dukes, 564 U.S. 338, 350 (2011) (emphasis in original). The need to assess actual evidence at the class-certification stage raises an important question: does such evidence need to be admissible? The Fifth and Sixth Circuits issued important decisions this past quarter that take divergent approaches to that question.

The Fifth Circuit in Prantil v. Arkema Inc., 986 F.3d 570 (5th Cir. 2021), held that the Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), standard for admissibility of expert evidence applies at the class-certification stage when scientific evidence is relevant to the certification decision. 986 F.3d at 575 & n.12. The Fifth Circuit explained that applying Daubert at the certification stage was a natural extension of the Supreme Court’s admonition that courts conduct a “rigorous analysis” of the proposed class’s conformity with Rule 23. Id. at 575. The Fifth Circuit thus joined the Third, Seventh, and Eleventh Circuits in holding that “the Daubert hurdle must be cleared when scientific evidence is relevant to the decision to certify,” and declaring that the district court’s “hesitation to apply Daubert’s reliability standard with full force” was error. Id. at 575–76. The Fifth Circuit’s holding is consistent with the Ninth Circuit’s recent admonition in Olean, where it made clear that district courts must resolve competing expert claims on the reliability of evidence to support class certification. 2021 WL 1257845, at *10.

The Sixth Circuit took a slightly different approach—this time, with nonexpert evidence—in Lyngaas v. Curaden AG, 992 F.3d 412 (6th Cir. 2021). Lyngaas held that nonexpert evidence need not be admissible in order to be considered at class certification. Id. at 428–29. There, the district court certified a class of individuals who purportedly received unsolicited faxes from defendants, relying on summary report logs that plaintiff offered as proof. The court rejected the defendants’ assertion that, because the logs had not been properly authenticated, plaintiff had failed to support his motion with admissible evidence. Id. at 418–19.

On appeal, the Sixth Circuit held as a matter of first impression that a district court is not required to “decide conclusively at the class-certification stage what evidence will ultimately be admissible at trial.” Id. at 428. In so holding, the court adopted the reasoning of the Eighth and Ninth Circuits to conclude that the “evidentiary proof” required at class certification “need not amount to admissible evidence, at least with respect to nonexpert evidence.” Id. at 428–29. Rather, because of the “differences between Rule 23, summary judgment, and trial,” parties should be afforded “greater evidentiary freedom at the class certification stage.” Id. at 429 (quoting Sali v. Corona Reg’l Med. Ctr., 909 F.3d 996, 1005 (9th Cir. 2018)). The district court’s reliance on non-authenticated summary-report logs therefore satisfied its obligation to conduct a “rigorous analysis” at class certification. Id. at 430.

 III.   The Eleventh Circuit Addresses Article III Standing in Data Breach Class Actions

As discussed in prior updates, federal courts at all levels have continued to ponder and opine on the role Article III standing plays in class actions. The Eleventh Circuit weighed in this quarter in Tsao v. Captiva MVP Restaurant Partners, LLC, 986 F.3d 1332 (11th Cir. 2021), holding that absent class members lacked standing following a data breach when the only injuries alleged were (a) a future risk of identity theft and (b) costs incurred to mitigate the risk of identity theft.

In Tsao, the plaintiff filed a putative data-breach class action after a restaurant chain announced a data breach involving its point-of-sale systems. The district court dismissed the case for lack of Article III standing, holding “[e]vidence of a data breach, without more, [is] insufficient to satisfy injury in fact under Article III standing.” Id. at 1337. The Eleventh Circuit affirmed, rejecting both of plaintiff’s arguments that the data breach established standing.

First, the Eleventh Circuit held that an elevated threat of identity theft is not sufficient to establish Article III standing. Citing the Supreme Court’s opinion in Clapper v. Amnesty International USA, 568 U.S. 398 (2013), the court explained that “a plaintiff alleging a threat of harm does not have Article III standing unless the hypothetical harm alleged is either ‘certainly impending’ or there is a ‘substantial risk’ of such harm.” Id. at 1339. An “increased risk of identity theft,” without more, was not sufficient to meet these requirements. Id. at 1344. Thus, it determined that absent “specific evidence of some misuse of class members’ data, a named plaintiff’s burden to plausibly plead factual allegations sufficient to show that the threatened harm of future identity theft was ‘certainly impending’—or that there was a ‘substantial risk’ of such harm—will be difficult to meet.” Id. (emphasis in original).

Second, the court rejected plaintiff’s argument that his efforts to mitigate the risk of identity theft—such as by cancelling his credit cards—could create an injury in fact. Where a hypothetical harm is not “certainly impending,” “a plaintiff cannot conjure standing by inflicting some direct harm on itself to mitigate a perceived risk.” Id. at 1339. Were it otherwise, “an enterprising plaintiff” could “secure a lower standard for Article III standing simply by making an expenditure based on a nonparanoid fear,” which is “not permit[ted]” by the law. Id. at 1345.

While the role of Article III standing in class actions continues to be a hotly debated issue, some additional clarity may be on its way. On March 30, 2021, the Supreme Court heard argument in TransUnion LLC v. Ramirez. As noted in our prior class action update, the question presented in this case is “whether either Article III or Rule 23 permits a damages class action where the vast majority of the class suffered no actual injury, let alone an injury anything like what the class representative suffered.” During argument, the Justices probed this issue further, asking whether the majority of the class members—whose information had never been disclosed to third parties—possessed Article III standing, and whether the representative’s claims were typical of those of the absent class members. A decision is expected by summer.

 

The following Gibson Dunn lawyers contributed to this client update: Christopher Chorba, Theane Evangelis, Kahn Scolnick, Bradley Hamburger, Lauren Blas, Wesley Sze, Emily Riff, Andrew Kasabian, and Tim Kolesk.

Gibson Dunn attorneys are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work in the firm’s Class Actions or Appellate and Constitutional Law practice groups, or any of the following lawyers:

Theodore J. Boutrous, Jr. – Co-Chair, Litigation Practice Group – Los Angeles (+1 213-229-7000, tboutrous@gibsondunn.com)
Christopher Chorba – Co-Chair, Class Actions Practice Group – Los Angeles (+1 213-229-7396, cchorba@gibsondunn.com)
Theane Evangelis – Co-Chair, Class Actions Practice Group – Los Angeles (+1 213-229-7726, tevangelis@gibsondunn.com)
Kahn A. Scolnick – Los Angeles (+1 213-229-7656, kscolnick@gibsondunn.com)
Bradley J. Hamburger – Los Angeles (+1 213-229-7658, bhamburger@gibsondunn.com)
Lauren M. Blas – Los Angeles (+1 213-229-7503, lblas@gibsondunn.com)

© 2021 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

Join Gibson Dunn panelists Michelle Kirschner and Matthew Nunan for a discussion of:

  • Recent FCA criminal prosecutions;
  • Lessons for board governance from the Aviva plc Final Notice;
  • Update on the Investment Firms Prudential Regime (IFPR) and remuneration;
  • Crystal ball gazing.

View Slides (PDF)



Michelle M. Kirschner: A partner in the London office. She advises a broad range of financial institutions, including investment managers, integrated investment banks, corporate finance boutiques, private fund managers and private wealth managers, at the most senior level.

Matthew Nunan: A partner in the London office. He specializes in financial services regulation and enforcement, investigations and white collar defense.

Martin Coombes: An associate in the London office and a member of the Financial Institutions group. He specializes in advising on UK and EU financial services regulation. His work covers a wide range of financial services and compliance issues, including advice on UK and EU regulatory developments, the regulatory aspects of corporate transactions and the ongoing compliance obligations of financial services firms.

Chris Hickey: An associate in the London office and a member of the firm’s Financial Institutions group. He advises on a range of UK and EU financial services regulatory matters. This includes the regulatory elements of corporate transactions, regulatory change management and ongoing compliance requirements to which firms are subject.


CPD TRAINING/MCLE CREDIT INFORMATION:

Gibson, Dunn & Crutcher LLP is authorized by the Solicitors Regulation Authority to provide in-house CPD training. This program is approved for CPD credit in the amount of 1.0 hour. Regulated by the Solicitors Regulation Authority (Number 324652).

This program has been approved for credit in accordance with the requirements of the New York State Continuing Legal Education Board for a maximum of 1.0 credit hour, of which 1.0 credit hour may be applied toward the areas of professional practice requirement.

This course is approved for transitional/non-transitional credit. Attorneys seeking New York credit must obtain an Affirmation Form prior to watching the archived version of this webcast. Please contact CLE@gibsondunn.com to request the MCLE form.

Gibson, Dunn & Crutcher LLP certifies that this activity has been approved for MCLE credit by the State Bar of California in the amount of 1.0 hour.

California attorneys may claim “self-study” credit for viewing the archived version of this webcast. No certificate of attendance is required for California “self-study” credit.

On his first day in office, President Biden signed an Executive Order that directed his administration to focus on addressing climate change, and issued a mandate that certain agencies immediately review a number of agency actions from the previous administration regarding greenhouse gas (GHG) emissions.[1] In keeping with that directive, the National Highway Traffic Safety Administration (NHTSA) and the U.S. Environmental Protection Agency (EPA) have formally announced their intent to reconsider the 2019 Safer Affordable Fuel-Efficient Vehicles Rule Part One: One National Program (SAFE 1),[2] which curtailed California’s ability to establish and enforce more stringent GHG emission standards and a Zero Emission Vehicle (ZEV) sales mandate.[3] These steps are consistent with the Biden administration’s efforts to move swiftly to reexamine—and possibly to revoke—environmental regulations promulgated under the Trump administration, and could serve as a prime example of the shifting regulatory landscape for industries subject to GHG regulations.

Through SAFE 1, EPA withdrew the portions of California’s waiver under Section 209(b)(1) of the Clean Air Act (CAA) that allowed California to establish its own GHG emission standards and establish a mandate for the sale of ZEVs.[4] EPA went on to interpret the CAA as preventing other states from adopting California’s GHG standards, as well.[5] In the same action, NHTSA similarly cut back on California’s independent regulatory powers by concluding that NHTSA’s authority to regulate fuel economy under the Energy Policy and Conservation Act (EPCA) preempted all state and local regulations “related to” fuel economy.[6]

On April 22 and April 23, 2021, respectively, NHTSA and EPA formally announced that they are reconsidering this action, and will be soliciting public comment on the agencies’ separate proposed paths forward.

NHTSA

On April 22, 2021, NHTSA issued a notice of proposed rulemaking that would repeal those portions of SAFE 1 (including the regulatory text and interpretive statements in the preamble) that found California’s GHG and ZEV mandates preempted by EPCA.[7] In particular, NHTSA proposes to conclude that it lacks legislative rulemaking authority to issue a preemption regulation. The notice does not take a position on the substance of EPCA preemption. Rather, NHTSA says merely that it seeks to restore a “clean slate”—i.e., to take no formal agency position on express preemption by EPCA.[8]

The notice goes on to state, however, that even if NHTSA had legislative rulemaking authority, it would nonetheless repeal SAFE 1 because “NHTSA now has significant doubts about the validity of its preemption analysis as applied to the specific state programs discussed in SAFE 1,”[9] including federalism concerns and concerns with the “categorical” manner of the analysis taken in SAFE 1.[10]

NHTSA’s notice of proposed rulemaking includes a comment period of 30 days after publication in the Federal Register, which is expected in the coming days.

EPA

One day after NHTSA issued its notice, EPA announced its parallel action on SAFE 1. In its notice, EPA takes no new positions on the Agency’s authority to withdraw a previously granted waiver or the statutory interpretation of CAA Section 209.[11] Rather, EPA’s notice merely summarizes its past positions and tees up these issues, along with issues raised in administrative petitions, for public comment as part of a reconsideration. The notice states that the Agency now believes that there are “significant issues” with the positions taken in SAFE 1 and that “there is merit in reviewing issues that petitioners have raised” in the reconsideration petitions submitted to EPA.[12] However, the notice does not propose to take any specific alternative interpretation.

Notably, EPA has not initiated a rulemaking proceeding, but rather describes this as an informal adjudication.[13] The Agency also states that for waiver decisions, “EPA traditionally publishes a notice of opportunity for public hearing and comment and then, after the comment period has closed, publishes a notice of its decision in the Federal Register. EPA believes it is appropriate to use the same procedures for reconsidering SAFE 1.”[14]

A virtual public hearing will take place on June 2, 2021, and EPA will accept comments until July 6, 2021.[15]

Conclusion

In announcing the reconsideration of SAFE 1, EPA Administrator Michael Regan stated, “Today, we are delivering on President Biden’s clear direction to tackle the climate crisis by taking a major step forward to restore state leadership and advance EPA’s greenhouse gas pollution reduction goals.”[16] Final agency actions resulting from these reconsiderations are still months away, but EPA’s and NHTSA’s announcements signal the agencies’ continuing focus on GHG emissions and revisiting regulations issued by the previous administration. As executive branch agencies continue to carry out the directives in President Biden’s Executive Orders related to climate change, the landscape for regulated industries will remain in flux.

___________________________

[1]      Exec. Order No. 13990, 86 Fed. Reg. 7037, 7041 (Jan. 25, 2021) (issued Jan. 20, 2021).

[2]      84 Fed. Reg. 51310 (Sept. 27, 2019). The SAFE 1 Rule was challenged in a series of consolidated cases before the U.S. Court of Appeals for the D.C. Circuit, where Gibson Dunn represented a coalition of automotive industry members as Intervenors in support of the rule. See Union of Concerned Scientists v. NHTSA, No. 19-1230 (D.C. Cir.). That matter has been held in abeyance at the request of the United States pending further review of the SAFE 1 rulemaking by EPA and NHTSA.

[3]      Press Release, U.S. Dep’t of Transp., NHTSA, NHTSA Advances Biden-Harris Administration’s Climate & Jobs Goals (Apr. 22, 2021), here; U.S. EPA, Notice of Reconsideration of a Previous Withdrawal of a Waiver for California’s Advanced Clean Car Program (Light-Duty Vehicle Greenhouse Gas Emission Standards and Zero Emission Vehicle Requirements), here.

[4]      84 Fed. Reg. at 51328.

[5]      Id. at 51350.

[6]      Id. at 51313.

[7]      U.S. Dep’t of Transp., NHTSA, Notice of Proposed Rulemaking (prepublication version), Corporate Average Fuel Economy (CAFE) Preemption (Apr. 22, 2021), here.

[8]      Id. at 12.

[9]      Id. at 13.

[10]    Id. at 37.

[11]    U.S. EPA, Notice of Reconsideration (prepublication version), California State Motor Vehicle Pollution Control Standards; Advanced Clean Car Program; Reconsideration of a Previous Withdrawal of a Waiver of Preemption; Opportunity for Public Hearing and Public Comment (Apr. 23, 2021), here.

[12]    Id. at 7.

[13]    Id. at 27.

[14]    Id.

[15]     Id. at 2.

[16]    Press Release, U.S. EPA, EPA Reconsiders Previous Administration’s Withdrawal of California’s Waiver to Enforce Greenhouse Gas Standards for Cars and Light Trucks (Apr. 26, 2021), here.


The following Gibson Dunn lawyers assisted in preparing this client update: Ray Ludwiszewski, Stacie Fletcher, David Fotouhi, Rachel Corley, and Veronica Till Goodson.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Environmental Litigation and Mass Tort practice group, or the following practice leaders and authors in Washington, D.C.:

Stacie B. Fletcher – Co-Chair (+1 202-887-3627, sfletcher@gibsondunn.com)
David Fotouhi (+1 202-955-8502, dfotouhi@gibsondunn.com)
Raymond B. Ludwiszewski (+1 202-955-8665, rludwiszewski@gibsondunn.com)
Daniel W. Nelson – Co-Chair (+1 202-887-3687, dnelson@gibsondunn.com)
Rachel Levick Corley (+1 202-887-3574, rcorley@gibsondunn.com)
Veronica Till Goodson (+1 202-887-3719, vtillgoodson@gibsondunn.com)

© 2021 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

Regulatory and policy developments during the first quarter of 2021 reflect a global tipping point toward serious regulation of artificial intelligence (“AI”) in the U.S. and European Union (“EU”), with far-reaching consequences for technology companies and government agencies.[1] In late April 2021, the EU released its long-anticipated draft regulation for the use of AI, banning some “unacceptable” uses altogether and mandating strict guardrails such as documentary “proof” of safety and human oversight to ensure AI technology is “trustworthy.”

While these efforts to aggressively police the use of AI will surprise no one who has followed policy developments over the past several years, the EU is no longer alone in pushing for tougher oversight at this juncture. As the United States’ national AI policy continues to take shape, it has thus far focused on ensuring international competitiveness and bolstering national security capabilities. However, as the states move ahead with regulations seeking accountability for unfair or biased algorithms, it also appears that federal regulators—spearheaded by the Federal Trade Commission (“FTC”)—are positioning themselves as enforcers in the field of algorithmic fairness and bias.

Our 1Q21 Artificial Intelligence and Automated Systems Legal Update focuses on these critical regulatory efforts, and also examines other key developments within the U.S. and Europe that may be of interest to domestic and international companies alike. As a result of several significant developments in April, and to avoid the need for multiple alerts, this 1Q21 update also includes a number of matters from April, the beginning of 2Q21.

________________________

Table of Contents

I.         U.S. NATIONAL POLICY & REGULATORY DEVELOPMENTS

A.         U.S. National AI Strategy
B.         National Security & Trade
C.         Algorithmic Accountability & Consumer Safety
D.         FDA’s Action Plan for AI Medical Devices
E.         Intellectual Property Updates
F.         U.S. Regulators Seek Input on Use of AI in Financial Services

II.         EU POLICY & REGULATORY DEVELOPMENTS

A.         EC Publishes Draft Legislation for EU-wide AI Regulation
B.         CAHAI Feasibility Study on AI Legal Standards
C.         EU Council Proposes ePrivacy Regulation
D.         Cybersecurity Report on the Use of AI in Autonomous Vehicles
E.         Proposed German Legislation on Autonomous Driving

________________________

I.  U.S. NATIONAL POLICY & REGULATORY DEVELOPMENTS

A.   U.S. National AI Strategy

The U.S. federal government’s national AI strategy continues to take shape, bridging the old and new administrations. Pursuant to the National AI Initiative Act of 2020, which was passed on January 1, 2021 as part of the National Defense Authorization Act of 2021 (“NDAA”),[2] the White House Office of Science and Technology Policy (“OSTP”) formally established the National AI Initiative Office (the “Office”) on January 12. The Office—one of several new federal offices mandated by the NDAA—will be responsible for overseeing and implementing a national AI strategy and acting as a central hub for coordination and collaboration by federal agencies and outside stakeholders across government, industry and academia in AI research and policymaking.[3]

Further, on January 27, President Biden signed a memorandum titled “Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking,” setting in motion a broad review of federal scientific integrity policies and directing agencies to bolster their efforts to support evidence-based decision making. The President directed OSTP to convene an interagency Task Force to carry out the review,[4] which must be completed within 120 days of the appointment of the Task Force members[5] and is expected to “generate important insights and best practices including transparency and accountability….”[6] On the same day, the President also signed an executive order to formally reconstitute the President’s Council of Advisors on Science and Technology.[7]

B.   National Security & Trade

1.   New House Subcommittee on Cyber, Innovative Technologies, and Information Systems

In February 2021, the House Armed Services Committee created a new Subcommittee on Cyber, Innovative Technologies, and Information Systems (“CITI”) out of the former Intelligence and Emerging Threats and Capabilities Subcommittee.[8] CITI will provide focused oversight on technology matters, including cybersecurity, IT policy, AI, electronic warfare and software acquisition, and shift non-technical topics, such as special operations and counter-proliferation of weapons of mass destruction, to other lawmakers. On March 12, the Subcommittee held a joint hearing with the House Committee on Oversight and Reform’s Subcommittee on National Security to receive testimony from the National Security Commission on Artificial Intelligence on the Commission’s final report (discussed in more detail below).[9]

2.   NSCAI Final Report

The National Defense Authorization Act of 2019 created a 15-member National Security Commission on Artificial Intelligence (“NSCAI”), and directed that the NSCAI “review and advise on the competitiveness of the United States in artificial intelligence, machine learning, and other associated technologies, including matters related to national security, defense, public-private partnerships, and investments.”[10] Over the past two years, NSCAI has issued multiple reports, including interim reports in November 2019 and October 2020, two additional quarterly memorandums, and a series of special reports in response to the COVID-19 pandemic.[11]

On March 1, 2021, the NSCAI submitted its Final Report to Congress and to the President. At the outset, the report makes an urgent call to action, warning that the U.S. government is presently not sufficiently organized or resourced to compete successfully with other nations with respect to emerging technologies, nor prepared to defend against AI-enabled threats or to rapidly adopt AI applications for national security purposes. Against that backdrop, the report outlines a strategy to get the United States “AI-ready” by 2025.[12] The Commission explains:

The United States should invest what it takes to maintain its innovation leadership, to responsibly use AI to defend free people and free societies, and to advance the frontiers of science for the benefit of all humanity. AI is going to reorganize the world.

America must lead the charge.

The more than 700-page report consists of two parts: Part I, “Defending America in the AI Era,” makes recommendations on how the U.S. government can responsibly develop and use AI technologies to address emerging national security threats, focusing on AI in warfare and the use of autonomous weapons, AI in intelligence gathering, and “upholding democratic values in AI.” The report’s recommendations identify specific steps to improve public transparency and protect privacy, civil liberties and civil rights when the government is deploying AI systems. NSCAI specifically endorses the use of tools to improve transparency and explainability: AI risk and impact assessments; audits and testing of AI systems; and mechanisms for providing due process and redress to individuals adversely affected by AI systems used in government. The report also recommends establishing governance and oversight policies for AI development, which should include “auditing and reporting requirements,” a review system for “high-risk” AI systems, and an appeals process for those affected.

Part II, “Winning the Technology Competition,” outlines urgent actions the government must take to promote AI innovation to improve national competitiveness, secure talent, and protect critical U.S. advantages, including IP rights. The report highlights how stringent patent eligibility requirements in U.S. courts, particularly with respect to computer-implemented and biotech-related inventions, and a lack of explicit legal protections for data have created uncertainty in IP protection for AI innovations, discouraging the pursuit of AI inventions and hindering innovation and collaboration. NSCAI also notes that China’s significant number of patent application filings has created a vast reservoir of “prior art” that has made the USPTO’s patent examination process increasingly difficult. As such, the report recommends that the President issue an executive order recognizing IP as a national priority, and that the government develop a comprehensive plan to reform IP policies to incentivize and protect AI and other emerging technologies.[13]

The NSCAI report may provide an opportunity for legislative reform that would spur investments in AI technologies and accelerate government adoption of AI technologies in national security. The report’s recommendations with respect to transparency and explainability may also have significant implications for potential oversight and regulation of AI in the private sector.

3.   Executive Order on U.S. Supply Chains

At the end of February, the Biden Administration issued a sweeping executive order launching a year-long, multi-agency review of several sectors, including several that will be critical to maintaining U.S. leadership in the development of AI and associated technologies. The purpose of the “America’s Supply Chains” Executive Order 14017, as President Biden puts it, is to “help address the vulnerabilities in our supply chains across . . . critical sectors of our economy so that the American people are prepared to withstand any crisis.” The Executive Order has put into motion 100-day reviews of four types of products by four different federal agencies: (1) semiconductors (Commerce); (2) high-capacity batteries, including electric-vehicle batteries (Energy); (3) critical minerals and strategic materials, such as rare earth elements (Defense); and (4) pharmaceuticals and their active ingredients (Health and Human Services). Executive Branch work to implement the E.O. is being coordinated by the Assistant to the President for National Security Affairs (APNSA) and the Assistant to the President for Economic Policy (APEP). By February 24, 2022, the Secretaries of Defense, Health and Human Services, Commerce and Homeland Security, Energy, Transportation, and Agriculture are to provide the President with broader and deeper assessments of the defense industrial base, the public health and biological preparedness industrial base, the information and communications industrial base, energy sector industrial base, transportation industrial base, and agricultural commodities and food products industrial base, respectively.

The Biden Administration’s prioritization of semiconductors and critical minerals and strategic materials in the 100-day review was expected; they are critical links in many supply chains and either already are or could be in short supply to the United States for a range of reasons. Both are of specific relevance to the raw materials and manufacturing supply chains that support AI development and applications. Especially in light of ongoing geopolitical and economic tensions between the United States and China, the potential inability of the U.S. to access critical minerals from China, together with many U.S. companies’ dependence on a small handful of advanced semiconductor manufacturers based in Austria, Germany, Japan, the Netherlands, South Korea, Taiwan and the United States for critical links in their supply chains, makes the advanced semiconductor supply chain especially prone to disruption.

Agency action has already begun with respect to the 100-day review of semiconductors. On March 11, the Commerce Department’s Bureau of Industry and Security (BIS) issued a notice seeking public comment on risks in the semiconductor manufacturing and advanced packaging supply chains. The notice requested information on a range of supply issues, including the critical and essential goods and materials required for the semiconductor manufacturing and advanced packaging supply chain, manufacturing capabilities, and the key skill sets and personnel necessary to sustain the U.S. semiconductor ecosystem. BIS also sought comments on how a failure to sustain the semiconductor supply chain might impact “key downstream capabilities,” including artificial intelligence applications. BIS received 34 comments by the April 5 due date from a range of private sector companies, trade associations, universities, and individuals. In addition to the written comments, BIS convened a virtual public forum on April 8, inviting speakers to provide further input on the questions presented in its notice.

Although the focus of the America’s Supply Chain EO is on executive agency reporting, we expect the EO to provide U.S. private and non-governmental sectors significant opportunities for agency engagement. To state the obvious, the U.S. does not have a centralized planned economy, and U.S. Executive Branch agencies often lack the visibility required to produce reports that accurately reflect the state of play in many international supply chains. Especially because identified gaps and weak links in strategic supply chains are likely to be a focus of targeted infrastructure spending, tax incentives, export controls, immigration reform, and other regulatory action during the Biden Administration, many of our clients could find it well worth the effort to participate in agency information gathering like BIS’s public comment process, either directly or indirectly through trade associations.

Scrutiny of semiconductor supply chains has not been limited to the Executive Branch, however, and a recent request from Congress illustrates how even individual transactions involving specific links in the semiconductor supply chain may become subject to regulatory action as Commerce and other U.S. agencies develop a deeper understanding of supply chain dynamics. On March 19, 2021, two Republican lawmakers sent a letter urging the Commerce Secretary to prevent ASML Holding NV, a Dutch technology firm, from supplying critical systems to Semiconductor Manufacturing International Corp. (“SMIC”), a Chinese chipmaker. Sen. Marco Rubio (R-FL) and Rep. Michael McCaul (R-TX) said that the U.S. should exercise its diplomatic leverage to weaken China’s foothold in the semiconductor industry. The lawmakers also asked Commerce Secretary Raimondo to add SMIC to the Commerce Department’s Entity List, which would limit SMIC’s ability to source materials, even those not manufactured in the United States. The two lawmakers proposed that a presumption of denial apply in the export licensing process to any China-facing export “capable of producing” chips smaller than 16 nanometers, which would broaden the scope of products subject to that presumption. The Commerce Secretary has not responded to the letter or issued any statement regarding it to date.

4.   Interim Final Rule “Securing the Information and Communications Technology and Services (“ICTS”) Supply Chain”

The Department of Commerce has also taken the next step in implementing another Executive Order, this time from the Trump Administration, focused on the ICTS supply chain. An Interim Final Rule implementing the EO became effective on March 22, 2021.[14] The ICTS EO is an effort to protect against threats posed by the use of hardware, software and services designed, developed, manufactured or supplied by companies owned by, controlled by, or subject to the direction or control of China and other “foreign adversary” countries, but it has drawn consternation from commentators since its issuance on May 15, 2019.

The Interim Final Rule implements the Secretary of Commerce’s new power to prohibit transactions that involve the acquisition, importation, transfer, installation, dealing in, or usage of certain ICTS.[15] Transactions subject to the Secretary of Commerce’s review and prohibition include those involving managed services, data transmission, software updates, repairs, or the platforming or data hosting of applications for consumer download. Any of these actions can be prohibited or made subject to licensing-driven mitigation when the services, equipment, or software is designed, developed, manufactured, or supplied by companies owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary, and poses an undue or unacceptable risk.[16]

Many different AI-related transactions could be impacted by the ICTS transaction review. Not only does the Interim Final Rule specifically include ICTS infrastructure that is integral to AI and machine learning technologies among the transactions it deems ICTS transactions, but it also includes other kinds of transactions that are necessary to support AI development or deployment, including certain software, hardware, or any other product or services integral to data hosting or computing services, and certain ICTS products, such as internet-enabled sensors, webcams, routers, modems, drones, or any other end-point surveillance or monitoring device, home networking device, or aerial system. Thus, companies in the U.S. seeking to store training data or use the processing power of cloud services to develop or host AI applications could see their access to China-based or China company-owned or controlled cloud service providers now subject to Department of Commerce licensing. Similarly, companies already deploying devices that make use of AI could find their ability to source cheap parts and components from foreign adversary companies limited by a transaction review.

C.   Algorithmic Accountability and Consumer Safety

Companies using algorithms, automated processes, and/or AI-enabled applications are now squarely on the radar of both federal and state regulators and lawmakers. In 2020, a number of draft federal bills and policy measures addressing algorithmic accountability and transparency had hinted at a sea change amid growing public awareness of AI’s potential to pose a risk to consumers, including by creating harmful bias. While no AI-specific federal legislation has been enacted to date, federal regulators, including the FTC, have now signaled that they will not wait to bring enforcement actions. Moreover, a steady increase in state privacy laws has placed increasing focus on governance of the biometric data utilized by facial recognition technologies. The past quarter saw a number of developments suggesting that companies using facial recognition technology may be subject to stricter regulation and enforcement, at both the federal and state levels, with respect to the use and retention of biometric identifiers extracted from facial images.[17]

1.   Algorithmic Fairness

a)   FTC Statement Announces Intent to Take Enforcement Action Against “Biased” Algorithms

On April 19, the FTC published a blog post, “Aiming for truth, fairness, and equity in your company’s use of AI,” announcing the Commission’s intent to bring enforcement actions related to “biased algorithms” under Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act.[18] Notably, the statement expressly notes that “the sale or use of – for example – racially biased algorithms” falls within the scope of the prohibition on unfair or deceptive business practices.

The FTC also provides some concrete guidance on “using AI truthfully, fairly, and equitably,” indicating that it expects companies to “do more good than harm” by auditing their training data and, if necessary, “limit[ing] where or how [they] use the model”; testing their algorithms for improper bias before and during deployment; employing transparency frameworks and independent standards; and being transparent with consumers and seeking appropriate consent to use consumer data. The guidance also warns companies against making statements to consumers that “overpromise” or misrepresent the capabilities of a product, noting that biased outcomes may be considered deceptive and lead to FTC enforcement actions.

This statement of intent comes on the heels of remarks by Acting FTC Chairwoman Rebecca Kelly Slaughter on February 10 at the Future of Privacy Forum, previewing enforcement priorities under the Biden Administration and specifically tying the FTC’s role in addressing systemic racism to the digital divide, exacerbated by COVID-19, AI and algorithmic decision-making, facial recognition technology, and use of location data from mobile apps.[19]  It also follows the FTC’s informal guidance last year outlining principles and best practices surrounding transparency, explainability, bias, and robust data models.[20]

The FTC’s stance has bipartisan support in the Senate, where FTC Commissioner Rohit Chopra provided a statement on April 20, noting that “Congress and the Commission must implement major changes when it comes to stopping repeat offenders” and that “since the Commission has shown it often lacks the will to enforce agency orders, Congress should allow victims and state attorneys general to seek injunctive relief in court to halt violations of FTC orders.”[21]

We recommend that companies developing or deploying automated decision-making adopt an “ethics by design” approach and review and strengthen internal governance, diligence and compliance policies. Companies should also stay abreast of developments concerning the FTC’s ability to seek restitution and monetary penalties[22] and impose obligations to delete algorithms, models or data (a potential new remedial obligation that is addressed in more detail below).

b)   Bipartisan U.S. Lawmakers Introduce Bill Banning Law Enforcement Agencies from Accessing Illegally Obtained User Data

On April 21, a bipartisan group of lawmakers introduced a bill banning law enforcement agencies from buying access to user data from “data brokers,” including companies that “illegitimately obtained” their records.[23] The bill, titled “The Fourth Amendment Is Not For Sale Act,” is sponsored by a bipartisan group including Sen. Ron Wyden (D-OR), Sen. Rand Paul (R-KY) and 18 other members of the Senate, and purports to close “major loopholes in federal privacy law.”[24] The bill would force law enforcement agencies to obtain a court order before accessing users’ personal information through third-party brokers—companies that aggregate and sell personal data like detailed user location—and would prevent law enforcement and intelligence agencies from buying data that was “obtained from a user’s account or device, or via deception, hacking, violations of a contract, privacy policy, or terms of service.”[25] Reps. Jerry Nadler (D-NY) and Zoe Lofgren (D-CA) introduced a companion bill in the House.

c)   Washington State Lawmakers Introduce a Bill to Regulate AI, S.B. 5116

On the heels of Washington’s landmark facial recognition bill (S.B. 6280) enacted last year,[26] state lawmakers and civil rights advocates proposed new rules to prohibit discrimination arising out of automated decision-making by public agencies.[27] The bill, which is sponsored by Sen. Bob Hasegawa (D-Beacon Hill), would establish new regulations for government departments that use “automated decision systems,” a category that includes any algorithm that analyzes data to make or support government decisions.[28] If enacted, the bill would prohibit public agencies in Washington state from using automated decision systems that discriminate against different groups or make final decisions that impact the constitutional or legal rights of a Washington resident. The bill also bans government agencies from using AI-enabled profiling in public spaces. Publicly available accountability reports ensuring that the technology is not discriminatory would be required before an agency could use an automated decision system. The bill has been referred to Ways & Means.

2.   Facial Recognition

a)   FTC Enforcement

In January 2021, the FTC announced its settlement with Everalbum, Inc. in relation to its “Ever App,” a photo and video storage app that used facial recognition technology to automatically sort and “tag” users’ photographs.[29] The FTC alleged that Everalbum made misrepresentations to consumers about its use of facial recognition technology and its retention of the photos and videos of users who deactivated their accounts in violation of Section 5(a) of the FTC Act. Pursuant to the settlement agreement, Everalbum must delete models and algorithms that it developed using users’ uploaded photos and videos and obtain express consent from its users prior to applying facial recognition technology, underscoring the emergence of deletion as a potential enforcement measure. A requirement to delete data, models and algorithms developed by using data collected without express consent could represent a significant remedial obligation with broader implications for AI developers.

Signaling the potential for increasing regulation and enforcement in this area, FTC Commissioner Rohit Chopra issued an accompanying statement describing the settlement as a “course correction,” commenting that facial recognition technology is “fundamentally flawed and reinforces harmful biases” while highlighting the importance of “efforts to enact moratoria or otherwise severely restrict its use.” However, the Commissioner also cautioned against “broad federal preemption” on data protection and noted that the authority to regulate data rights should remain at the state level.[30] We will carefully monitor any further enforcement action by the FTC (and other regulators), and recommend that companies developing or using facial recognition technologies seek specific legal advice with respect to consent requirements around biometric data, as well as adopt robust AI diligence and risk-assessment processes for third-party AI applications.

b)   Virginia Passes Ban on Law Enforcement Use of Facial Recognition Technology, H.B. 2031

The legislation, which won broad bipartisan support, prohibits all local law enforcement agencies and campus police departments from purchasing or using facial recognition technology unless it is expressly authorized by the state legislature.[31] The law will take effect on July 1, 2021. Virginia joins California, as well as numerous cities across the U.S., in restricting the use of facial recognition technology by law enforcement.[32]

c)   BIPA

i.   Litigation

On March 15, 2021, Judge James L. Robart of the U.S. District Court for the Western District of Washington declined to dismiss two putative class action suits accusing two technology companies of violating Illinois residents’ privacy rights under the Illinois Biometric Information Privacy Act (“BIPA”).[33] The nearly identical complaints alleged that the companies violated BIPA by using a data set compiled by IBM containing geometric scans of their faces without their permission. The court found that plaintiffs’ claims could proceed under Sections 15(b) and 15(c) of BIPA.

On March 16, 2021, Illinois District Judge Sara L. Ellis dismissed proposed class claims against Clarifai, Inc., a facial recognition software maker, under BIPA.[34] The complaint alleged that Clarifai was harvesting facial data from OkCupid dating profile photos without obtaining consent from users or making disclosures required under BIPA. The court found that the plaintiff failed to allege sufficient contacts to show that Clarifai directly targeted Illinois, and thus failed to establish personal jurisdiction.

ii.   Illinois Bill Seeks to Limit BIPA

On March 22, the Illinois state legislature sent proposed amendments to BIPA (H.B. 559) to the chamber floor.[35] The draft bill contains provisions that would impose significant limitations on the scope and impact of BIPA, including a 30-day cure period, a one-year deadline to sue, and a proposal to replace statutory damages with actual damages.[36] BIPA suits have proliferated after the Illinois Supreme Court and some federal courts allowed plaintiffs to sue based on statutory violations.

D.   FDA’s Action Plan for AI Medical Devices

On January 12, 2021, the U.S. Food and Drug Administration (“FDA”) released the agency’s first “Artificial Intelligence/Machine Learning (“AI/ML”)-Based Software as a Medical Device (SaMD) Action Plan,” which describes a multi-pronged approach to advance the FDA’s oversight of AI/ML-based medical software.[37] The AI/ML Action Plan is a response to stakeholder feedback received in relation to the April 2019 discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD),” which described the foundation for a potential approach to premarket review for AI and ML software modifications.[38]  For a detailed analysis of the discussion paper and proposed regulatory approach, please see our previous 2Q19 Legal Update.[39]

The FDA’s “Action Plan” outlines five next steps:

  1. Further developing the proposed regulatory framework, including through issuance of draft guidance on a predetermined change control plan (for software’s learning over time). The SaMD Pre-Specifications (“SPS”) describe “what” aspects the manufacturer intends to change through learning, and the Algorithm Change Protocol (“ACP”) explains “how” the algorithm will learn and change while remaining safe and effective. The FDA intends to issue draft guidance that includes a proposal of what should be included in an SPS and ACP to support the safety and effectiveness of AI/ML SaMD algorithms;
  2. Supporting the development of good machine learning practices to evaluate and improve machine learning algorithms;
  3. Fostering a patient-centered approach, including device transparency to users. Promoting transparency is a key aspect of a patient-centered approach, and numerous stakeholders have highlighted the unique challenges of labeling for AI/ML-based devices and the need for manufacturers to clearly describe, for example, the data that were used to train the algorithm or “the role intended to be served by its output.”[40] The FDA intends to identify the types of information a manufacturer should include in the labeling of AI/ML-based medical devices to support transparency to users;
  4. Developing methods to evaluate and improve machine learning algorithms, which includes methods for the identification and elimination of bias; and
  5. Advancing real-world performance monitoring pilots on a voluntary basis.

The FDA welcomes continued feedback in this area and intends to hold public workshops to share learnings and elicit additional input from stakeholders. While the FDA has not yet expressed a substantive view on the specific contents of a draft regulation, it seems clear that any framework will involve a commitment from manufacturers on transparency and real-world performance monitoring for AI and machine learning-based software as a medical device, as well as periodic updates to the FDA on the changes implemented as part of approved pre-specifications and the ACP. Depending on the scope of the draft regulatory framework, some of the proposed requirements could be highly significant and onerous: for example, requiring a manufacturer to include in the labeling of AI/ML-based devices a “description” of training data. We will continue to monitor developments, and expect that companies operating in this space will want to have a voice in the process leading up to the regulations, particularly with respect to implementing transparency requirements.

E.   Intellectual Property Updates

1.   USPTO Files Motion for Summary Judgment Arguing that AI Machines Can’t Invent

On February 24, the U.S. Patent and Trademark Office (“USPTO”) filed a motion for summary judgment in Virginia federal court with respect to a lawsuit challenging its finding that patents cannot cover inventions by AI machines, arguing that the Patent Act defines an inventor as an “individual” who must be human.[41]

The plaintiff, Stephen Thaler, is a physicist who created the AI, called DABUS, behind potential patents for a beverage container and a flashing beacon for search-and-rescue missions. The USPTO had denied the patent applications as incomplete because they were missing an inventor’s name, and it refused a petition to reconsider in April 2020, noting that the courts and the law have made clear that only humans can be inventors. Thaler then sued the USPTO in August 2020, alleging it violated the Administrative Procedure Act when it added a patentability requirement that is “contrary to existing law and at odds with the policy underlying the patent system,” and that by refusing to let AI machines be inventors, the agency is undermining the patent system.

In January 2021, Thaler filed a motion for summary judgment, arguing that the USPTO’s finding was arbitrary, capricious, an abuse of discretion and not supported by the law or substantial evidence, and that all of the cases the USPTO cites to support its finding involve inventions that courts concluded humans could do, but not creations that only a machine could invent. At a motion hearing on April 6, U.S. District Judge Leonie Brinkema did not make a bench ruling, but indicated that current legislation restricts the definition of “inventor” in the Patent Act to humans.[42] As previously reported, the European Patent Office has also denied Thaler’s patent applications with respect to DABUS.[43]

2.   Google LLC v. Oracle America, Inc. — Supreme Court Rules for Google in Oracle Copyright Dispute

On April 5, the U.S. Supreme Court ruled in favor of Google in a multibillion-dollar copyright lawsuit filed by Oracle, holding that Google did not infringe Oracle’s copyrights under the fair use doctrine when it used material from Oracle’s APIs to build its Android smartphone platform.[44] Notably, the Court did not rule on whether the declaring code of Oracle’s APIs could be copyrighted, but held that, assuming for argument’s sake the material was copyrightable, “the copying here at issue nonetheless constituted a fair use.”[45] Specifically, the Court stated that “where Google reimplemented a user interface, taking only what was needed to allow users to put their accrued talents to work in a new and transformative program, Google’s copying of the Sun Java API was a fair use of that material as a matter of law.”[46] The Court focused on Google’s transformative use of the Sun Java API and distinguished declaring code from other types of computer code in finding that all four guiding factors set forth in the Copyright Act’s fair use provision weighed in favor of fair use.[47]

While the ruling appears to turn on this particular case, it will likely have repercussions for AI and platform creators.[48]  The Court’s application of fair use could offer an avenue for companies to argue for the copying of organizational labels without a license. Notably, the Court stated that commercial use does not necessarily tip the scales against fair use, particularly when the use of the copied material is transformative. This could assist companies looking to use content to train their algorithms at a lower cost, putting aside potential privacy considerations (such as under BIPA). Meanwhile, companies may also find it more challenging to govern and oversee competitive programs that use their API code for compatibility with their platforms.

F.   U.S. Regulators Seek Input on AI Use in Financial Services

Five federal agencies, including the Federal Reserve Board and the Consumer Financial Protection Bureau, are seeking public input on financial institutions’ use of AI. The notice “Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, including Machine Learning” (“RFI”) was published in the Federal Register on March 31.[49]

The federal agencies are aiming to better understand the use of AI and its governance, risk management and controls as well as challenges in developing, implementing and managing the technology. The RFI also solicits respondents’ views on “the use of AI in financial services to assist in determining whether any clarifications from the agencies would be helpful for financial institutions’ use of AI in a safe and sound manner and in compliance with applicable laws and regulations, including those related to consumer protection.” Financial institutions, trade associations, consumer groups and other stakeholders have until June 1, 2021 to submit their comments.

II.   EU POLICY & REGULATORY DEVELOPMENTS

A.   EC Publishes Draft Legislation for EU-wide AI Regulation

On April 21, 2021, the European Commission (“EC”) presented its much anticipated comprehensive draft of an AI Regulation (also referred to as the “Artificial Intelligence Act”).[50] As highlighted in our client alert “EU Proposal on Artificial Intelligence Regulation Released” and in our “3Q20 Artificial Intelligence and Automated Systems Legal Update”, the draft comes on the heels of a variety of publications and policy efforts in the field of AI with the aim of placing the EU at the forefront of both AI regulation and innovation. The proposed Artificial Intelligence Act delivers on the EC president’s promise to put forward legislation for a coordinated European approach on the human and ethical implications of AI[51] and would be applicable and binding in all 27 EU Member States.

In order to “achieve the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology,”[52] the EC generally opts for a risk-based approach rather than a blanket technology ban. However, the Artificial Intelligence Act also contains outright prohibitions of certain “AI practices” and some very far-reaching provisions aimed at “high-risk AI systems.” These provisions, with their broad extraterritorial reach and hefty penalties, are somewhat reminiscent of the regulatory approach under the EU’s General Data Protection Regulation (“GDPR”) and will likely give rise to controversy and debate in the upcoming legislative procedure.

As the EC writes in its explanatory memorandum to the Artificial Intelligence Act, the proposed framework covers the following specific objectives:

  • Ensuring that AI systems available in the EU are safe and respect EU laws and values;
  • Ensuring legal certainty to facilitate investment and innovation in AI;
  • Enhancing governance and effective enforcement of existing laws applicable to AI (such as product safety legislation); and
  • Facilitating the development of a single market for AI and preventing market fragmentation within the EU.

1.   Summary of Key Provisions

The most relevant and noteworthy provisions contained in the Artificial Intelligence Act include:

  1. Scope of the Artificial Intelligence Act – The proposed Artificial Intelligence Act covers not only “providers”[53] based in the EU, but also “providers” of AI systems based in third countries that place AI systems on the market or put them into service in the EU, as well as “users”[54] of AI systems located within the EU.[55] The proposed scope goes even further, however, also reaching “providers” and “users” of AI systems located in third countries where the output produced by the AI system is used in the EU.[56] The EC does not provide concrete examples for these use cases, but explains that the logic behind this is to prevent circumvention of the Artificial Intelligence Act by transferring data lawfully collected in the EU to a third country and subjecting it to an AI system located there.[57] Conversely, the Artificial Intelligence Act would not apply to AI systems developed or used exclusively for military purposes.[58]
  2. Definition of an AI system – While the Artificial Intelligence Act provides a definition of an AI system[59], the EC emphasizes that the definition aims to be as technology neutral and future-proof as possible. Thus, the definition can and likely will be adapted by the EC as needed.
  3. Prohibition of certain AI practices – Following a risk-based approach that differentiates between uses of AI creating (i) an unacceptable risk, (ii) a high risk and (iii) low or minimal risk, the EC proposes to enact a strict ban on AI systems considered to create an “unacceptable risk.” The Artificial Intelligence Act lists four types of AI systems bearing an unacceptable risk, including AI systems that deploy “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm.”[60] Since neither the draft legislation itself nor the accompanying materials offer any further definitions or explanations of key terms, the exact application and impact of this prohibition in practice remain unclear. Further prohibited practices include the use of “social scoring” AI systems by public authorities[61] and the deployment of “real-time remote biometric identification systems” in publicly available spaces for the purpose of law enforcement (unless certain narrowly defined exceptions apply).[62]
  4. Mandatory requirements for “high-risk AI systems” – The Artificial Intelligence Act contains specific requirements for so-called “high-risk AI systems”. AI systems are considered “high-risk” if they are either (i) intended to be used as a safety component of a product (embedded AI), or are themselves a product covered by certain EU product safety legislation (e.g. medical devices, personal protective equipment, toys or machinery),[63] or (ii) listed in an enumerative catalogue,[64] which may be expanded by the EC through the application of a specific risk assessment methodology. The latter includes, inter alia, biometric identification and categorization of natural persons, management and operation of critical infrastructure (e.g. supply of water, gas, heating and electricity), employment (e.g. AI systems for screening applications), access to and enjoyment of essential private services and public services and benefits (e.g. AI systems for evaluating credit scores), law enforcement (e.g. predictive AI systems intended for the evaluation of the occurrence or reoccurrence of a criminal offence) and the administration of justice and democratic processes (e.g. AI systems for researching and interpreting facts and the law). Conspicuously, the health-care sector is missing from that list. General requirements for the development and deployment of such “high-risk AI systems” include the establishment and maintenance of a risk management system, the use of appropriate training, validation and testing data in the development phase, the achievement of an appropriate level of accuracy, robustness and cybersecurity in light of the intended use, the drawing up of specific technical documentation, the design of logging capabilities within the AI system, the provision of comprehensive instructions for use, and the enablement of human oversight of the AI system.[65] Notably, Article 10 of the draft regulation requires that training, validation and testing data sets be “relevant, representative, free of errors and complete” and take into account the characteristics or elements particular to the specific geographical, behavioral or functional setting of the system’s intended use; the draft regulation provides for higher penalties for non-compliance with these data and data governance requirements than for other infringements.[66] Providers of “high-risk AI systems” also have specific obligations, including ensuring that high-risk AI systems undergo a “conformity assessment procedure” before being placed on the market or put into service.[67] This “conformity assessment procedure” is modelled after the procedures required before introducing other products, such as medical devices, into the EU market. For certain “high-risk AI systems”, the provider only needs to perform internal controls; however, for AI systems that enable biometric identification and categorization of natural persons, the provider must involve an outside entity (a so-called “notified body”) in the assessment procedure.[68] For “high-risk AI systems” covered by existing EU product safety legislation, the conformity assessment procedures already applicable should be followed. Further, providers of “high-risk AI systems” must register the system in a publicly available EU database provided for under the Act.[69]
  5. Post-market monitoring obligations for “high-risk AI systems” – In addition to the provisions relating to the development and placing on the market of “high-risk AI systems”, the proposed Artificial Intelligence Act provides for mandatory post-market monitoring obligations for providers of such systems.[70] These include obligations to report any serious incident, as well as any malfunctioning of the AI system that would constitute a breach of obligations under EU laws intended to protect fundamental rights. A “high-risk AI system” also has to be withdrawn or recalled if the provider considers that a system already placed on the market or put into service violates the Artificial Intelligence Act.
  6. Provisions relating to “non-high-risk AI systems” – AI systems that qualify neither as prohibited nor as “high-risk AI systems” are not subject to any specific requirements. In order to facilitate the development of “trustworthy AI”, the EC stipulates that providers of “non-high-risk AI systems” should be encouraged to develop codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to “high-risk AI systems”.[71] However, AI systems intended to interact with natural persons must be designed and developed in such a way that users are informed they are interacting with an AI system, unless this is “obvious from the circumstances and the context of use.”[72] The EC also proposes a disclosure obligation for so-called “deep fakes”.[73] In addition, the EC points out that such “non-high-risk AI systems” nevertheless have to comply with general product safety requirements.[74]
  7. Enforcement and penalties for non-compliance – The draft Artificial Intelligence Act creates a governance and enforcement structure under which each EU Member State would designate one or more competent authorities at the national level, as well as a top-level national supervisory authority. At the EU level, the EC proposes establishing a European Artificial Intelligence Board, which would be responsible for providing advice and assistance to the EC. Finally, the proposal also includes various enforcement instruments and hefty penalties for non-compliance. In cases of non-compliance with the prohibitions on specific AI practices under Article 5 or with the AI system requirements relating to data and data governance under Article 10, companies would face administrative fines of up to EUR 30 million (approx. $36 million) or up to 6% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher.[75] Cases of non-compliance with the remaining requirements and obligations under the draft regulation would subject the company to administrative fines of up to EUR 20 million (approx. $24 million) or up to 4% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher.[76] Additionally, the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request may result in administrative fines of up to EUR 10 million (approx. $12 million) or up to 2% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher.[77] (A simple illustration of the “whichever is higher” mechanics follows this list.)
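
To illustrate the “whichever is higher” mechanics with hypothetical figures: under the top tier, a company with a total worldwide annual turnover of EUR 1 billion that breached Article 5 could face a fine of up to EUR 60 million, because 6% of EUR 1 billion (EUR 60 million) exceeds the EUR 30 million fixed amount; a company with a turnover of EUR 200 million would instead face a maximum of EUR 30 million, because 6% of its turnover (EUR 12 million) falls below that fixed amount. In other words, the fixed amount sets the ceiling for smaller companies, while the turnover-based percentage governs for larger ones.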

2.   Comparison with U.S. Legislative Proposals

Although the draft EC regulation is more comprehensive than existing legal frameworks that govern AI, there are marked similarities to recent legislation introduced in the U.S. For example, as noted above, a growing number of legislative bodies in the U.S. have passed laws restricting or banning the use of facial recognition technology, sharing the EC’s concerns regarding remote biometric identification systems, especially in the context of law enforcement.[78]

Additionally, like the draft regulation, state legislation relating to AI systems has called for increased transparency and stronger oversight. For example, the California Privacy Rights Act of 2020 requires that responses to access requests regarding automated decision-making technology “include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer”, similar to the technical documentation requirements in the draft regulation, which require providers to report “the general logic of the AI system and of the algorithms” along with the “main classification choices” with regard to the persons on which the system is to be used.[79] Also, like the supervisory authority access requirements in the Artificial Intelligence Act, Washington state’s “Act Relating to the use of facial recognition services” requires that providers of facial recognition services to state or local agencies “make available an application programming interface or other technical capability, chosen by the provider, to enable legitimate, independent, and reasonable tests of those facial recognition services for accuracy and unfair performance differences across distinct subpopulations.”[80]

Notably, the transparency and technical documentation requirements in the EC’s Artificial Intelligence Act are far more extensive than those outlined in existing legislation within the U.S. Specifically, under the EC regulation, authorities would be granted full access to the AI system provider’s training, validation and testing datasets and, upon reasoned request, to the source code itself.[81] While a court in New Jersey recently granted a criminal defendant access to the source code of probabilistic genotyping software used to match the defendant’s DNA to a crime scene, access to source code is generally not required by legislation or demanded by courts with respect to automated decisions in the United States.[82] The extensive required disclosures may cause concern over intellectual property protection; although information and data would be protected by confidentiality requirements, the Commission and Member States would be permitted to exchange confidential information with regulatory authorities of third countries where confidentiality agreements are in place.[83] Currently, United States legislative and regulatory bodies are not asking for the same degree of transparency, but they are still taking steps to curb the potential discriminatory impact of AI systems, as discussed above.[84]

3.   Next Steps

While it is uncertain when and in what form the Artificial Intelligence Act will come into force, the EC has set the tone for upcoming policy debates with this ambitious new proposal. Although certain provisions and obligations may not be carried over to the final legislation, it is worth noting that the EU Parliament has already urged the EC to prioritize ethical principles in its regulatory framework.[85] We therefore expect that the proposed rules will not be significantly diluted, and could even be further tightened, as some advocacy groups have called for.[86] Companies developing or using AI systems, whether based in the EU or abroad, should keep a close eye on further developments with regard to the Artificial Intelligence Act, and in particular on the scope of the prohibited “unacceptable” and “high-risk” use cases, which, as drafted, could potentially apply to a very wide range of products and applications.

We stand ready to assist clients in navigating the potential issues raised by the proposed EU regulations as we continue to closely monitor developments in that regard, as well as public reaction. We can and will help advise any clients desiring to have a voice in the process.

B.   CoE Ad Hoc Committee Adopts Feasibility Study on an AI Legal Framework

On December 17, 2020, the Ad Hoc Committee on Artificial Intelligence (“CAHAI”) of the Council of Europe (the “CoE”), adopted a feasibility study on a legal framework on AI design, development and application based on the CoE’s standards.[87] CAHAI was mandated by the CoE in 2019 to examine, on the basis of broad multi-stakeholder consultations, the feasibility of such a legal framework and take into account the CoE’s relevant standards in the fields of human rights, democracy and the rule of law as well as the relevant existing universal and regional international legal instruments.

At the outset, CAHAI points out that there is no single definition of AI and that the term “AI” is used as a blanket term for “various computer applications based on different techniques, which exhibit capabilities commonly and currently associated with human intelligence.” Accordingly, CAHAI highlights the need to approach AI systems in a technologically neutral way.

CAHAI expressly recognizes the opportunities and benefits arising from AI—such as contributing to achieving the UN Sustainable Development Goals and helping to mitigate the effect of climate change—but also addresses the potential challenges of certain AI use cases, such as the use of AI systems to predict recidivism and AI-based tracking techniques, as well as the risks arising out of biased training data.  In light of these concerns, CAHAI recommends that a potential CoE legal framework on AI should pursue a risk-based approach that targets the specific application context. In its concluding comments, CAHAI notes that “no international legal instrument specifically tailored to the challenges posed by AI exists, and that there are gaps in the current level of protection provided by existing international and national instruments.”

On March 30, 2021, the CoE announced that CAHAI is now preparing a legal framework on AI.[88] CAHAI also launched a multi-stakeholder consultation, open until April 29, 2021.[89]

C.   EU Council Proposes ePrivacy Regulation

On February 10, 2021, the Council of the European Union (the “EU Council”), the institution representing EU Member States’ governments, agreed on a negotiating mandate with regard to a revision of the ePrivacy Directive[90] and published an updated proposal for a new ePrivacy Regulation.[91] Unlike the current ePrivacy Directive, the new ePrivacy Regulation would not have to be implemented into national law, but would instead apply directly in all EU Member States without transposition.

The ePrivacy Directive[92] contains rules related to the privacy and confidentiality in connection with the use of electronic communications services. However, an update of these rules is seen as critical given the sweeping and rapid technological advancement that has taken place since it was adopted in 2002. The new ePrivacy Regulation, which would repeal and replace the ePrivacy Directive, has been under discussion for several years now.[93]

Pursuant to the EU Council’s proposal, the ePrivacy Regulation will also cover machine-to-machine data transmitted via a public network, which might create restrictions on the use of data by companies developing AI-based products and other data-driven technologies. As a general rule, all electronic communications data will be considered confidential, except when processing or other usage is expressly permitted by the ePrivacy Regulation. Similar to the GDPR, the ePrivacy Regulation would also apply to processing that takes place outside the EU and/or to service providers established outside the EU, provided that the end users of the electronic communications services whose data is being processed are located in the EU.

However, unlike the GDPR, the ePrivacy Regulation would cover all communications content transmitted using publicly available electronic communications services and networks, and not only personal data. Further, metadata (such as the location and time of receipt of a communication) also falls within the scope of the ePrivacy Regulation.

It is expected that the draft proposal will undergo further changes during negotiations with the European Parliament. Therefore, it remains to be seen whether the particular needs of highly innovative data-driven technologies will be taken into account—by creating clear and unambiguous legal grounds other than user consent for processing of communications content and metadata for the purpose of developing, improving and offering AI-based products and applications. If the negotiations between the EU Council and the EU Parliament proceed without any further delays, the new ePrivacy Regulation could enter into force in 2023, at the earliest.

D.   Cybersecurity Report on the Use of AI in Autonomous Vehicles

On February 11, 2021, the European Union Agency for Cybersecurity (“ENISA”) and the European Commission’s Joint Research Centre (“JRC”) published a joint report on cybersecurity risks connected to the use of AI in autonomous vehicles and provided recommendations for mitigating them (the “Cybersecurity Report”).[94]

The Cybersecurity Report emphasizes the vulnerability of AI systems in autonomous vehicles to intentional attacks that aim to interfere with the AI system. Even simple measures, such as paint markings on the road, could interfere with AI-based navigation tools and have a significant impact on safety and reliability.

In order to prevent or mitigate such risks, the Cybersecurity Report recommends several measures, such as the systematic security validation of AI models and data early on in the development process of AI systems used in autonomous vehicles. Further, the automotive industry should adopt a holistic “security by design” approach, creating an “AI cybersecurity culture” across the production ecosystem. The Cybersecurity Report identifies the absence of sufficient security knowledge and expertise among developers and system designers as a major roadblock towards cybersecurity awareness in the industry.

E.   Proposed German Legislation on Autonomous Driving

On March 15, 2021, the German Federal Government (“Bundesregierung”) submitted a draft law on fully automated driving (SAE level 4) to the German Parliament (“Bundestag”) for legislative debate.[95] The draft law aims to establish uniform conditions throughout Germany for testing new technologies, such as driverless SAE level 4 cars. Pursuant to the draft law, autonomous vehicles will be permitted to drive in regular operation without a driver being physically present, limited for the time being to certain locally defined operating areas. If the draft law is passed by the Bundestag, Germany expects to be the first country in the world to permit fully automated vehicles in regular operation across the country by 2022 (subject to local operating areas to be defined by the respective German state authorities). As examples of fields of operation for such automated vehicles, the Bundesregierung mentions shuttle services, Hub2Hub and Dual-Mode-Vehicles, such as “automated valet parking.” Currently, autonomous vehicles can only be operated in Germany under special permits granted by state authorities.

The draft law also includes framework provisions on liability, which reflect the status quo under German liability law: if a person is injured or property is damaged in the operation of a car, the motor insurance of the car’s owner compensates for the damage. However, the draft law also introduces a new concept, “technical supervision”: the ability to deactivate the autonomous vehicle during operation and to enable driving maneuvers for it. In principle, the owner of the car is responsible for “technical supervision,” but may entrust another person with the performance of these tasks. Nonetheless, the owner remains liable for the conduct of the person entrusted with “technical supervision.”

There remains disagreement within the Bundesregierung regarding the provisions on data protection contained in the draft law.[96] Open items will be discussed in the upcoming legislative procedure. The Bundesregierung is aiming to adopt the new law before the parliamentary summer break (and before the German Federal Elections in September 2021).[97]

__________________________

   [1]   This Legal Update focuses on recent U.S. and EU regulatory efforts, but we note that there are numerous other examples of increasingly stringent worldwide regulation of algorithmic accountability and fairness. For example, on February 22, the UK Government published its response to the December 2020 Report by the House of Lords Select Committee on Artificial Intelligence, “AI in the UK: No Room for Complacency,” discussed in more detail in our Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems. The House of Lords’ report recommended action by the Government and called for it to “better coordinate its [AI] policy and the use of data and technology” on a national and local level, and “lead the way on making ethical AI a reality.” In its response, the UK Government acknowledged that it is crucial to develop the public’s understanding and trust in AI, stating that the National Data Strategy is actively ensuring members of the public become “responsible data citizens”. Moreover, the Centre for Data Ethics and Innovation’s (“CDEI”) future role will include AI monitoring and testing potential interventions in the tech landscape.

   [2]   For more detail, see our Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems.

   [3]   The White House, Press Release (Archived), The White House Launches the National Artificial Intelligence Initiative Office (Jan. 12, 2021), available at https://trumpwhitehouse.archives.gov/briefings-statements/white-house-launches-national-artificial-intelligence-initiative-office/.

   [4]   The White House, Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking (Jan. 27, 2021), available at https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/27/memorandum-on-restoring-trust-in-government-through-scientific-integrity-and-evidence-based-policymaking/.

   [5]   Government Executive, New Task Force Will Conduct Sweeping Review of Scientific Integrity Policies (March 30, 2021), available at https://www.govexec.com/management/2021/03/new-task-force-will-conduct-sweeping-review-scientific-integrity-policies/173020/.

   [6]   Letter from Deputy Director Jane Lubchenco and Deputy Director Alondra Nelson, OSTP to all federal agencies (March 29, 2021), available at https://int.nyt.com/data/documenttools/si-task-force-nomination-cover-letter-and-call-for-nominations-ostp/ecb33203eb5b175b/full.pdf.

   [7]   The White House, Executive Order on the President’s Council of Advisors on Science and Technology (Jan. 27, 2021), available at https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/27/executive-order-on-presidents-council-of-advisors-on-science-and-technology/.

   [8]   House Armed Services Committee, Subcommittee on Cyber, Innovative Technologies, and Information Systems, available at https://armedservices.house.gov/cyber-innovative-technologies-and-information-systems.

   [9]   House Armed Services Committee, Subcommittee on Cyber, Innovative Technologies, and Information Systems and the House Committee on Oversight & Reform’s Subcommittee on National Security Joint Hearing: “Final Recommendations of the National Security Commission on Artificial Intelligence” (Mar. 12, 2021), available at https://armedservices.house.gov/hearings?ID=32A667CD-578C-4F65-9F4F-1E26EE8F389A.

  [10]   H.R. 5515, 115th Congress (2017-18).

  [11]   The National Security Commission on Artificial Intelligence, Previous Reports, available at https://www.nscai.gov/previous-reports/.

  [12]   NSCAI, The Final Report (March 1, 2021), available at https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.

  [13]   Some of these concerns echo prior actions by the USPTO. For example, the USPTO issued the 2019 Revised Patent-Eligibility Guidance, which reportedly resulted in a 44% decrease in uncertainty about subject matter eligibility in patent examination. However, the guidance has not been broadly applied by courts and has led to mixed results. Additionally, the USPTO in October 2020 issued a report on Public Views on Artificial Intelligence and Intellectual Property Policy, observing that commentators “were nearly equally divided between the view that new intellectual property rights were necessary to address AI inventions and the belief that the current U.S. IP framework was adequate to address AI inventions.” As discussed below, however, the USPTO continues to hold the view that an inventor to a patent must be a natural person.

  [14]   Securing the Information and Communications Technology and Services Supply Chain, 86 FR 4909 (Jan. 19, 2021), available at https://www.federalregister.gov/documents/2021/01/19/2021-01234/securing-the-information-and-communications-technology-and-services-supply-chain.

  [15]   Securing the Information and Communications Technology and Services Supply Chain, U.S. Department of Commerce, 86 Fed. Reg. 4923 (Jan. 19, 2021) (hereinafter “Interim Final Rule”).

  [16]   Interim Final Rule, § 7.1.

  [17]   Further, on February 3, Canada’s Privacy Commissioners stated that Clearview AI’s app—which has been used widely by law enforcement agencies across Canada—was “illegal” and akin to putting all of society “continually in a police lineup.”

  [18]   FTC, Business Blog, Elisa Jillson, Aiming for truth, fairness, and equity in your company’s use of AI (April 19, 2021), available at https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

  [19]   FTC, Protecting Consumer Privacy in a Time of Crisis, Remarks of Acting Chairwoman Rebecca Kelly Slaughter, Future of Privacy Forum (Feb. 10, 2021), available at https://www.ftc.gov/system/files/documents/public_statements/1587283/fpf_opening_remarks_210_.pdf.

  [20]   FTC, Using Artificial Intelligence and Algorithms (April 8, 2020), available at https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms.

  [21]   FTC, Prepared Opening Statement of Commissioner Rohit Chopra, U.S. Senate Committee on Commerce, Science, and Transportation Hearing on “Strengthening the Federal Trade Commission’s Authority to Protect Consumers,” (April 20, 2021), available at https://www.ftc.gov/system/files/documents/public_statements/1589172/final_chopra_opening_statement_for_senate_commerce_committee_20210420.pdf.

  [22]   While a recent Supreme Court ruling curtailed the FTC’s ability to seek equitable monetary penalties such as restitution or disgorgement (AMG Capital Management, LLC, et al. v. Federal Trade Commission, No. 19-508 (U.S. April 22, 2021)), Congress is considering legislation to remedy the decision. The House Energy and Commerce Committee has scheduled a hearing on whether the FTC needs new authority to seek consumer redress. See further Christopher Cole, Supreme Court Rolls Back FTC Restitution Power, Law360 (April 22, 2021), available at https://www.law360.com/articles/1377854.

  [23]   S. ___, 117th Congress (2021), available at https://www.wyden.senate.gov/imo/media/doc/The%20Fourth%20Amendment%20Is%20Not%20For%20Sale%20Act%20of%202021%20Bill%20Text.pdf.

  [24]   Statement of Sen. Ron Wyden (D-OR), The Fourth Amendment Is Not For Sale Act (April 21, 2021), available at https://www.wyden.senate.gov/imo/media/doc/The%20Fourth%20Amendment%20Is%20Not%20For%20Sale%20Act%20of%202021%20One%20Pager.pdf.

  [25]   Id.

  [26]   For more details, see our Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems.

  [27]   S.B. 5116, Reg. Session (2021-22).

  [28]   Monica Nickelsburg, Washington state lawmakers seek to ban government from using discriminatory AI tech, GeekWire (Feb. 13, 2021), available at https://www.geekwire.com/2021/washington-state-lawmakers-seek-ban-government-using-ai-tech-discriminates/.

  [29]   FTC, In the Matter of Everalbum, Inc. and Paravision, Commission File No. 1923172  (Jan. 11, 2021), available at https://www.ftc.gov/enforcement/cases-proceedings/1923172/everalbum-inc-matter.

  [30]   FTC, Statement of Commissioner Rohit Chopra, In the Matter of Everalbum and Paravision, Commission File No. 1923172 (Jan. 8, 2021), available at https://www.ftc.gov/system/files/documents/public_statements/1585858/updated_final_chopra_statement_on_everalbum_for_circulation.pdf.

  [31]   H.B. 2031, Reg. Session (2020-2021).

  [32]   For more details, see our Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems.

  [33]   Order, Steven Vance et al. v. Microsoft Corp., No. 2:20-cv-01082 (W.D. Wash. March 15, 2021), 2021 WL 963485.

  [34]   Order, Stein et al. v. Clarifai Inc., No. 1:20-cv-01937 (N.D. Ill. March 16, 2021), 2021 WL 1020997.

  [35]   H.B. 559, 102nd Gen. Assembly, available at https://www.ilga.gov/legislation/BillStatus.asp?DocNum=559&GAID=16&DocTypeID=HB&SessionID=110&GA=102.

  [36]   Lauraann Wood, Illinois Bill Seeks To File Down Biometric Law’s Sharp Teeth, Law360 (March 22, 2021), available at https://www.law360.com/cybersecurity-privacy/articles/1367329/illinois-bill-seeks-to-file-down-biometric-law-s-sharp-teeth?nl_pk=4e5e4fee-ca5f-4d2e-90db-5680f7e17547&utm_source=newsletter&utm_medium=email&utm_campaign=cybersecurity-privacy.

  [37]   U.S. Food & Drug Administration, News Release, FDA Releases Artificial Intelligence/Machine Learning Action Plan (Jan. 12, 2021), available at https://www.fda.gov/news-events/press-announcements/fda-releases-artificial-intelligencemachine-learning-action-plan.

  [38]   U.S. Food & Drug Administration, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD) (April 2019), available at https://www.fda.gov/media/122535/download.

  [39]   2Q19 Artificial Intelligence and Autonomous Systems Legal Update, III.A. FDA Releases White Paper Outlining a Potential Regulatory Framework for Software as a Medical Device (SaMD) That Leverages AI.

  [40]   Supra, n.16 at 5.

  [41]   Stephen Thaler v. Andrew Hirshfeld et al., No. 1:20-cv-00903 (E.D. Va. Feb. 24, 2021).

  [42]   Cara Salvatore, Giving AI Inventorship Would Be A Bridge Too Far, Judge Says, Law360 (April 6, 2021), available at https://www.law360.com/articles/1354993.

  [43]   For more detail, see our Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems.

  [44]   Google LLC v. Oracle Am., Inc., No. 18-956, 2021 WL 1240906 (U.S. Apr. 5, 2021).

  [45]   Id. at *3.

  [46]   Id. at *20.

  [47]   See id.

  [48]   Bill Donahue, Supreme Court Rules For Google In Oracle Copyright Fight, Law360 (April 5, 2021), available at https://www.law360.com/ip/articles/1336521.

  [49]   86 Fed. Reg. 16837.

  [50]   EC, Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence and amending certain Union Legislative Acts (Artificial Intelligence Act), COM(2021) 206 (April 21, 2021), available at https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence.

  [51]   Ursula von der Leyen, A Union that strives for more: My agenda for Europe, available at https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf.

  [52]   Supra, note 50, p. 1.

  [53]   “Providers” are defined as a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge (see Art. 3 no. 2 of the Artificial Intelligence Act).

  [54]   “Users” are defined as any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity (see Art. 3 no. 4 of the Artificial Intelligence Act).

  [55]   Certain obligations also apply to “importers” and “distributors”.

  [56]   See Art. 2 para. 1 point (c) of the Artificial Intelligence Act.

  [57]   See Recital (11) of the Artificial Intelligence Act.

  [58]   See Art. 2 para. 3 of the Artificial Intelligence Act.

  [59]   “AI system” is defined as software that is developed with one or more of the techniques and approaches listed in an Annex (such as machine learning approaches incl. deep learning, logic- and knowledge-based approaches and statistical approaches) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with (see Art. 3 no. 1 of the Artificial Intelligence Act).

  [60]   See Art. 5 para. 1 point (a) of the Artificial Intelligence Act.

  [61]   See Art. 5 para. 1 point (c) of the Artificial Intelligence Act.

  [62]   See Art. 5 para. 1 point (d) of the Artificial Intelligence Act.

  [63]   See Art. 6 para. 1 of the Artificial Intelligence Act.

  [64]   See Art. 6 para. 2 in connection with Annex III of the Artificial Intelligence Act.

  [65]   See Art. 8 et seqq. of the Artificial Intelligence Act.

  [66]   See Art. 10 and 71 of the Artificial Intelligence Act.

  [67]   See Art. 16 points (a) and (e) of the Artificial Intelligence Act.

  [68]   See Art. 43 of the Artificial Intelligence Act.

  [69]   See Art. 16 point (f), 51 and 60 of the Artificial Intelligence Act.

  [70]   See Art. 61 et seq. of the Artificial Intelligence Act.

  [71]   See Recital (81) and Art. 69 of the Artificial Intelligence Act.

  [72]   See Art. 52 para. 1 of the Artificial Intelligence Act.

  [73]   See Art. 52 para. 3 of the Artificial Intelligence Act.

  [74]   See Recital (82) of the Artificial Intelligence Act.

  [75]   See Art. 71 of the Artificial Intelligence Act.

  [76]   See id.

  [77]   See id.

  [78]   See e.g., Portland Ordinance No. 190114, “Prohibit the use of Face Recognition Technologies by private entities in places of public accommodation in the City”, effective Jan. 1, 2021 (banning private entities from using Face Recognition Technologies in Places of Public Accommodation within the boundaries of the City of Portland); San Francisco Ordinance No. 103-19, the “Stop Secret Surveillance” ordinance, effective 31 May 2019 (banning the use of facial recognition software by public departments within San Francisco, California); Somerville Ordinance No. 2019-16, the “Face Surveillance Full Ban Ordinance,” effective 27 June 2019 (banning use of facial recognition by the City of Somerville, Massachusetts or any of its officials); Oakland Ordinance No. 18-1891, “Ordinance Amending Oakland Municipal Code Chapter 9.65 to Prohibit the City of Oakland from Acquiring and/or Using Real-Time Face Recognition Technology”, preliminary approval 16 July 2019, final approval 17 September 2019 (bans use by city of Oakland, California and public officials of real-time facial recognition). For more information, see our U.S. Cybersecurity and Data Privacy Outlook and Review – 2021 and Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems.

  [79]   See Art. 6 para. 1 in connection with Annex IV of the Artificial Intelligence Act; CPRA Section 14, adding Cal. Civ. Code § 1798.140(z). For more detail see our alert regarding “The Potential Impact of the Upcoming Voter Initiative, the California Privacy Rights Act”.

  [80]   See Art. 64 of the Artificial Intelligence Act; An Act Relating to the use of facial recognition services, S.B. 6280, 66th Leg., Reg. Sess. (Wash. 2020), available at http://lawfilesext.leg.wa.gov/biennium/2019-20/Pdf/Bills/Session%20Laws/Senate/6280-S.SL.pdf?q=20201214093740.

  [81]   See Art. 64 of the Artificial Intelligence Act.

  [82]   See State v. Pickett, No. A-4207-19T4, 2021 WL 357765, at *2 (N.J. Super. Ct. App. Div. Feb. 3, 2021); see e.g., Houston Fed’n of Tchrs., Loc. 2415 v. Houston Indep. Sch. Dist., 251 F. Supp. 3d 1168, 1179 (S.D. Tex. 2017) (stating that “[w]hen a public agency adopts a policy of making high stakes employment decisions based on secret algorithms incompatible with minimum due process, the proper remedy is to overturn the policy, while leaving the trade secrets intact”); An Act Relating to the use of facial recognition services, S.B. 6280, 66th Leg., Reg. Sess. (Wash. 2020), available at http://lawfilesext.leg.wa.gov/biennium/2019-20/Pdf/Bills/Session%20Laws/Senate/6280-S.SL.pdf?q=20201214093740 (stating that “[m]aking an application programming interface or other technical capability [to enable review] does not require providers to do so in a manner that would increase the risk of cyberattacks or to disclose proprietary data.”).

  [83]   See Art. 70 of the Artificial Intelligence Act.

  [84]   Supra at I.C.

  [85]   European Parliament, Resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012 (INL)) (Oct. 20, 2020), available at https://www.europarl.europa.eu/doceo/document/TA-9-2020-0275_EN.pdf.  For more detail, see our “3Q20 Artificial Intelligence and Automated Systems Legal Update”.

  [86]   The New York Times, Europe Proposes Strict Rules for Artificial Intelligence (April 16, 2021), available at https://www.nytimes.com/2021/04/16/business/artificial-intelligence-regulation.html.

  [87]   Council of Europe – Ad Hoc Committee on Artificial Intelligence, Feasibility Study (Dec. 17, 2020), available at https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da.

  [88]   Press release, Launch of the CAHAI Multi-stakeholder Consultation (March 30, 2021), available at https://www.coe.int/en/web/artificial-intelligence/-/lauch-of-the-cahai-multi-stakeholder-consultation.

  [89]   The CAHAI consultation is accessible here: https://www.coe.int/en/web/artificial-intelligence/cahai-multi-stakeholder-consultation.

  [90]   Press release, Confidentiality of electronic communications: Council agrees its position on ePrivacy rules (Feb. 10, 2021), available at https://www.consilium.europa.eu/en/press/press-releases/2021/02/10/confidentiality-of-electronic-communications-council-agrees-its-position-on-eprivacy-rules/.

  [91]   EU Council, Proposal for a Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications) (Feb. 10, 2021), available at https://data.consilium.europa.eu/doc/document/ST-6087-2021-INIT/en/pdf.

  [92]   Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32002L0058&from=EN.

  [93]   See EU Commission, Proposal for a Regulation on Privacy and Electronic Communications (Jan. 10, 2017), available at https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-privacy-and-electronic-communications.

  [94]   Press release, Cybersecurity Challenges in the Uptake of Artificial Intelligence in Autonomous Driving (Feb. 11, 2021), available at https://www.enisa.europa.eu/news/enisa-news/cybersecurity-challenges-in-the-uptake-of-artificial-intelligence-in-autonomous-driving. The Cybersecurity Report is available for download at https://www.enisa.europa.eu/publications/enisa-jrc-cybersecurity-challenges-in-the-uptake-of-artificial-intelligence-in-autonomous-driving/.

  [95]   Draft law of the Bundesregierung, Entwurf eines Gesetzes zur Änderung des Straßenverkehrsgesetzes und des Pflichtversicherungsgesetzes – Gesetz zum autonomen Fahren, Drucksache 19/27439 (March 15, 2021), available at https://dip21.bundestag.de/dip21/btd/19/274/1927439.pdf.

  [96]   For example, it has been reported that the Federal Ministry of Justice has raised concerns in relation to the question whether data such as driving routes can be transmitted to the Federal Criminal Police Office (the German equivalent to the FBI) upon request.

  [97]   Bundesregierung, Antwort der Bundesregierung auf die Kleine Anfrage der Abgeordneten Oliver Luksic, Frank Sitta, Bernd Reuther, weiterer Abgeordneter und der Fraktion der FDP, Drucksache 19/24851 (Dec. 28, 2020), available at https://dip21.bundestag.de/dip21/btd/19/256/1925626.pdf.


The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Michael Walther, Kai Gesing, Christopher Timura, Frances Waldmann, Selina Grün, Prachi Mistry, and Derik Rao.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Artificial Intelligence and Automated Systems Group, or the following authors:

H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com)
Frances A. Waldmann – Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com)

Please also feel free to contact any of the following practice group members:

Artificial Intelligence and Automated Systems Group:
H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com)
J. Alan Bannister – New York (+1 212-351-2310, abannister@gibsondunn.com)
Patrick Doris – London (+44 (0)20 7071 4276, pdoris@gibsondunn.com)
Kai Gesing – Munich (+49 89 189 33 180, kgesing@gibsondunn.com)
Ari Lanin – Los Angeles (+1 310-552-8581, alanin@gibsondunn.com)
Robson Lee – Singapore (+65 6507 3684, rlee@gibsondunn.com)
Carrie M. LeRoy – Palo Alto (+1 650-849-5337, cleroy@gibsondunn.com)
Alexander H. Southwell – New York (+1 212-351-3981, asouthwell@gibsondunn.com)
Christopher T. Timura – Washington, D.C. (+1 202-887-3690, ctimura@gibsondunn.com)
Eric D. Vandevelde – Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com)
Michael Walther – Munich (+49 89 189 33 180, mwalther@gibsondunn.com)

© 2021 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

Decided April 22, 2021

AMG Capital Management v. FTC, No. 19-508

Today, the Supreme Court held 9-0 that Section 13(b) of the Federal Trade Commission Act, which authorizes federal courts to issue “permanent injunction[s]” in FTC enforcement actions, does not include the power to award equitable monetary relief such as restitution.

Background:
Scott Tucker owned several businesses that provided high-interest, short-term loans over the Internet. The Federal Trade Commission sued Tucker and his businesses under Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices.” The FTC sought a “permanent injunction” under Section 13(b) of the Act, as well as restitution and disgorgement of Tucker’s monetary gains. The district court granted the FTC’s requested relief, and the Ninth Circuit affirmed, relying on its precedent holding that Section 13(b) “empowers district courts to grant any ancillary relief necessary to accomplish complete justice, including restitution.”

Issue:
Whether the authorization of a “permanent injunction” in Section 13(b) of the Act also authorizes federal courts to award equitable monetary relief such as restitution and disgorgement.

Court’s Holding:
Section 13(b) does not authorize federal courts to award equitable monetary relief, because a “permanent injunction” is distinct from equitable monetary relief and other sections of the Act expressly authorize the FTC to seek monetary relief if it follows certain procedures not required under Section 13(b).

“The question presented is whether th[e] statutory language authorizes the Commission to seek, and a court to award, equitable monetary relief such as restitution or disgorgement. We conclude that it does not.”

Justice Breyer, writing for the Court

What It Means:

  • The Court’s decision significantly cabins the FTC’s historically broad authority under Section 13(b) in consumer protection and antitrust matters. The FTC has used Section 13(b) “to win equitable monetary relief directly in court with great frequency.” Until the Seventh Circuit rejected the FTC’s authority to seek such relief in a 2019 decision, all eight federal courts of appeals to address the issue had upheld the FTC’s authority to seek such relief under the Act.
  • The Court’s decision does not preclude the FTC from seeking monetary relief in all cases. Under Sections 5 and 19 of the Act, the FTC may seek monetary relief on behalf of consumers when the FTC has engaged in administrative proceedings and issued cease and desist orders.
  • The Court explained that the FTC is “free to ask Congress to grant it further remedial authority” if Sections 5 and 19 are “too cumbersome or otherwise inadequate.” In fact, the FTC has recently asked Congress for broader authority, and it remains to be seen whether Congress will grant the FTC’s request in light of the Court’s decision.

The Court’s opinion is available here.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding developments at the Supreme Court. Please feel free to contact the following practice leaders:

Appellate and Constitutional Law Practice

Allyson N. Ho
+1 214.698.3233
aho@gibsondunn.com
Mark A. Perry
+1 202.887.3667
mperry@gibsondunn.com
Lucas C. Townsend
+1 202.887.3731
ltownsend@gibsondunn.com
Bradley J. Hamburger
+1 213.229.7658
bhamburger@gibsondunn.com

Related Practice: Antitrust and Competition

Stephen Weissman
+1 202.955.8678
sweissman@gibsondunn.com
Rachel S. Brass
+1 415.393.8293
rbrass@gibsondunn.com
Scott D. Hammond
+1 202.887.3684
shammond@gibsondunn.com
Daniel G. Swanson
+1 213.229.7430
dswanson@gibsondunn.com

Related Practice: Privacy, Cybersecurity and Data Innovation

Alexander H. Southwell
+1 212.351.3981
asouthwell@gibsondunn.com
S. Ashlie Beringer
+1 650.849.5327
aberinger@gibsondunn.com
Ashley Rogers
+1 214.698.3316
arogers@gibsondunn.com
Ryan T. Bergsieker
+1 303.298.5774
rbergsieker@gibsondunn.com

Washington, D.C. partner David Fotouhi and associate Trenton Van Oss are the authors of “What DC Circ.’s Finality Test Means For Biden Enviro Policies,” [PDF] published by Law360 on April 20, 2021.

On April 7, 2021, the new regulation of the New York Department of Financial Services (NYDFS) governing confidential supervisory information (CSI) became effective in final form. NYDFS has thus joined the Board of Governors of the Federal Reserve System (Federal Reserve) in making recent amendments to its approach to CSI.[1] The final regulation (Final Rule) makes certain improvements over the rule proposal most recently put out by NYDFS in September 2020. New York now has, for the first time, a CSI regulation in addition to the pre-existing statutory provision, Section 36.10 of the Banking Law.

A. Scope of CSI

The Final Rule defines CSI as “any information that is covered by Section 36.10 of the [New York] Banking Law.”[2] Section 36.10, in turn, refers to “reports of examinations and investigations [of any NYDFS-supervised institution and affiliates], correspondence and memoranda concerning or arising out of such examination and investigations, including any duly authenticated copy or copies thereof,” and includes any confidential materials shared by NYDFS with any governmental agency or unit.[3]

B. Disclosure to Affiliates

Under Section 36.10 and the Final Rule, the default standard for disclosure of any CSI is the prior written approval of NYDFS.[4] The Final Rule contains an exception to the prior written approval requirement for disclosure by a NYDFS-regulated entity of CSI to the regulated entity’s affiliates and their directors, officers and employees when “necessary and appropriate for business purposes” and “on the condition that such persons maintain the confidentiality of such information.”[5]

C. Disclosure to Legal Counsel and Independent Auditors

The Final Rule eases current restrictions on NYDFS-regulated entities’ disclosure of CSI to certain advisors. It provides a “limited exception” for disclosure by such entities to “legal counsel or an independent auditor that has been retained or engaged by such regulated entity pursuant to an engagement letter or written agreement.”[6]

In an improvement from the September 2020 proposal, there is no requirement that the applicable engagement letter or written agreement contain burdensome acknowledgements by the legal counsel or independent auditor, including that the information will be used solely to provide “legal representation or auditing services,” that the information will be disclosed to legal counsel’s or the auditor’s employees, directors, or officers only “to the extent necessary and appropriate for business purposes,” and that legal counsel or the auditor agree “to return or certify the destruction of the confidential supervisory information or, in the case of electronic files, render the files effectively inaccessible through access control measures or other means, at the conclusion of the engagement.”[7]

Rather, all that the Final Rule requires is that legal counsel or independent auditor acknowledge, “in writing,” that any disclosed information is CSI under Section 36.10 of the Banking Law, and agree, “in writing,” to abide by the prohibition on the dissemination of CSI contained in the Final Rule.[8]

In another departure from the September 2020 proposal, the Final Rule also contains an exception for “Client Acceptance of New or Continuing Engagement of Independent Auditors.” Under this exception, a NYDFS-regulated entity may disclose CSI to independent auditors “as part of the independent auditor’s acceptance of a new client engagement or the continuation of an existing annual audit engagement.” The condition to this exception is that the regulated entity receive the written acknowledgement and agreement from the independent auditor described above.[9]

Unlike the Federal Reserve’s regulation, the Final Rule does not contain an exception for third-party vendors to legal counsel and external auditors, which NYDFS had previously characterized as “broad” and not contained in the OCC’s regulation.[10]

D. Disclosure to Other Regulators

With respect to the disclosure by NYDFS-regulated entities of CSI to other state and federal regulators “having direct supervisory authority over” such regulated entities, the Final Rule requires the prior written approval of both the Senior Deputy Superintendent of NYDFS for Banking and the NYDFS General Counsel, or their respective delegates.[11]

E. Duty to Notify NYDFS of Requests for CSI

The Final Rule requires each NYDFS-regulated entity, affiliate of a NYDFS-regulated entity, legal counsel, and independent auditor that is served with a request, subpoena, motion to compel or other judicial or administrative process to provide CSI to notify the NYDFS Office of the General Counsel of the request immediately so that NYDFS will be able to intervene in the action as appropriate.[12] In addition, the Final Rule mandates that a CSI holder both inform the requester of the substance of the New York regulation and the holder’s obligation to maintain the confidentiality of the CSI, and, “at the appropriate time,” inform the relevant tribunal of the substance of Section 36.10 of the New York Banking Law and the New York regulation.[13]

Conclusion

The Final Rule is a welcome development. It largely harmonizes the New York CSI rules with federal analogues and should reduce the inefficiencies created by Section 36.10 of the New York Banking Law, particularly for legal counsel and independent auditors. Outside of the Final Rule’s exceptions, however, the overriding traditional principle of CSI law and regulation – that the regulators consider CSI their property, to be disclosed only upon their specific consent – remains a key feature of the NYDFS regime, and one that can result in severe sanctions if it is ignored.

______________________

   [1]   See https://www.gibsondunn.com/wp-content/uploads/2020/09/developments-in-us-banking-regulators-treatment-of-confidential-supervisory-information.pdf.

   [2]   3 N.Y.C.R.R. § 7.1(a).

   [3]   New York Banking Law, Section 36.10.

   [4]   Id.; 3 N.Y.C.R.R. § 7.2(a).

   [5]   3 N.Y.C.R.R. § 7.2(d).

   [6]   Id. § 7.2(b).

   [7]   3 N.Y.C.R.R. § 7.2(b) (proposed 2020).

   [8]   3 N.Y.C.R.R. § 7.2(b).

   [9]   Id. § 7.2(c).

  [10]   NYS Register, page 12 (Sept. 9, 2020), available at https://www.dos.ny.gov/info/register/2020/090920.pdf.

  [11]   3 N.Y.C.R.R. § 7.2(g).

  [12]   Id. § 7.2(e).

  [13]   Id.


The following Gibson Dunn lawyers assisted in preparing this client update: Arthur Long and Matthew Biben.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following members of the firm’s Financial Institutions practice group:

Matthew L. Biben – New York (+1 212-351-6300, mbiben@gibsondunn.com)
Michael D. Bopp – Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com)
Stephanie Brooker – Washington, D.C. (+1 202-887-3502, sbrooker@gibsondunn.com)
M. Kendall Day – Washington, D.C. (+1 202-955-8220, kday@gibsondunn.com)
Mylan L. Denerstein – New York (+1 212-351-3850, mdenerstein@gibsondunn.com)
Michelle M. Kirschner – London (+44 (0) 20 7071 4212, mkirschner@gibsondunn.com)
Arthur S. Long – New York (+1 212-351-2426, along@gibsondunn.com)
Matthew Nunan – London (+44 (0) 20 7071 4201, mnunan@gibsondunn.com)
Jeffrey L. Steiner – Washington, D.C. (+1 202-887-3632, jsteiner@gibsondunn.com)

© 2021 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.