Artificial Intelligence and Automated Systems Legal Update (2Q22)

August 10, 2022

The second quarter of 2022 saw U.S. federal lawmakers and agencies focus on draft legislation and guidance aimed at closing the gap with the EU in addressing risks in the development and use of AI systems, in particular risks related to algorithmic bias and discrimination.  The American Data Privacy and Protection Act (“ADPPA”), the bipartisan federal privacy bill introduced in the U.S. House in June 2022, marks a major step towards a comprehensive national privacy framework, and companies should take particular note of its inclusion of mandated algorithmic impact assessments.  Meanwhile, the EU’s proposed regulatory scheme for AI continues to wind its way through the legislative process.  Though it is unlikely to become binding law until late 2023 at the earliest, the EU policy landscape remains dynamic.

Our 2Q22 Artificial Intelligence and Automated Systems Legal Update focuses on these key efforts, and also examines other policy developments within the U.S. and EU that may be of interest to domestic and international companies alike.

I.  U.S. ENFORCEMENT, REGULATORY & POLICY DEVELOPMENTS

A.   U.S. National AI Strategy

1.   U.S. Department of Commerce Appoints 27 Members to National AI Advisory Committee

On April 14, 2022, the U.S. Department of Commerce announced the appointment of 27 experts to the National Artificial Intelligence Advisory Committee (“NAIAC”), which will advise the President and the National AI Initiative Office on a range of issues related to AI.[1]  The appointments are the first for the recently established committee, created in response to the National AI Initiative Act of 2020.  The initiative directs the NAIAC to provide recommendations on topics like the current state of U.S. AI competitiveness, the state of science around AI, and AI workforce issues.  The committee also is responsible for advice regarding the management and coordination of the initiative itself, including its balance of activities and funding.

2.   NIST AI Risk Management Framework

As noted in our 1Q22 Legal Update,[2] in March 2022, the National Institute of Standards and Technology (“NIST”) released for public comment an initial draft of its AI Risk Management Framework (“AI RMF”), which provides guidance for managing risks in the design, development, use, and evaluation of AI systems.  NIST separately released a document titled “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” which aims to provide guidance for mitigating harmful bias in AI systems.

Subsequently, on March 29-31, 2022, NIST held its second broad stakeholder workshop on its draft AI RMF, titled “Building the NIST AI Risk Management Framework: Workshop #2.”[3]  The workshop focused on the draft AI RMF, international trends and standards, and approaches to mitigating harmful AI bias.  NIST is seeking stakeholder feedback on the draft framework as part of a process over the next year to release a full version 1.0 of the AI RMF, which NIST intends to be a critical tool for organizations to identify and manage risks related to AI, including in areas like potential bias.  We stand ready to assist clients who wish to participate in this process.

B.   Algorithmic Accountability and Consumer Protection

1.   FTC

The Federal Trade Commission (“FTC”) continues to position itself as a key regulator of AI technology.  In December 2020, as part of the 2021 Appropriations Act, Congress tasked the FTC with conducting a study and reporting on whether and how AI could be used to identify, remove, or take other appropriate action to address a variety of online harms (scams, deepfakes, child sexual abuse, terrorism, hate crimes and harassment, election-related disinformation, and the traffic in illegal drugs and counterfeit goods).  Congress also required the FTC to recommend reasonable policies and procedures for using AI to combat these online harms, and any legislation to “advance the adoption and use of [AI]” for these purposes.

In its June 16, 2022 report,[4] the FTC advised that, while AI can be used as a tool to detect and remove harmful material online, there are significant risks associated with its use.  In particular, the FTC cautioned that because AI systems rely on algorithms and inputs created by humans, and often have built-in incentives geared more towards consumer engagement than content moderation, even supposedly neutral systems can disproportionately harm minorities while threatening privacy and free speech.  Additionally, the FTC stated that while many companies currently use AI tools to moderate content, they “share little information about how these systems work, or how useful they are in actually combating harmful content.”[5]  The FTC therefore advised that more transparency is needed before the government can understand how AI tools work in the real world.  Although the Commission acknowledged that major tech platforms and others are already using AI tools to address online harms, the report’s final recommendation is that Congress should avoid laws that would mandate or overly rely on the use of AI to combat online harms, and should instead conduct additional investigation into other tools that might also be helpful in moderating online content.  In his dissenting statement, Commissioner Phillips noted that the report “has no information gleaned directly from individuals and companies actually using AI to try to identify and remove harmful online content, precisely what Congress asked us to evaluate.”[6]

Further, on June 22, 2022, Senators Ed Markey (D-MA), Elizabeth Warren (D-MA), Brian Schatz (D-HI), Cory Booker (D-NJ), Ron Wyden (D-OR), Tina Smith (D-MN), and Bernie Sanders (I-VT) sent a letter to FTC Chair Lina Khan urging the FTC to “build on its guidance regarding biased algorithms and use its full enforcement and rulemaking authority to stop damaging practices involving online data and artificial intelligence.”[7]  The letter cites a National Institute of Standards and Technology study finding that Black and Asian individuals “were up to 100 times more likely to be misidentified” by biometric surveillance tools than white individuals, and asks the FTC to use its authority to combat “invasive and discriminatory biometric surveillance tools,” including facial recognition tools.
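
For context, a figure like “up to 100 times more likely to be misidentified” is typically derived by comparing per-demographic false match rates of a face matcher.  The sketch below is a simplified illustration only: the trial counts are invented, and NIST’s actual vendor-test methodology is far more elaborate.

```python
def false_match_rate(trials: list[tuple[bool, bool]]) -> float:
    """Fraction of impostor pairs (different people) that the system
    wrongly declares a match.  Each trial is
    (system_said_match, truly_same_person)."""
    impostors = [said for said, same in trials if not same]
    return sum(impostors) / len(impostors)

# Invented per-group trial results, purely for illustration:
fmr_a = false_match_rate([(True, False)] * 1 + [(False, False)] * 999)
fmr_b = false_match_rate([(True, False)] * 100 + [(False, False)] * 900)

# "Up to 100 times more likely to be misidentified" is the ratio
# between the worst and the best per-group error rates:
print(fmr_b / fmr_a)  # 100.0
```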

a)   Digital Platform Commission Act of 2022 (S. 4201)

On May 12, 2022, Senator Michael Bennet (D-CO) introduced the Digital Platform Commission Act of 2022 (S. 4201), which would empower a new federal agency, the Federal Digital Platform Commission, to promulgate rules, impose civil penalties, hold hearings, conduct investigations, and support research with respect to online platforms that facilitate interactions between consumers, as well as between consumers and entities offering goods and services.[8]  The Commission would have a broad mandate to promote the public interest, with specific directives to protect consumers, promote competition, and assure the fairness and safety of algorithms on digital platforms, among other areas.  Regulations contemplated by the bill include requirements that algorithms used by online platforms “[be] fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias.”  The bill has been referred to the Committee on Commerce, Science, and Transportation.

b)   Health Equity and Accountability Act of 2022 (H.R. 7585)

Introduced in the House on April 26, 2022, the Health Equity and Accountability Act of 2022 (H.R. 7585) aims to address algorithmic bias in the context of healthcare.[9]  The bill would require the Secretary of Health and Human Services to establish a “Task Force on Preventing AI and Algorithmic Bias in Healthcare” to develop guidance “on how to ensure that the development and [use] of artificial intelligence and algorithmic technologies” in delivering care “does not exacerbate health disparities” and to help ensure broader access to care.  Additionally, the Task Force would be charged with identifying the risks that a healthcare system’s use of such technologies poses to individuals’ “civil rights, civil liberties, and discriminatory bias in health care access, quality, and outcomes.”  The bill has been referred to the Committee on Energy and Commerce.

c)   California Department of Insurance Issues Bulletin Addressing Racial Bias and Unfair Discrimination

On June 30, 2022, the California Department of Insurance issued a bulletin addressing racial bias and unfair discrimination in the context of consumer data.[10]  The bulletin notes that insurance companies and other licensees “must avoid both conscious and unconscious bias or discrimination that can and often does result from the use of artificial intelligence, as well as other forms of ‘Big Data’ … when marketing, rating, underwriting, processing claims, or investigating suspected fraud.”[11]  To that end, the bulletin provides that “before utilizing any data collection method, fraud algorithm, rating/underwriting or marketing tool, insurers and licensees must conduct their own due diligence to ensure full compliance with all applicable laws.”  In addition, insurers and licensees “must provide transparency to Californians by informing consumers of the specific reasons for any adverse underwriting decisions.”[12]

d)   EEOC and DOJ Guidance on the Americans with Disabilities Act and the Use of AI to Assess Job Applicants and Employees

On May 12, 2022, more than six months after the Equal Employment Opportunity Commission (“EEOC”) announced its Initiative on Artificial Intelligence and Algorithmic Fairness,[13] the agency issued its first guidance regarding employers’ use of AI.[14]  The EEOC’s non-binding, technical guidance provides suggested guardrails for employers’ use of AI technologies in their hiring and workforce management systems.

The EEOC’s guidance outlines best practices and key considerations that, in the EEOC’s view, help ensure that employment tools do not disadvantage applicants or employees with disabilities in violation of the Americans with Disabilities Act (“ADA”).  The guidance identifies three ways in which an employer’s tools could be found to violate the ADA: (1) by relying on the tool, the employer fails to provide a reasonable accommodation; (2) the tool screens out an individual with a disability who is able to perform the essential functions of the job with or without an accommodation; or (3) the tool makes a disability-related inquiry or otherwise constitutes a medical examination.

e)   EEOC Brings Age Discrimination Action Against Tutoring Software Company

On May 5, 2022, the EEOC filed a complaint in the Eastern District of New York alleging that a software company providing online English-language tutoring to adults and children violated the Age Discrimination in Employment Act (“ADEA”) by denying employment as tutors to a class of plaintiffs because of their age.[15]  Specifically, the EEOC alleges that the company’s application software automatically denied older, qualified applicants by soliciting applicant birthdates and automatically rejecting female applicants age 55 or older and male applicants age 60 or older.  The EEOC seeks a range of relief, including back wages, liquidated damages, a permanent injunction enjoining the challenged hiring practice, and the implementation of policies, practices, and programs providing equal employment opportunities for individuals 40 years of age and older.
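
The mechanics alleged are simple to illustrate.  The sketch below is a hypothetical reconstruction, not the company’s actual code: only the age thresholds (55 for female applicants, 60 for male applicants) come from the complaint, and every name and function here is invented for illustration.

```python
from datetime import date

# Hypothetical reconstruction of the screen alleged in EEOC v.
# iTutorGroup -- NOT the company's actual code.  The cutoffs (55 for
# female applicants, 60 for male applicants) come from the complaint.
AGE_CUTOFFS = {"female": 55, "male": 60}

def age_on(birthdate: date, as_of: date) -> int:
    """Age in whole years as of a given date."""
    had_birthday = (as_of.month, as_of.day) >= (birthdate.month, birthdate.day)
    return as_of.year - birthdate.year - (0 if had_birthday else 1)

def auto_reject(birthdate: date, gender: str, as_of: date) -> bool:
    """True if the application is rejected before any human review.

    A hard cutoff keyed to the solicited birthdate and gender operates
    as a facially discriminatory screen, which is what the ADEA claim
    targets, regardless of the applicant's qualifications.
    """
    cutoff = AGE_CUTOFFS.get(gender.lower())
    return cutoff is not None and age_on(birthdate, as_of) >= cutoff

# A 56-year-old woman is rejected regardless of qualifications:
assert auto_reject(date(1966, 1, 1), "female", as_of=date(2022, 5, 5))
```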

C.   Data Privacy

1.   Legislation and Regulation

a)   American Data Privacy and Protection Act (H.R. 8152)

On June 21, 2022, members of Congress introduced a bipartisan federal privacy bill, H.R. 8152, the American Data Privacy and Protection Act (“ADPPA”).[16]  The ADPPA aims to create a national framework that would preempt many, but not all, state privacy laws.  The bill passed the U.S. House Energy and Commerce Committee on July 20, 2022, but is now increasingly unlikely to be passed during this Congressional session.[17]  While the ADPPA shares similarities with current state privacy laws, companies should pay attention to several proposed requirements that are particularly relevant to AI technologies.

i.   Overview of ADPPA

The ADPPA proposes broad limitations on the kind of data processing in which covered entities are allowed to engage,[18] and also requires companies to provide certain rights to consumers, including a right to notice, a right to ownership or control (a right to access data, correct data, or have data deleted), and a right to opt out or object.[19]

The bill defines “covered entities” as entities subject to the FTC Act, common carriers under federal law, or nonprofits, that “alone or jointly with others” determine the purposes and means of collecting, processing, and transferring covered data.[20]  Like the EU’s General Data Protection Regulation (“GDPR”) and state privacy laws such as the California Consumer Privacy Act (“CCPA”) and the Virginia Consumer Data Protection Act (“VCDPA”), the ADPPA covers a wide variety of personal data; notably, “covered data” includes any data “linked” or “linkable” to a device, not just to an individual.  Additionally, the definition of “biometric information” does not include photographs or recordings, but does include fingerprints, voice prints, iris or retina scans, “facial mapping or hand mapping,” and gait.  De-identified data, employee data, and publicly available information are among the enumerated exemptions.[21]

ii.   Algorithmic Assessments

The bill contains new AI assessment obligations that would directly impact companies developing AI technologies.  The ADPPA would require covered entities and service providers that knowingly develop an algorithm to collect, process, or transfer covered data to produce an algorithm design evaluation, which must specifically consider any data used to develop the algorithm (including training data) in order to reduce the risk of potential harms.[22]

Large data holders must conduct an additional annual impact assessment of any algorithm that is used to collect, process, or transfer covered data and may cause potential harm to an individual.  The assessments must describe the algorithm’s design process, purpose, foreseeable uses, data inputs and the outputs the algorithm generates, as well as steps taken to mitigate potential harms.[23]  In particular, harms related to the following areas must be addressed: (1) individuals under the age of 17; (2) advertising for housing, education, employment, healthcare, insurance, or credit opportunities; (3) access to, or restrictions on the use of, a place of public accommodation; or (4) a disparate impact on the basis of protected characteristics.[24]  Entities must use an external, independent researcher or auditor to the extent possible, and both design evaluations and impact assessments must be submitted to the FTC within 30 days of completion.
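
The bill specifies what an assessment must describe but not how “disparate impact” is to be quantified.  Purely as an illustrative assumption (the ADPPA prescribes no metric, and all names and data below are invented), the sketch computes the selection-rate ratio across groups, the “four-fifths rule” heuristic borrowed from employment-discrimination analysis, as one figure such an assessment might report.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of each group receiving the favorable outcome (an ad
    shown, an application approved, etc.)."""
    totals, favorable = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        favorable[group] += selected
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes: list[tuple[str, bool]]) -> float:
    """Lowest group selection rate over the highest.  The traditional
    'four-fifths rule' flags ratios below 0.8; the ADPPA itself does
    not mandate this (or any) particular metric."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Toy data: group B receives the favorable outcome half as often.
data = ([("A", True)] * 80 + [("A", False)] * 20
        + [("B", True)] * 40 + [("B", False)] * 60)
print(disparate_impact_ratio(data))  # 0.4 / 0.8 = 0.5 -> flagged
```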

Mirroring the risk-based approach adopted by the EU’s draft AI Act, the ADPPA contemplates that the FTC will promulgate regulations that would allow entities to exclude from their design evaluations and impact assessments any algorithms that present low or minimal risk for the enumerated harms.[25]

b)   CPRA Draft Regulations

The California Privacy Protection Agency (“CPPA”) released its CPRA draft regulations on May 27, 2022.[26]  The regulations were intended to be finalized by July 1, 2022, but public participation in the rulemaking process is still ongoing, with additional public hearings now scheduled for August 24 and 25, 2022.

In August 2020, the California Attorney General released the final regulations for the CCPA, the comprehensive state privacy law that the CPRA will amend and expand effective January 2023.  The May 2022 draft CPRA regulations redline the August 2020 CCPA regulations and mostly focus on the CPRA’s changes to preexisting CCPA concepts.  Key topics addressed by this initial draft include dark patterns, expanded rules for service providers, third-party contracts, third-party notifications, requests to correct, opt-out preference signals, data minimization, privacy policy rules, revised definitions, and enforcement considerations.

One of the most conspicuous omissions is the lack of parameters for automated decision-making.  The CPRA defines “profiling” as “any form of automated processing of personal information, as further defined by regulations pursuant to paragraph (16) of subdivision (a) of Section 1798.185 [of the CCPA], to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements,” leaving its contours relatively amorphous.[27]  Commenters have pointed out that, in contrast to the scope defined by other comprehensive state privacy laws and the GDPR, the CPRA’s language casts an extremely wide net that could be argued to cover everything from invasive facial recognition in public places to routine automated processes, like calculators and spellcheckers, that may process personal information.  As expressed in many CPPA public record comments, numerous stakeholders hoped the initial set of regulations would at least clarify this definition, for example by limiting it to automated technologies that could create a material impact on a person, similar to the GDPR.[28]

2.   Cases

On April 18, 2022, the Ninth Circuit reaffirmed that scraping data in bulk from public websites, such as LinkedIn profiles, likely does not violate the federal Computer Fraud and Abuse Act (“CFAA”).[29]  In reaching its conclusion, the Ninth Circuit relied on the U.S. Supreme Court’s 2021 decision in Van Buren v. United States,[30] which narrowed what constitutes a CFAA violation to only those situations in which there is unauthorized access to a computer system, that is, where authorization is required and has not been given.  The Ninth Circuit found that because a publicly available website imposes no rules or access permissions to prevent access, accessing that publicly available data cannot violate the CFAA.
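
The distinction the court drew is, at bottom, a technical one: whether an authentication gate stands between the requester and the data.  A conceptual sketch follows, assuming nothing about LinkedIn’s actual architecture and relying only on the standard HTTP semantics of the `requests` library.

```python
import requests

def fetch(url: str, session_cookie: str = "") -> str:
    """Conceptual sketch of the hiQ / Van Buren distinction.

    A truly public page returns its content to any unauthenticated
    GET; a gated page answers 401/403 unless credentials are supplied.
    """
    headers = {"Cookie": session_cookie} if session_cookie else {}
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code in (401, 403):
        # A gate exists and was not passed: this is the "unauthorized
        # access" scenario that the CFAA, after Van Buren, reaches.
        raise PermissionError("resource is gated; authorization required")
    resp.raise_for_status()
    # No gate was raised: per hiQ, collecting what the server hands to
    # every visitor is likely not "unauthorized access" under the CFAA.
    return resp.text
```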

a)   Illinois’ Biometric Information Privacy Act (“BIPA”)

In the second quarter of 2022, we again observed a stream of lawsuits alleging claims under state biometrics laws, in particular, claims relating to the use of facial recognition technology under BIPA.  Some notable developments:

The Northern District of Illinois determined that BIPA does not exclude photograph-derived facial information from its scope.[31]  In Sosa et al. v. Onfido Inc., plaintiffs alleged that a biometric software company violated BIPA through software that scans uploaded photographs, extracts biometric identifiers, and determines whether those photographs match uploaded identification cards.  While the scans were of photographs, the court found that scanning face geometry on photographs was effectively obtaining biometric identifiers because nothing in BIPA specifies that the scan must be “in person.”
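
The court’s reasoning tracks how face-matching software actually works: the “scan of face geometry” is performed on pixels, not on a live subject.  As a stand-in (the opinion does not identify Onfido’s implementation), the sketch below uses the open-source face_recognition library, whose face_encodings() reduces a face found in an ordinary photograph to a 128-dimensional geometric vector, precisely the step the court treated as collecting a biometric identifier.

```python
import face_recognition  # open-source library; a stand-in, not Onfido's code

def photos_match(selfie_path: str, id_card_path: str) -> bool:
    """Compare a selfie against an ID-card photo, as alleged in Sosa.

    Every step below operates on uploaded image files; no in-person
    capture is involved at any point.
    """
    selfie = face_recognition.load_image_file(selfie_path)
    id_card = face_recognition.load_image_file(id_card_path)
    selfie_encs = face_recognition.face_encodings(selfie)  # 128-d vectors
    id_encs = face_recognition.face_encodings(id_card)
    if not selfie_encs or not id_encs:
        return False  # no face detected in one of the images
    # compare_faces thresholds the Euclidean distance between encodings
    return bool(face_recognition.compare_faces([selfie_encs[0]],
                                               id_encs[0])[0])
```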

Several technology companies settled BIPA lawsuits relating to the use of facial recognition software.  On May 11, 2022, Clearview AI, Inc. settled a BIPA lawsuit filed in 2020 by the ACLU.[32]  Under the settlement agreement, Clearview agreed not to sell its software to most private companies or individuals in the U.S., a decision that will largely restrict its domestic use to law-enforcement agencies.  On May 25, 2022, seven plaintiffs in the consolidated class action filed against insurance technology company Lemonade Inc. were granted preliminary approval of a $4 million settlement.  The lawsuit alleged that the company collected users’ facial data between June 2019 and May 2021 without first obtaining written consent or making mandatory disclosures required by BIPA.[33]  As part of the settlement, Lemonade agreed to delete previously collected biometric data.

II.   EU ENFORCEMENT, REGULATORY & POLICY DEVELOPMENTS

1.   Digital Services Act

In July 2022, the European Parliament adopted at first reading the new Digital Services Act (“DSA”), which will require major marketplace and social media platforms to provide insight into their algorithms to regulators and to provide users with avenues to remove abusive content and disinformation.  The DSA must now go through the final stages of adoption, with an effective date of January 2024 at the earliest.[34]  The DSA will impose different obligations on four categories of online intermediaries.  The most stringent requirements apply to platforms and search engines with at least 45 million monthly active users in the EU, whether they are established inside or outside the EU.  These entities must conduct risk assessments and independent audits, adopt crisis response mechanisms, meet heightened transparency requirements, provide access to data upon request for monitoring and assessing compliance, and establish a dedicated DSA compliance function.  Accordingly, the DSA, which will be directly applicable in all 27 EU member states, will bring significant compliance obligations for large online businesses as well as increased accountability to relevant authorities.

2.   The EU Parliament Adopts Special Report on AI

In May 2022, the European Parliament adopted a special report on AI, which sets out a list of demands to secure the EU’s position in AI and points to research as one of the key means of achieving that goal.[35]  The report was developed by the Parliament’s special committee on AI and will underpin work on the upcoming AI Act.  The European Parliament’s aim is to support AI research in the EU by increasing public and private investment to €20 billion by 2030.  Policymakers believe that “with clear regulations and an investment push,” the EU can catch up with the U.S. and China in terms of AI investment, technology development, research, and attracting talent.

_______________________________

[1] U.S. Dep’t of Commerce, Press Release, U.S. Department of Commerce Appoints 27 Members to National AI Advisory Committee (Apr. 14, 2022), available at https://www.commerce.gov/news/press-releases/2022/04/us-department-commerce-appoints-27-members-national-ai-advisory.

[2] Artificial Intelligence and Automated Systems Legal Update (1Q22), available at https://www.gibsondunn.com/artificial-intelligence-and-automated-systems-legal-update-1q22/.

[3] NIST, Building the NIST AI Risk Management Framework: Workshop #2 (Apr. 19, 2022), available at https://www.nist.gov/news-events/events/2022/03/building-nist-ai-risk-management-framework-workshop-2.

[4] Fed. Trade Comm’n, FTC Report Warns About Using Artificial Intelligence to Combat Online Problems (June 16, 2022), available at https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems.

[5] Id.

[6] Fed. Trade Comm’n, Dissenting Statement of Commissioner Noah Joshua Phillips Regarding the Combatting Online Harms Through Innovation Report to Congress (June 16, 2022), available at https://www.ftc.gov/system/files/ftc_gov/pdf/Commissioner%20Phillips%20Dissent%20to%20AI%20Report%20%28FINAL%206.16.22%20noon%29_0.pdf.

[7] Letter to Hon. Lina Khan, Chair FTC (June 22, 2022), available at https://www.politico.com/f/?id=00000181-8b25-d86b-afc1-8b2d11e00000.

[8] S. 4201, 117th Cong. (2021-2022); see also Press Release, Bennet Introduces Landmark Legislation to Establish Federal Commission to Oversee Digital Platforms (May 12, 2022), available at https://www.bennet.senate.gov/public/index.cfm/2022/5/bennet-introduces-landmark-legislation-to-establish-federal-commission-to-oversee-digital-platforms.

[9] H.R. 7585, 117th Cong. (2021-2022).

[10] Cal. Ins. Comm’r, Bulletin 2022-5 (June 30, 2022), available at https://www.insurance.ca.gov/0250-insurers/0300-insurers/0200-bulletins/bulletin-notices-commiss-opinion/upload/BULLETIN-2022-5-Allegations-of-Racial-Bias-and-Unfair-Discrimination-in-Marketing-Rating-Underwriting-and-Claims-Practices-by-the-Insurance-Industry.pdf.

[11] Id.

[12] Id.

[13] EEOC, EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness (Oct. 28, 2021), available at https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness.

[14] EEOC, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022), available at https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence?utm_content=&utm_medium=email&utm_name=&utm_source=govdelivery&utm_term.

[15] EEOC v. iTutorGroup, Inc., No. 1:22-cv-02565 (E.D.N.Y. May 5, 2022).

[16] American Data Privacy and Protection Act, H.R. 8152, 117th Cong. (2022).

[17] The full text of the proposed statute is available here.

[18] Specifically, the legislation limits covered entities to collecting, processing, or transferring data based on what is “reasonably necessary and proportionate” to (1) provide or maintain a specific product or service requested by the individual to whom the data pertains, (2) deliver a communication that is reasonably anticipated by the individual recipient in the context of the individual recipient’s interactions with the covered entity, or (3) for one of the “permissible purposes” enumerated in the bill’s text.  The bill would further prohibit the collection and processing of sensitive data “except where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by an individual to whom the covered data pertains” or to effectuate one of the permitted purposes.

[19] The ADPPA would also impose requirements on relationships between covered entities and services providers and third parties, including requirements for contractual terms, and requires covered entities to implement certain accountability measures, like the appointment of data privacy and security officers.

[20] ADPPA § 2(9).

[21] ADPPA § 2(8).

[22] ADPPA § 207(c)(2).

[23] ADPPA § 207(c)(1).

[24] ADPPA § 207(c)(1)(B)(vi)(I)–(IV).

[25] ADPPA § 207(c)(5)(B).

[26] The full text of the proposed regulations is available here.

[27] Cal. Civ. Code § 1798.140(z) (emphasis added).

[28] The GDPR takes a risk-based approach, governing only automated processing “which produces legal effects concerning him or her or similarly significantly affects him or her.”  GDPR, Art. 22(1) (emphasis added).  For example, this may include loan or employment applications.

[29] hiQ Labs Inc. v. LinkedIn Corp., No. 17-16783 (9th Cir. 2022).

[30] 141 S. Ct. 1648 (2021).

[31] Sosa et al. v. Onfido Inc., No. 20-cv-4247 (N.D. Ill. 2022).

[32] ACLU v. Clearview AI, Inc., 2020 CH 04353 (Cir. Ct. Cook Cty., Ill. 2022).

[33] Prudent v. Lemonade Inc. et al., 1:21-cv-07070 (S.D.N.Y. 2022).

[34] European Commission, The Digital Services Act package, available at https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package.

[35] European Parliament, Report – A9-0088/2022, REPORT on artificial intelligence in a digital age (Apr. 5, 2022), available at https://www.europarl.europa.eu/doceo/document/A-9-2022-0088_EN.html; see further Goda Naujokaityte, Parliament gives EU a push to move faster on artificial intelligence, Science Business (May 5, 2022), available at https://sciencebusiness.net/news/parliament-gives-eu-push-move-faster-artificial-intelligence.


The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances Waldmann, Emily Lamm, and Samantha Abrams-Widdicombe.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Artificial Intelligence and Automated Systems Group, or the following authors:

H. Mark Lyon – Palo Alto (+1 650-849-5307, [email protected])
Frances A. Waldmann – Los Angeles (+1 213-229-7914, [email protected])

Please also feel free to contact any of the following practice leaders and members:

Artificial Intelligence and Automated Systems Group:

J. Alan Bannister – New York (+1 212-351-2310, [email protected])
Patrick Doris – London (+44 (0)20 7071 4276, [email protected])
Cassandra L. Gaedt-Sheckter – Co-Chair, Palo Alto (+1 650-849-5203, [email protected])
Kai Gesing – Munich (+49 89 189 33 180, [email protected])
Ari Lanin – Los Angeles (+1 310-552-8581, [email protected])
Carrie M. LeRoy – Palo Alto (+1 650-849-5337, [email protected])
H. Mark Lyon – Co-Chair, Palo Alto (+1 650-849-5307, [email protected])
Vivek Mohan – Co-Chair, Palo Alto (+1 650-849-5345, [email protected])
Alexander H. Southwell – New York (+1 212-351-3981, [email protected])
Christopher T. Timura – Washington, D.C. (+1 202-887-3690, [email protected])
Eric D. Vandevelde – Los Angeles (+1 213-229-7186, [email protected])
Michael Walther – Munich (+49 89 189 33 180, [email protected])

© 2022 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.