Artificial Intelligence and Autonomous Systems Legal Update (2Q19)

July 23, 2019


The second quarter of 2019 saw a surge in debate about the role of governance in the AI ecosystem and the gap between technological change and regulatory response. This trend manifested in particular in calls for regulation of certain “controversial” AI technologies or use cases, which in turn have emboldened lawmakers to take fledgling steps to control the scope of AI and automated systems in the public and private sectors. While it remains too soon to herald the arrival of a comprehensive federal regulatory strategy in the U.S., a number of recent high-profile draft bills address the role of AI and how it should be governed at the federal level, while state and local governments are already pressing forward with concrete legislative proposals regulating the use of AI.

As we have previously observed, over the past year lawmakers and government agencies have sought to develop AI strategies and policy that balance the tension between protecting the public from the potentially harmful effects of AI technologies and encouraging positive innovation and competitiveness.[1] Now, for the first time, we are seeing federal, state and local government agencies show a willingness to take concrete positions on that spectrum, resulting in a variety of policy approaches to AI regulation—many of which eschew informal guidance and voluntary standards in favor of outright technology bans. We should expect that high-profile or contentious AI use cases or failures will continue to generate similar public support for, and ultimately trigger, accelerated federal and state action.[2] For the most part, the trend among U.S. regulators in favor of more individualized and nuanced assessments of how best to regulate AI systems according to their end uses has been welcome. Even so, there is an inherent risk that reactive legislative responses will result in a disharmonious, fragmented national regulatory framework. In any event, from a regulatory perspective, these developments will undoubtedly yield important insights into what it means to govern and regulate AI—and whether “some regulation” is better than “no regulation”—over the coming months.

Table of Contents

I.    Key U.S. Legislative and Regulatory Developments

II.    Bias and Technology Bans

III.    Healthcare

IV.    Autonomous Vehicles

I.    Key U.S. Legislative and Regulatory Developments

As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (1Q19), the House introduced Resolution 153 in February 2019, with the intent of “[s]upporting the development of guidelines for ethical development of artificial intelligence” and emphasizing the “far-reaching societal impacts of AI” as well as the need for AI’s “safe, responsible, and democratic development.”[3] Similar to California’s adoption last year of the Asilomar Principles[4] and the OECD’s recent adoption of five “democratic” AI principles,[5] the House Resolution provides that the guidelines must be consonant with certain specified goals, including “transparency and explainability,” “information privacy and the protection of one’s personal data,” “accountability and oversight for all automated decisionmaking,” and “access and fairness.”

Moreover, on April 10, 2019, U.S. Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) introduced the “Algorithmic Accountability Act,” which “requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans.”[6] Rep. Yvette D. Clarke (D-NY) introduced a companion bill in the House.[7] The bill stands to be Congress’s first serious foray into the regulation of AI, and the first legislative attempt in the United States to regulate AI systems generally, as opposed to regulating a specific activity such as the use of autonomous vehicles. While observers have noted congressional reticence to regulate AI in past years, the bill hints at a dramatic shift in Washington’s stance amid growing public awareness of AI’s potential to create bias or harm certain groups. Although the bill still faces an uncertain future, if it is enacted, businesses would face a number of challenges, not least significant uncertainty in defining, and ultimately seeking to comply with, the proposed requirements for implementing “high risk” AI systems and utilizing consumer data, as well as the challenge of sufficiently explaining to the FTC how their AI systems operate. Moreover, the bill expressly states that it does not preempt state law—and states that have already been developing their own consumer privacy protection laws would likely object to any attempt at federal preemption—potentially creating a complex patchwork of federal and state rules.[8]

In the wake of House Resolution 153 and the Algorithmic Accountability Act, several federal strategy announcements and bills have been introduced, focusing on AI strategy, investment, fair use, and accountability.[9] While the proposed legislation remains in its early stages, the recent flurry of activity is indicative of the government’s increasingly bold engagement with technological innovation and the regulation of AI, and companies operating in this space should remain alert to both opportunities and risks arising out of federal legislative and policy developments—particularly the increasing availability of public-private partnerships—during the second half of 2019.

A.    The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update

Three years after the release of the initial National Artificial Intelligence Research and Development Strategic Plan, in June 2019 the Trump administration issued an update—previewed in the administration’s February 2019 executive order[10]—bringing forward the original seven focus areas and adding an eighth: public-private partnerships.[11] Highlighting the benefits of strategically leveraging resources, including facilities, datasets, and expertise, to advance science and engineering innovations, the update notes:

Government-university-industry R&D partnerships bring pressing real-world challenges faced by industry to university researchers, enabling “use-inspired research”; leverage industry expertise to accelerate the transition of open and published research results into viable products and services in the marketplace for economic growth; and grow research and workforce capacity by linking university faculty and students with industry representatives, industry settings, and industry jobs.

Companies interested in exploring the possibility of individual collaborations or joint programs advancing precompetitive research should consider whether they have relevant expertise in any of the areas in which federal agencies, including the DoD’s Defense Innovation Unit and the Department of Health and Human Services, are actively pursuing public-private partnerships.[12] The updated plan also highlights the progress federal agencies have made with respect to the original seven focus areas:

  • Make long-term investments in AI research.
  • Develop effective methods for human-AI collaboration.
  • Understand and address the ethical, legal and societal implications of AI.
  • Ensure the safety and security of AI systems.
  • Develop shared public datasets and environments for AI training and testing.
  • Measure and evaluate AI technologies through standards and benchmarks.
  • Better understand the national AI R&D workforce needs.

B.    NIST Federal Engagement in AI Standards

The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) is seeking public comment on a draft plan for federal government engagement in advancing AI standards for U.S. economic and national security needs (“U.S. Leadership in AI: Plan for Federal Engagement in Developing Technical Standards and Related Tools”). The plan recommends four actions: bolster AI standards-related knowledge, leadership and coordination among federal agencies; promote focused research on the “trustworthiness” of AI; support and expand public-private partnerships; and engage with international parties.[13] The draft was published on July 2, 2019 in response to the February 2019 Executive Order that directed federal agencies to take steps to ensure that the U.S. maintains its leadership position in AI.[14] The draft plan was developed with input from various stakeholders through a May 1 Request for Information,[15] a May 30 workshop[16] and federal agency review.

C.    AI in Government Act

House Bill 2575 and its corresponding bipartisan Senate Bill 3502 (the “AI in Government Act”)—which would task federal agencies with exploring the implementation of AI in their functions and establishing an “AI Center of Excellence”—were first introduced in September 2018 and reintroduced in May 2019.[17] The center would be directed to “study economic, policy, legal, and ethical challenges and implications related to the use of artificial intelligence by the Federal Government” and “establish best practices for identifying, assessing, and mitigating any bias on the basis of any classification protected under Federal non-discrimination laws or other negative unintended consequence stemming from the use of artificial intelligence systems.”

One of the sponsors of the bill, Senator Brian Schatz (D-HI), stated that “[o]ur bill will bring agencies, industry, and others to the table to discuss government adoption of artificial intelligence and emerging technologies. We need a better understanding of the opportunities and challenges these technologies present for federal government use and this legislation would put us on the path to achieve that goal.”[18] Although the bill is aimed at improving the implementation of AI by the federal government, there are likely to be opportunities for industry stakeholders to participate in discussions surrounding best practices.

D.    Artificial Intelligence Initiative Act

On May 21, 2019, U.S. Senators Rob Portman (R-OH), Martin Heinrich (D-NM), and Brian Schatz (D-HI) proposed legislation that would allocate $2.2 billion over five years to a comprehensive national AI strategy, accelerating research and development to keep pace with other global economic powers such as China, Japan, and Germany.[19] S. 1558 (the “Artificial Intelligence Initiative Act”) would create three new bodies: a National AI Coordination Office (to coordinate legislative efforts), a National AI Advisory Committee (consisting of experts on a wide range of AI matters), and an Interagency Committee on AI (to coordinate federal agency activity relating to research and education on AI).[20] The bill would also establish a National AI Research and Development Initiative to identify and minimize “inappropriate bias” in data sets and algorithms. Of particular interest to businesses, the bill would require NIST to identify metrics for establishing standards to evaluate AI algorithms and their effectiveness, as well as the quality of training data sets. Moreover, the bill would require the Department of Energy to create an AI research program, building state-of-the-art computing facilities that will be made available to private sector users on a cost-recovery basis.[21]

The draft legislation complements the formation of the bipartisan Senate AI Caucus in March 2019 by Senators Heinrich and Portman to address transformative technology with implications spanning a number of fields including transportation, health care, agriculture, manufacturing, and national security.[22]

E.    FinTech

We have reported previously on the rapid adoption of AI by government agencies in relation to financial services.[23] On May 9, 2019, Rep. Maxine Waters (D-CA) announced that the House Committee on Financial Services would launch two task forces focused on financial technology (“fintech”) and AI:[24] a fintech task force that will focus on regulation of the fintech sector, and an AI task force that will focus on machine learning in financial services and regulation, emerging risks in algorithms and big data, combatting fraud and digital identification technologies, and the impact of automation on jobs in financial services.[25] At a recent hearing of the fintech task force, participants discussed selected issues facing certain U.S. and international regulatory agencies, as well as certain regulators’ efforts to engage with stakeholders in the fintech industry in order to consolidate and clarify communications and inform policy.[30]

II.    Bias and Technology Bans

As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (1Q19), the topic of bias in AI decision-making has been at the forefront of policy discussions relating to the private sector for some time, and the deep learning community has responded with a wave of investments and initiatives focusing on processes designed to assess and mitigate bias and disenfranchisement[26] at risk of becoming “baked in and scaled” by AI systems.[27] Such discussions are becoming more urgent and nuanced as increasingly available AI decision-making tools allow government decisions to be delegated to algorithms to improve accuracy and drive objectivity, directly impacting democracy and governance.[28] Over the past several months, we have seen those discussions evolve into tangible and impactful regulations in the data privacy space and, notably, several outright technology bans.[29]

A.    Slew of New Regulations on Facial Recognition Technology Proposed by Government Agencies

State and Local Regulation. Biometric surveillance, in particular “facial recognition technology,” has emerged as a lightning rod for public debate over the risks of improper algorithmic bias and threats to data privacy. Amid widespread fears that the current state of the technology is not sufficiently accurate or reliable to avoid discrimination, particularly in law enforcement, regulators have seized the opportunity to act in the AI space—proposing and passing outright bans on the use of facial recognition technology with no margin for discretion or use case testing.

There is gathering momentum to ban facial recognition technology in the Bay Area. On May 14, by a vote of 8-1, the San Francisco Board of Supervisors passed a first-of-its-kind regulation banning the use of facial recognition technology by city police and other government departments—one of the strictest U.S. laws to date governing machine learning and AI technologies.[31] The “Stop Secret Surveillance” ordinance amends San Francisco’s Administrative Code to require that city departments seeking to acquire or use surveillance technologies first submit certain reports and obtain purchase-and-use approvals from the Board of Supervisors. The purpose behind the ordinance was to eliminate the possibility that law enforcement or other public officials would secretly deploy surveillance technology without first presenting it to the Board and its public hearing mechanisms. However, with regard to one machine learning-driven technology, facial recognition, the ordinance goes a step further and completely bans any use of any form of the technology by all city and county departments, including the San Francisco Police Department. In imposing the ban, the Board of Supervisors appears to have been influenced particularly by reports of recent repressive uses of AI by other governments, as well as by more general concerns regarding bias and discrimination.[32] The ordinance’s sponsor, Supervisor Aaron Peskin, was quoted as saying that it is a “fact” that facial recognition technology “has the biases of the people who developed it.”[33]

Most recently, the city of Oakland passed a similar ordinance prohibiting police from acquiring facial recognition software or using it at all, including technology deployed by other police agencies—with city officials citing as reasons for the ban the limitations of the technology, the lack of standards around its implementation, and its potential use in the persecution of minorities. The Berkeley City Council is currently considering a similar measure. Oakland police had argued that the council should continue to allow the use of facial recognition software and ban only real-time applications, such as scanning an active surveillance camera for wanted suspects, but the council opted to pass the more restrictive language.[34]

However, it is not just the Bay Area that is at odds with facial recognition software. In June 2019, the city of Somerville, Massachusetts, joined San Francisco by voting unanimously to ban its municipal government from using and retaining “face surveillance” technology, defined as technology able to automatically detect someone’s identity based on their face.[35] The New York State legislature[36] is also considering a ban on facial recognition technology, and a bill pending in the Massachusetts state legislature would place a moratorium on facial recognition and other remote biometric surveillance systems.[37] Meanwhile, back in the Golden State, the California State Senate is considering a similar ban on biometric surveillance software for police body cameras.[38] A.B. 1215 characterizes the use of facial recognition as the “functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights,” posing “unique and significant threats” to civil rights and liberties and potentially chilling the exercise of free speech in public places. A companion bill, A.B. 1281, would require “clear and conspicuous” signage and disclosures by California businesses of any use of facial recognition.[39] Both bills have passed the Assembly and Senate committee and now await a floor vote in the Senate. Given their relative ease of passage to date, and the increasing momentum in favor of stringent regulation of facial recognition technology in California, both bills are likely to become law in due course.

Federal Regulation. As state and local government activity intensifies, the federal government has also indicated a willingness to consider a nationwide ban on facial recognition technology. A bill introduced in Congress in March (S. 847, the “Commercial Facial Recognition Privacy Act of 2019”) would ban commercial users of facial recognition technology from collecting and sharing data to identify or track consumers without their consent, although it does not address government use of the technology.[40] With few exceptions, the bill would require facial recognition technology available online to be made accessible for independent third-party testing “for accuracy and bias.” The bill remains pending and has been referred to the Committee on Commerce, Science, and Transportation. In the meantime, the House Committee on Oversight and Reform has held several hearings on transparency in government use cases, at which Committee members voiced strong bipartisan support for bringing transparency and accountability to the use of facial recognition technology.[41]

Increasingly, therefore, there is support at the federal, state and local levels for stringent regulation of facial recognition technology, with a growing number of lawmakers and governing bodies in favor of prohibiting use of the technology altogether while a broader regulatory approach develops and the technology evolves.[42] This tentative consensus, which forecloses the technology’s positive use cases and discounts the possibility of more limited restrictions while the technology improves to eliminate existing flaws and biases, stands in stark contrast to the generally permissive approach taken to date toward the development of AI systems in the private sector. We will continue to monitor legislative and policy developments relating to technology bans at the state and (possibly) federal levels, and stand ready to assist companies in assessing the impact of these and similar regulations on biometric surveillance technologies in the private sector.

B.    California State Assembly Passes “Anti-Eavesdropping Act” Seeking to Regulate Smart Home Speakers

On May 29, the California State Assembly passed a bill (A.B. 1395) that would require manufacturers of ambient-listening devices, such as smart speakers, to obtain user consent before retaining voice recordings, and would ban manufacturers from sharing command recordings with third parties. The bill is currently being considered by the State Senate.[43] Companies that manufacture smart devices that record commands by default and use the data to train their automated systems should pay close attention to developments in this space.

C.    Utah Warrant Bill Limits Police Authority to Access Electronic Data

A new Utah law, House Bill 57, requires police to obtain a warrant before gaining access to a person’s electronic data, and could have implications far beyond law enforcement, including for how employers and big tech companies respond to police demands for data.[44] The law makes individuals the owners of their data, rather than the companies they work for or the digital platforms to which they entrust it, and additionally requires law enforcement to “destroy in an unrecoverable manner” the data it obtains “as soon as reasonably possible after the electronic information or data is collected.” H.B. 57 went into effect in May 2019.[45] The law reflects a legislative recognition of individual privacy rights, and we will continue to watch closely the extent to which its approach is replicated in other state legislatures.

D.    Illinois Passes Artificial Intelligence Video Interview Act

In May 2019, the Illinois legislature unanimously passed H.B. 2557 (the “Artificial Intelligence Video Interview Act”), which requires employers to notify candidates in writing that AI will be used to assess their interview, explain what elements the AI system will look for (such as the applicant’s facial expressions and fitness for the position), and secure the applicant’s written consent to proceed. Illinois employers using such software will need to consider carefully how they are addressing the risk of AI-driven bias in their current operations.[46] It remains to be seen whether other states will adopt similar requirements, but as noted above, a major challenge for companies subject to such laws will be explaining to regulators how their AI assessments work and how the criteria are ultimately used in any decision-making process.

III.    Healthcare

A.    FDA Releases White Paper Outlining a Potential Regulatory Framework for Software as a Medical Device (SaMD) That Leverages AI

The rapidly increasing use of artificial intelligence in the healthcare industry has not gone unnoticed by regulators. On April 2, 2019, the Food and Drug Administration (“FDA”) published an exploratory white paper proposing a framework for regulating artificial intelligence/machine learning (“AI/ML”)-based software as a medical device (“SaMD”).[47]

Although not itself a regulation, the FDA white paper previews what future regulation of SaMD might look like. It highlights the challenges that AI/ML-based software poses to the traditional medical device regulatory framework: as the FDA has acknowledged, AI products with algorithms that continually adapt based on new data are not well suited to the current regulatory paradigm, under which significant software modifications require a new pre-market submission prior to marketing. Through the application of learning algorithms, SaMD may undergo rapid, if not constant, change. The white paper distinguishes three types of modifications to AI/ML-based SaMD and describes how each might fit within the framework for evaluating device modifications: (1) modifications to clinical and analytical performance; (2) modifications to the inputs used by the algorithm and their clinical association with the SaMD output; and (3) modifications to the intended use of the SaMD. The FDA also identifies four principles for AI/ML “learning algorithms,” including the use of an algorithm change protocol that might serve as an alternative to the nascent “Pre-Certification” pilot.

In response to the FDA’s invitation for comments, the American Medical Informatics Association stated that “[p]roperly regulating AI and machine learning-based SaMD will require ongoing dialogue between FDA and stakeholders.”[48] The group also recommended that the FDA develop guidance on how and how often developers of SaMD-based products should test those products for algorithm-driven biases. However, it remains to be seen to what extent the FDA will be influenced by calls for more specific guidance and demands for stakeholder input. Companies operating in this space should consider seeking out advice regarding the evaluation of SaMD devices by the FDA.

In practice, the FDA is already regulating medical devices that rely on AI. In 2017, the FDA began a precertification pilot program for makers of software applications that function as medical devices. Under this pilot program, the FDA cleared the Apple Watch Series 4 for two new applications that allow the device to perform an electrocardiogram on the user and to detect and alert users to irregular heartbeats.[49] The FDA has also recently approved other medical devices that rely on AI software. For example, in mid-May, the FDA approved an AI-based chest X-ray triage product.[50] And in early June, it was reported that the FDA had cleared an AI tool used by radiologists to triage cervical spine fractures.[51]

IV.    Autonomous Vehicles

Federal regulation of autonomous vehicles continues to falter in the new Congress, as measures like the Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution (“SELF DRIVE”) Act (H.R. 3388) and the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (“AV START”) Act (S. 1885) have not been reintroduced since expiring with the close of the 115th Congress last December.[52] On the other hand, several federal agencies have announced proposed rulemakings to facilitate the introduction of autonomous vehicles onto public roads. And while federal regulation lags, legislative activity at the state and local levels is stepping up to advance the integration of autonomous vehicles into the national transportation system and local infrastructure.

A.    Federal Regulatory Activity

On May 28, in the wake of a petition filed by General Motors requesting temporary exemption from Federal Motor Vehicle Safety Standards (“FMVSSs”) that require manual controls or impose requirements specific to a human driver,[53] the National Highway Traffic Safety Administration (“NHTSA”) announced that it was seeking comments on the possibility of removing “regulatory barriers” to the introduction of automated vehicles in the United States (Docket No. NHTSA-2019-0036).[54] In addition, the Federal Motor Carrier Safety Administration (“FMCSA”) issued a request for comments on proposed rulemaking regarding Federal Motor Carrier Safety Regulations that may need to be reconsidered for Automated Driving System-Dedicated Vehicles (“ADS-DVs”) (Docket No. FMCSA-2018-0037).[55]

Thus far, the comments submitted generally support GM’s petition for temporary exemption and the removal of regulatory barriers to the compliance certification of ADS-DVs. Some commenters have raised concerns that the petition contains insufficient information to establish safety equivalence between traditionally operated vehicles and ADS-DVs, and that ADS-DVs may be unable to operate safely in unexpected and emergency situations. Nevertheless, it is likely that NHTSA will grant petitions for temporary exemption to facilitate the development of ADS technology, contingent on extensive data-sharing requirements and a narrow geographic scope of operation.

We anticipate that regulatory changes to testing procedures (including pre-programmed execution, simulation, use of external controls, use of a surrogate vehicle with human controls, and technical documentation) and modifications to current FMVSSs (such as crashworthiness, crash avoidance, and indicator standards) will be finalized in 2021. In the meantime, we encourage our clients to contact us if they would like further information or assistance in developing and submitting comments, which will be accepted by NHTSA until July 29, 2019, and by FMCSA until August 26, 2019.

B.    State Regulatory Activity

State regulatory activity has continued to accelerate, adding to the already complex patchwork of regulations that apply to companies manufacturing and testing autonomous vehicles. We encourage our clients to contact us for assistance in considering the impact of state and local laws with regard to testing, standards and certification.

On April 12, 2019, the California DMV published proposed autonomous vehicle regulations that would allow the testing and deployment of autonomous motor trucks (delivery vehicles) weighing less than 10,001 pounds on California’s public roads.[56] The DMV held a public hearing on May 30, 2019, at its Sacramento headquarters to gather input and discuss the regulations.[57] The DMV’s regulations continue to exclude the autonomous testing or deployment of vehicles weighing 10,001 pounds or more.[58] In the California legislature, two new bills related to autonomous vehicles have been introduced: S.B. 59[59] would establish a working group on autonomous passenger vehicle policy development, while S.B. 336[60] would require transit operators to ensure that certain automated transit vehicles are staffed by employees.

On June 13, 2019, Florida Governor Ron DeSantis signed C.S./H.B. 311 (“Autonomous Vehicles”) into law, effective July 1.[61] The law establishes a statewide statutory framework, permits fully automated vehicles to operate on public roads, and removes obstacles that hinder the development of self-driving cars.[62]

In Washington, Governor Jay Inslee signed into law H.B. 1325, a measure that will create a regulatory framework for personal delivery devices (“PDDs”) that deliver property via sidewalks and crosswalks (e.g., wheeled robots).[63] Washington is now the eighth U.S. state to permit the use of delivery bots in public locations, joining Virginia, Idaho, Wisconsin, Florida, Ohio, Utah, and Arizona. In Oklahoma, Governor Kevin Stitt recently signed legislation (S.B. 365) barring city and county governments from regulating autonomous vehicles, leaving such legislation entirely in the hands of state and federal lawmakers.[64] Pennsylvania, which last year passed legislation creating a commission on “highly automated vehicles,” has proposed a bill that would authorize the use of an autonomous shuttle vehicle on a route approved by the Pennsylvania Department of Transportation (H.B. 1078).[65]

C.    Task Forces and Organization Committees

Outside of government, organizations are working with industry leaders to help set rules for this constantly innovating industry. One such recent partnership was a panel of experts, hosted by State Farm and the Governors Highway Safety Association (“GHSA”), focused on developing recommendations and guidelines on state safety programs.[66] Following the meeting, GHSA will develop a white paper to inform its members and all traffic safety stakeholders of the outcomes of the expert panel and present its recommendations at GHSA’s 2019 Annual Meeting in Anaheim, California, August 24–28.

In New York, the New York State Bar Association recently appointed Gibson Dunn partner Alex Southwell to its Task Force on Autonomous Vehicles and the Law.[67] The Association has described this “groundbreaking initiative” as one that will “study and understand the seismic impact that autonomous vehicles will have on our legal system and society, and make recommendations on how New York State and its legal institutions can prepare for this revolutionary technological change.”[68]

______________________

[1] For more information, please see our Artificial Intelligence and Autonomous Systems Legal Update (4Q18); see also Ahmed Baladi, Gibson, Dunn & Crutcher LLP, Can GDPR Hinder AI Made in Europe? Cybersecurity Law Report (July 10, 2019), available at https://www.gibsondunn.com/wp-content/uploads/2019/07/Baladi-Can-GDPR-Hinder-AI-Made-in-Europe-Cybersecurity-Law-Report-10-07-19.pdf.

[2] See, for example, the House Intelligence Committee’s hearing on Deepfakes and AI on June 13, 2019 (U.S. House of Representatives, Permanent Select Committee on Intelligence, Press Release: House Intelligence Committee To Hold Open Hearing on Deepfakes and AI (June 7, 2019)); see also Makena Kelly, Congress grapples with how to regulate deepfakes, The Verge (June 13, 2019), available at https://www.theverge.com/2019/6/13/18677847/deep-fakes-regulation-facebook-adam-schiff-congress-artificial-intelligence.

[3] H.R. Res. 153, 116th Cong. (1st Sess. 2019).

[4] Assemb. Con. Res. 215, Reg. Sess. 2018-2019 (Cal. 2018) (enacted) (expressing the support of the legislature for the “Asilomar AI Principles”—a set of 23 principles developed through a collaboration between AI researchers, economists, legal scholars, ethicists and philosophers that met in Asilomar, California, in January 2017 and categorized into “research issues,” “ethics and values,” and “longer-term issues” designed to promote the safe and beneficial development of AI—as “guiding values for the development of artificial intelligence and of related public policy”).

[5] OECD Principles on AI (May 22, 2019) (stating that AI systems should benefit people, be inclusive, transparent, and safe, and their creators should be accountable), available at http://www.oecd.org/going-digital/ai/principles/.

[6] Press Release, Cory Booker, Booker, Wyden, Clarke Introduce Bill Requiring Companies To Target Bias In Corporate Algorithms (Apr. 10, 2019), available at https://www.booker.senate.gov/?p=press_release&id=903; see also S. Res. __, 116th Cong. (2019).

[7] H.R. 2231, 116th Cong. (1st Sess. 2019).

[8] See Byungkwon Lim et al., A Glimpse into the Potential Future of AI Regulation, Law360 (April 10, 2019), available at https://www.law360.com/articles/1158677/a-glimpse-into-the-potential-future-of-ai-regulation.

[9] State legislatures have also recently weighed in on AI policy. In March 2019, Senator Ling Ling Chang (R-CA) introduced Senate Joint Resolution 6, urging the president and Congress to develop a comprehensive AI advisory committee and to adopt a comprehensive AI policy, S.J. Res. 6, Reg. Sess. 2019–2020 (Cal. 2019); in Washington State, House Bill 1655 was introduced in February 2019, seeking to “protect consumers, improve transparency, and create more market predictability” by establishing guidelines for government procurement and use of automated decision systems, H.B. 1655, 66th Leg., Reg. Sess. 2019 (Wash. 2019).

[10] For more information, please see our client alert President Trump Issues Executive Order on “Maintaining American Leadership in Artificial Intelligence.”

[11] Exec. Office of the U.S. President, The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (June 2019), available at https://www.whitehouse.gov/wp-content/uploads/2019/06/National-AI-Research-and-Development-Strategic-Plan-2019-Update-June-2019.pdf.

[12] Id. at 42.

[13] NIST, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools – Draft for Public Comment (July 2, 2019), available here.

[14] Supra note 10.

[15] NIST, NIST Requests Information on Artificial Intelligence Technical Standards and Tools (May 1, 2019), available at https://www.nist.gov/news-events/news/2019/05/nist-requests-information-artificial-intelligence-technical-standards-and.

[16] NIST, Federal Engagement in Artificial Intelligence Standards Workshop (May 30, 2019), available at https://www.nist.gov/news-events/events/2019/05/federal-engagement-artificial-intelligence-standards-workshop.

[17] H.R. 2575, 116th Cong. (2019-2020); S. 3502 – AI in Government Act of 2018, 115th Cong. (2017-2018).

[18] Press Release, Senator Brian Schatz, Schatz, Gardner Introduce Legislation To Improve Federal Government’s Use Of Artificial Intelligence (September 2018), available at https://www.schatz.senate.gov/press-releases/schatz-gardner-introduce-legislation-to-improve-federal-governments-use-of-artificial-intelligence; see also Tajha Chappellet-Lanier, Artificial Intelligence in Government Act is back, with ‘smart and effective’ use on senators’ minds, FedScoop (May 8, 2019), available at https://www.fedscoop.com/artificial-intelligence-in-government-act-returns.

[19] S. 1558 – Artificial Intelligence Initiative Act, 116th Cong. (2019-2020); see also Khari Johnson, U.S. Senators propose legislation to fund national AI strategy, VentureBeat (May 21, 2019), available at https://venturebeat.com/2019/05/21/u-s-senators-propose-legislation-to-fund-national-ai-strategy.

[20] Matthew U. Scherer, Michael J. Lotito & James A. Paretti, Jr., Bipartisan Bill Would Create Artificial Intelligence Strategy for U.S. Workforce, Lexology (May 30, 2019), available at https://www.lexology.com/library/detail.aspx?g=857d902f-e7a0-412b-878b-b1fb149da745.

[21] Press Release, Senator Martin Heinrich, Heinrich, Portman, Schatz Propose National Strategy For Artificial Intelligence; Call For $2.2 Billion Investment In Education, Research & Development (May 21, 2019), available at https://www.heinrich.senate.gov/press-releases/heinrich-portman-schatz-propose-national-strategy-for-artificial-intelligence-call-for-22-billion-investment-in-education-research-and-development.

[22] Press Release, Senator Martin Heinrich, Heinrich, Portman Launch Bipartisan Artificial Intelligence Caucus (Mar. 13, 2019), available at https://www.heinrich.senate.gov/press-releases/heinrich-portman-launch-bipartisan-artificial-intelligence-caucus.

[23] For more information, please see our Artificial Intelligence and Autonomous Systems Legal Update (4Q18).

[24] Katie Grzechnik Neill, Rep. Waters Announces Task Forces on Fintech and Artificial Intelligence (May 13, 2019), available at https://www.insidearm.com/news/00045030-rep-waters-announces-all-democrat-task-fo.

[25] See Scott Likens, How Artificial Intelligence Is Already Disrupting Financial Services, Barrons (May 16, 2019), available at https://www.barrons.com/articles/how-artificial-intelligence-is-already-disrupting-financial-services-51558008001.

[26] See also Kalev Leetaru, Why Do We Fix AI Bias But Ignore Accessibility Bias? Forbes (July 6, 2019), available at https://www.forbes.com/sites/kalevleetaru/2019/07/06/why-do-we-fix-ai-bias-but-ignore-accessibility-bias/#55e7c777902d; Alina Tugend, Exposing the Bias Embedded in Tech, NY Times (June 17, 2019), available at https://www.nytimes.com/2019/06/17/business/artificial-intelligence-bias-tech.html.

[27] Jake Silberg & James Manyika, Tackling Bias in Artificial Intelligence (and in Humans), McKinsey Global Institute (June 2019), available at https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans.

[28] Nicol Turner Lee, Paul Resnick & Genie Barton, Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms, Brookings Institute (May 22, 2019), available at https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

[29] See also the French government’s recent law, encoded in Article 33 of the Justice Reform Act, prohibiting anyone—especially legal tech companies focused on litigation prediction and analytics—from publicly revealing the pattern of judges’ behavior in relation to court decisions; see France Bans Judge Analytics, 5 Years In Prison For Rule Breakers, Artificial Lawyer (June 4, 2019), available at https://www.artificiallawyer.com/2019/06/04/france-bans-judge-analytics-5-years-in-prison-for-rule-breakers/.

[30] U.S. H.R. Comm. on Fin. Servs., Overseeing the Fintech Revolution: Domestic and International Perspectives on Fintech Regulation (June 25, 2019), at 4, available at https://docs.house.gov/meetings/BA/BA00/20190625/109733/HHRG-116-BA00-20190625-SD002.pdf.

[31] Kate Conger, Richard Fausset & Serge F. Kovaleski, San Francisco Bans Facial Recognition Technology, NY Times (May 14, 2019), available at https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html.

[32] For a detailed discussion of the impact of AI technology bans, see H. Mark Lyon, Gibson, Dunn & Crutcher LLP, Before We Regulate, Daily Journal (June 26, 2019), available at https://www.gibsondunn.com/wp-content/uploads/2019/07/Lyon-Before-We-Regulate-Daily-Journal-6-26-19.pdf.

[33] Sara Gaiser, San Francisco Supervisor Aaron Peskin Proposes Citywide Ban on Facial Recognition Technology, SF Examiner (Jan. 29, 2019), available at https://www.sfexaminer.com/news/san-francisco-supervisor-aaron-peskin-proposes-citywide-ban-on-facial-recognition-technology/.

[34] CBS, Oakland Officials Take Step Towards Banning City Use Of Facial Recognition Tech, SF Chronicle (July 16, 2019), available at https://sanfrancisco.cbslocal.com/2019/07/16/oakland-officials-take-step-towards-banning-city-use-of-facial-recognition-tech/.

[35] City of Somerville, Mass., Ordinance 208142 (June 27, 2019).

[36] S.B. 5687, 2019–2020 Reg. Sess. (N.Y. 2019) (proposing a ban on the use of a facial recognition system by a landlord on any residential premises).

[37] S.B. 1385, 191st Leg., 2019–2020 Reg. Sess. (Mass. 2019).

[38] A.B. 1215, 2019–2020 Reg. Sess. (Cal. 2019).

[39] A.B. 1281, 2019–2020 Reg. Sess. (Cal. 2019).

[40] S. 847, 116th Cong. (1st Sess. 2019).

[41] U.S. H.R. Comm. on Oversight and Reform, Facial Recognition Technology (Part II): Ensuring Transparency in Government Use (June 4, 2019), available at https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-ii-ensuring-transparency-in-government-use.

[42] Colin Lecher, Congress faces ‘hard questions’ on facial recognition as activists push for ban, The Verge (July 10, 2019), available at https://www.theverge.com/2019/7/10/20688932/congress-facial-recognition-hearing-ban.

[43] A.B. 1395, 2019–2020 Reg. Sess. (Cal. 2019).

[44] Allison Grande, Utah Warrant Bill Raises Stakes For Cops’ Digital Data Grabs, Law360 (Apr. 23, 2019), available at https://www.law360.com/articles/1151791/utah-warrant-bill-raises-stakes-for-cops-digital-data-grabs.

[45] H.B. 57, 2019–2020 Reg. Sess. (Utah 2019).

[46] H.B. 2557, 2019–2020 Reg. Sess. (Ill. 2019) (101st Gen. Assembly).

[47] See regulations.gov, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback (Apr. 2, 2019), available at https://www.regulations.gov/document?D=FDA-2019-N-1185-0001.

[48] Greg Slabodkin, AMIA calls on FDA to refine its AI regulatory framework, HealthData Management (June 11, 2019), available at https://www.healthdatamanagement.com/news/amia-calls-on-fda-to-refine-its-ai-regulatory-framework.

[49] Gopal Ratnam, FDA grapples with AI medical devices, Roll Call (May 7, 2019), available at https://www.rollcall.com/news/policy/fda-grapples-with-living-medical-devices.

[50] Tova Cohen, Israel’s Zebra Medical gets FDA ok for AI chest X-ray product, Reuters (May 13, 2019), available at https://www.reuters.com/article/us-healthcare-zebra-medical-regulator/israels-zebra-medical-gets-fda-ok-for-ai-chest-x-ray-product-idUSKCN1SJ0LI.

[51] Tova Cohen, Israel’s Aidoc gets third FDA nod for AI tools for radiologists, Reuters (June 11, 2019), available at https://www.reuters.com/article/us-tech-aidoc-regulations/israels-aidoc-gets-third-fda-nod-for-ai-tools-for-radiologists-idUSKCN1TC1IN.

[52] For more information, please see our Artificial Intelligence and Autonomous Systems Legal Update (4Q18).

[53] General Motors, LLC – Receipt of Petition for Temporary Exemption from Various Requirements of the Safety Standards for an All Electric Vehicle with an Automated Driving System, 84 Fed. Reg. 10,182.

[54] Removing Regulatory Barriers for Vehicles With Automated Driving Systems, 84 Fed. Reg. 24,433 (May 28, 2019) (to be codified at 49 C.F.R. 571); see also Removing Regulatory Barriers for Vehicles with Automated Driving Systems, 83 Fed. Reg. 2607, 2607 (proposed March 5, 2018) (to be codified at 49 CFR 571).

[55] Safe Integration of Automated Driving Systems-Equipped Commercial Motor Vehicles, 84 Fed. Reg. 24,449 (May 28, 2019).

[56] State of California Department of Motor Vehicles, Autonomous Light-Duty Motor Trucks (Delivery Vehicles), available at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/bkgd.

[57] Id.

[58] Id.

[59] S.B. 59, 2019–2020 Reg. Sess. (Cal. 2019).

[60] S.B. 336, 2019–2020 Reg. Sess. (Cal. 2019).

[61] Governor Ron DeSantis Signs CS/HB 311: Autonomous Vehicles (June 13, 2019), available at https://www.flgov.com/2019/06/13/governor-ron-desantis-signs-cs-hb-311-autonomous-vehicles/.

[62] See Taylor Garre, Florida Gives the Green Light to Fully Autonomous Vehicles, Cheddar (June 19, 2019), available at https://cheddar.com/media/florida-autonomous-vehicles-self-driving-cars-senator-brandes.

[63] 2019 Wash. Sess. Laws, Ch. 214.

[64] S.B. 365, 57th Leg., Reg. Sess. (Okla. 2019).

[65] H.B. 1078, 2019–2020 Reg. Sess. (Pa. 2019).

[66] Press Release, Governors Highway Safety Association, Traffic Safety Recommendations for Automated Vehicles Being Developed (Apr. 29, 2019), available at https://www.ghsa.org/resources/news-releases/av-panel19.

[67] Alexander Southwell Appointed to New York State Bar Association’s Task Force of Autonomous Vehicles and the Law, Gibson Dunn (June 3, 2019), available at https://www.gibsondunn.com/alexander-southwell-appointed-to-new-york-state-bar-associations-task-force-of-autonomous-vehicles-and-the-law/.

[68] Id.


The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances A. Waldmann, Claudia Barrett, Oliver Fong and Arjun Rangarajan.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Artificial Intelligence and Automated Systems Group, or the following authors:

H. Mark Lyon – Palo Alto (+1 650-849-5307, [email protected])
Frances A. Waldmann – Los Angeles (+1 213-229-7914, [email protected])

Please also feel free to contact any of the following practice group members:

Artificial Intelligence and Automated Systems Group:
H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, [email protected])
J. Alan Bannister – New York (+1 212-351-2310, [email protected])
Lisa A. Fontenot – Palo Alto (+1 650-849-5327, [email protected])
David H. Kennedy – Palo Alto (+1 650-849-5304, [email protected])
Ari Lanin – Los Angeles (+1 310-552-8581, [email protected])
Robson Lee – Singapore (+65 6507 3684, [email protected])
Carrie M. LeRoy – Palo Alto (+1 650-849-5337, [email protected])
Alexander H. Southwell – New York (+1 212-351-3981, [email protected])
Eric D. Vandevelde – Los Angeles (+1 213-229-7186, [email protected])
Michael Walther – Munich (+49 89 189 33 180, [email protected])

© 2019 Gibson, Dunn & Crutcher LLP
Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.