
July 23, 2019 |
Artificial Intelligence and Autonomous Systems Legal Update (2Q19)

The second quarter of 2019 saw a surge in debate about the role of governance in the AI ecosystem and the gap between technological change and regulatory response. This trend was manifested in particular by calls for regulation of certain “controversial” AI technologies or use cases, in turn increasingly empowering lawmakers to take fledgling steps to control the scope of AI and automated systems in the public and private sectors. While it remains too soon to herald the arrival of a comprehensive federal regulatory strategy in the U.S., there have been a number of recent high-profile draft bills addressing the role of AI and how it should be governed at the federal level, while state and local governments are already pressing forward with concrete legislative proposals regulating the use of AI. As we have previously observed, over the past year lawmakers and government agencies have sought to develop AI strategies and policy with the aim of balancing the tension between protecting the public from the potentially harmful effects of AI technologies and encouraging positive innovation and competitiveness.[1] Now, for the first time, we are seeing federal, state and local government agencies show a willingness to take concrete positions on that spectrum, resulting in a variety of policy approaches to AI regulation—many of which eschew informal guidance and voluntary standards in favor of outright technology bans. We should expect that high-profile or contentious AI use cases or failures will continue to generate similar public support for, and ultimately trigger, accelerated federal and state action.[2] For the most part, the trend among U.S. regulators in favor of more individual and nuanced assessments of how best to regulate AI systems specific to their end uses has been welcome. Even so, there is an inherent risk that reactionary legislative responses will result in a disharmonious, fragmented national regulatory framework. In any event, from a regulatory perspective, these developments will undoubtedly yield important insights into what it means to govern and regulate AI—and whether “some regulation” is better than “no regulation”—over the coming months.

Table of Contents
I. Key U.S. Legislative and Regulatory Developments
II. Bias and Technology Bans
III. Healthcare
IV. Autonomous Vehicles

I. Key U.S. Legislative and Regulatory Developments

As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (1Q19), the House introduced Resolution 153 in February 2019, with the intent of “[s]upporting the development of guidelines for ethical development of artificial intelligence” and emphasizing the “far-reaching societal impacts of AI” as well as the need for AI’s “safe, responsible, and democratic development.”[3] Similar to California’s adoption last year of the Asilomar Principles[4] and the OECD’s recent adoption of five “democratic” AI principles,[5] the House Resolution provides that the guidelines must be consonant with certain specified goals, including “transparency and explainability,” “information privacy and the protection of one’s personal data,” “accountability and oversight for all automated decisionmaking,” and “access and fairness.” Moreover, on April 10, 2019, U.S.
Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) introduced the “Algorithmic Accountability Act,” which “requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans.”[6] Rep. Yvette D. Clarke (D-NY) introduced a companion bill in the House.[7] The bill stands to be the United States Congress’s first serious foray into the regulation of AI and the first legislative attempt in the United States to impose regulation on AI systems in general, as opposed to regulating a specific activity, such as the use of autonomous vehicles. While observers have noted congressional reticence to regulate AI in past years, the bill hints at a dramatic shift in Washington’s stance amid growing public awareness of AI’s potential to create bias or harm certain groups. Although the bill still faces an uncertain future, if it is enacted, businesses would face a number of challenges, not least significant uncertainty in defining and, ultimately, seeking to comply with the proposed requirements for implementing “high risk” AI systems and utilizing consumer data, as well as the challenges of sufficiently explaining to the FTC the operation of their AI systems. Moreover, the bill expressly states that it does not preempt state law—and states that have already been developing their own consumer privacy protection laws would likely object to any attempts at federal preemption—potentially creating a complex patchwork of federal and state rules.[8]

In the wake of House Resolution 153 and the Algorithmic Accountability Act, several strategy announcements and federal bills have been introduced, focusing on AI strategy, investment, fair use and accountability.[9] While the proposed legislation remains in its early stages, the recent flurry of activity is indicative of the government’s increasingly bold engagement with technological innovation and the regulation of AI, and companies operating in this space should remain alert to both opportunities and risks arising out of federal legislative and policy developments—particularly the increasing availability of public-private partnerships—during the second half of 2019.

A. The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update

Three years after the release of the initial National Artificial Intelligence Research and Development Strategic Plan, in June 2019 the Trump administration issued an update—previewed in the administration’s February 2019 executive order[10]—bringing forward the original seven focus areas and adding an eighth: public-private partnerships.[11] Highlighting the benefits of strategically leveraging resources, including facilities, datasets, and expertise, to advance science and engineering innovations, the update notes:

Government-university-industry R&D partnerships bring pressing real-world challenges faced by industry to university researchers, enabling “use-inspired research”; leverage industry expertise to accelerate the transition of open and published research results into viable products and services in the marketplace for economic growth; and grow research and workforce capacity by linking university faculty and students with industry representatives, industry settings, and industry jobs.
Companies interested in exploring the possibility of individual collaborations or joint programs advancing precompetitive research should consider whether they have relevant expertise in any of the areas in which federal agencies are actively pursuing public-private partnerships, including the DoD’s Defense Innovation Unit and the Department of Health and Human Services.[12] The updated plan also highlights the progress federal agencies have made with respect to the original seven focus areas:

- Make long-term investments in AI research.
- Develop effective methods for human-AI collaboration.
- Understand and address the ethical, legal and societal implications of AI.
- Ensure the safety and security of AI systems.
- Develop shared public datasets and environments for AI training and testing.
- Measure and evaluate AI technologies through standards and benchmarks.
- Better understand the national AI R&D workforce needs.

B. NIST Federal Engagement in AI Standards

The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) is seeking public comment on a draft plan for federal government engagement in advancing AI standards for U.S. economic and national security needs (“U.S. Leadership in AI: Plan for Federal Engagement in Developing Technical Standards and Related Tools”). The plan recommends four actions: bolster AI standards-related knowledge, leadership and coordination among federal agencies; promote focused research on the “trustworthiness” of AI; support and expand public-private partnerships; and engage with international parties.[13] The draft was published on July 2, 2019 in response to the February 2019 Executive Order that directed federal agencies to take steps to ensure that the U.S. maintains its leadership position in AI.[14] The draft plan was developed with input from various stakeholders through a May 1 Request for Information,[15] a May 30 workshop[16] and federal agency review.

C. AI in Government Act

House Bill 2575 and its corresponding bipartisan Senate Bill 3502 (the “AI in Government Act”)—which would task federal agencies with exploring the implementation of AI in their functions and establishing an “AI Center of Excellence”—were first introduced in September 2018 and reintroduced in May 2019.[17] The center would be directed to “study economic, policy, legal, and ethical challenges and implications related to the use of artificial intelligence by the Federal Government” and “establish best practices for identifying, assessing, and mitigating any bias on the basis of any classification protected under Federal non-discrimination laws or other negative unintended consequence stemming from the use of artificial intelligence systems.” One of the sponsors of the bill, Senator Brian Schatz (D-HI), stated that “[o]ur bill will bring agencies, industry, and others to the table to discuss government adoption of artificial intelligence and emerging technologies. We need a better understanding of the opportunities and challenges these technologies present for federal government use and this legislation would put us on the path to achieve that goal.”[18] Although the bill is aimed at improving the implementation of AI by the federal government, there are likely to be opportunities for industry stakeholders to participate in discussions surrounding best practices.

D. Artificial Intelligence Initiative Act

On May 21, 2019, U.S.
Senators Rob Portman (R-OH), Martin Heinrich (D-NM), and Brian Schatz (D-HI) proposed legislation to allocate $2.2 billion over the next five years to develop a comprehensive national AI strategy to accelerate research and development in order to match other global economic powers like China, Japan, and Germany.[19] S. 1558 (the “Artificial Intelligence Initiative Act”) would create three new bodies: a National AI Coordination Office (to coordinate legislative efforts), a National AI Advisory Committee (consisting of experts on a wide range of AI matters), and an Interagency Committee on AI (to coordinate federal agency activity relating to research and education on AI).[20] The bill also establishes the National AI Research and Development Initiative in order to identify and minimize “inappropriate bias” in data sets and algorithms. The requirement for NIST to identify metrics used to establish standards for evaluating AI algorithms and their effectiveness, as well as the quality of training data sets, may be of particular interest to businesses. Moreover, the bill requires the Department of Energy to create an AI research program, building state-of-the-art computing facilities that will be made available to private sector users on a cost-recovery basis.[21] The draft legislation complements the formation of the bipartisan Senate AI Caucus in March 2019 by Senators Heinrich and Portman to address transformative technology with implications spanning a number of fields including transportation, health care, agriculture, manufacturing, and national security.[22]

E. FinTech

We have reported previously on the rapid adoption of AI by government agencies in relation to financial services.[23] On May 9, 2019, Rep. Maxine Waters (D-CA) announced that the House Committee on Financial Services would launch two task forces focused on financial technology (“fintech”) and AI:[24] a fintech task force that will focus on regulation of the fintech sector, and an AI task force that will focus on machine learning in financial services and regulation, emerging risks in algorithms and big data, combatting fraud and digital identification technologies, and the impact of automation on jobs in financial services.[25] At a recent hearing of the fintech task force, participants discussed selected issues facing certain U.S. and international regulatory agencies, as well as certain regulators’ efforts to engage with stakeholders in the fintech industry in order to consolidate and clarify communications and inform policy.[30]

II. Bias and Technology Bans

As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (1Q19), the topic of bias in AI decision-making has been at the forefront of policy discussions relating to the private sector for some time, and the deep learning community has responded with a wave of investments and initiatives focusing on processes designed to assess and mitigate bias and disenfranchisement[26] at risk of becoming “baked in and scaled” by AI systems.[27] Such discussions are now becoming more urgent and nuanced with the increased availability of AI decision-making tools allowing government decisions to be delegated to algorithms to improve accuracy and drive objectivity, directly impacting democracy and governance.[28] Over the past several months, we have seen those discussions evolve into tangible and impactful regulations in the data privacy space and, notably, several outright technology bans.[29]
A. Slew of New Regulations Proposed on Facial Recognition Technology by Government Agencies

State and Local Regulation. Biometric surveillance, or “facial recognition technology,” has emerged as a lightning rod for public debate regarding the risk of improper algorithmic bias and data privacy concerns. Amid widespread fears that the current state of the technology is not sufficiently accurate or reliable to avoid discrimination, particularly in law enforcement, regulators have seized the opportunity to act in the AI space—proposing and passing outright bans on the use of facial recognition technology with no margin for discretion or use case testing.

There is gathering momentum to ban facial recognition technology in the Bay Area. On May 14, by a vote of 8-1, the San Francisco Board of Supervisors passed a first-of-its-kind regulation to ban the use of facial recognition technology by city police and other government departments—one of the strictest U.S. laws to date governing machine learning and AI technologies.[31] The “Stop Secret Surveillance” ordinance amends San Francisco’s Administrative Code to require that city departments seeking to acquire or use surveillance technologies first submit certain reports and seek certain purchase-and-use approvals from the San Francisco Board of Supervisors before any purchase or use of such technologies. The purpose behind the ordinance was to eliminate the possibility that law enforcement or other public officials would secretly utilize surveillance technology without having first run it by the Board and its available public hearing mechanisms. With regard to one machine learning-driven technology, however—facial recognition—the ordinance goes one step further and completely bans any use of any form of the technology by all city and county departments, including the San Francisco Police Department. In imposing the ban, the Board of Supervisors appears to have been influenced particularly by reports of recent repressive actions by other governments in the AI space, as well as by more general concerns regarding bias and discrimination.[32] The ordinance’s sponsor, Supervisor Aaron Peskin, was quoted as saying that it is a “fact” that facial recognition technology “has the biases of the people who developed it.”[33]

Most recently, the city of Oakland passed a similar ordinance prohibiting police from acquiring or using the software at all, including software used by other police agencies—with city officials citing as reasons for the ban the limitations of the technology, the lack of standards around its implementation, and its potential use in the persecution of minorities. The Berkeley City Council is currently considering a similar measure. Oakland police had argued that the council should continue to allow the use of facial recognition software and ban only real-time applications, such as scanning an active surveillance camera for wanted suspects—but the council opted to pass the more restrictive language.[34]

It is not just the Bay Area, however, that is taking aim at facial recognition software. In June 2019, the city of Somerville, Massachusetts, joined San Francisco by voting unanimously to ban its municipal government from using and retaining “face surveillance” technology, defined as technology able to automatically detect a person’s identity based on his or her face.[35] The New York State legislature[36] is also considering a ban on facial recognition technology.
In Massachusetts, a bill pending in the state legislature would impose a moratorium on facial recognition and other remote biometric surveillance systems.[37] Meanwhile, back in the Golden State, the California State Senate is considering a similar ban on biometric surveillance software for police body cameras.[38] Assembly Bill 1215 characterizes the use of facial recognition as the “functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights,” posing “unique and significant threats” to civil rights and liberties and potentially chilling the exercise of free speech in public places. A companion bill, A.B. 1281, would require certain specific “clear and conspicuous” signage and disclosures by California businesses of any use of facial recognition.[39] Both bills have passed the Assembly and the relevant Senate committees, and are now awaiting a floor vote in the Senate. Given their relative ease of passage through the Assembly and Senate to date, and the increasing momentum in favor of stringent regulations on facial recognition technology in California, both bills are likely to become law in due course.

Federal Regulation. As state and local government activity intensifies, the federal government has also indicated a willingness to consider a nationwide ban on facial recognition technology. A bill introduced in Congress in March (S. 847, the “Commercial Facial Recognition Privacy Act of 2019”) would ban users of commercial face recognition technology from collecting and sharing data for identifying or tracking consumers without their consent, although it does not address the government’s uses of the technology.[40] With few exceptions, the bill would require facial recognition technology available online to be made accessible for independent third-party testing “for accuracy and bias.” The bill remains pending and has been referred to the Committee on Commerce, Science, and Transportation. In the meantime, the House Committee on Oversight and Reform has held several hearings on transparency regarding government use cases, at which Committee members voiced strong bipartisan support for bringing transparency and accountability to the use of facial recognition technology.[41]

Increasingly, therefore, there is support at federal, state and local levels for stringent regulation of facial recognition technology, with a growing number of lawmakers and governing bodies in favor of prohibiting the use of the technology while a broader regulatory approach develops and the technology evolves.[42] This tentative consensus, which blunts the positive use cases of such technology and discounts the possibility of enacting certain limitations on scope while the technology improves to eliminate existing flaws and biases, stands in stark contrast to the generally permissive approach to the development of AI systems in the private sector to date. We will continue to carefully monitor legislative and policy developments relating to technology bans at state and (possibly) federal levels, and stand ready to assist companies in assessing the impact of these and similar regulations on biometric surveillance technologies in the private sector.

B. California State Assembly Passes “Anti-Eavesdropping Act” Seeking to Regulate Smart Home Speakers

On May 29, the California State Assembly passed a bill (A.B. 1395)
requiring manufacturers of ambient listening devices like smart speakers to receive consent from users before retaining voice recordings, and banning manufacturers from sharing command recordings with third parties. The bill is currently being considered by the State Senate.[43] Companies that manufacture smart devices which record commands by default and use the data to train their automated systems should pay close attention to developments in this space.

C. Utah Warrant Bill Limits Police Authority to Access Electronic Data

A new Utah law, House Bill 57, forces police to obtain a warrant before they can gain access to any person’s electronic data, and could have implications far beyond law enforcement, including for how employers and big tech companies respond to police demands for data.[44] The law makes individuals the owners of their data—not the companies they work for or the digital platforms to which they entrust it—and additionally requires law enforcement to “destroy in an unrecoverable manner” the data it obtains “as soon as reasonably possible after the electronic information or data is collected.” H.B. 57 went into effect in May 2019.[45] The law reflects a legislative recognition of individual privacy rights, and we will continue to closely watch this space and the extent to which the law’s approach may be replicated in other state legislatures.

D. Illinois Passes Artificial Intelligence Video Interview Act

In May 2019, the Illinois legislature unanimously passed H.B. 2557 (the “Artificial Intelligence Video Interview Act”), which requires employers to notify candidates in writing that AI will be used to assess their interview, explain what elements the AI system will look for (such as analyzing the applicant’s facial expressions and fitness for the position), and secure the applicant’s written consent to proceed. Illinois employers using such software will need to carefully consider how they are addressing the risk of AI-driven bias in their current operations.[46] It remains to be seen whether other states will adopt similar requirements but, as noted above, a major challenge for companies subject to such laws will be explaining to regulators how their AI assessments work and how the criteria are ultimately used in any decision-making processes.

III. Healthcare

A. FDA Releases White Paper Outlining a Potential Regulatory Framework for Software as a Medical Device (SaMD) That Leverages AI

The rapidly increasing use of artificial intelligence in the healthcare industry has not gone unnoticed by regulators. On April 2, 2019, the Food and Drug Administration (“FDA”) published an exploratory white paper proposing a framework for regulating artificial intelligence/machine learning (“AI/ML”)-based software as a medical device (“SaMD”).[47] Although not a regulation in and of itself, the FDA white paper previews what future regulation might look like with regard to SaMD. The white paper highlights the challenges that AI/ML-based software poses to the traditional medical device regulatory framework. As the FDA has acknowledged, AI products with algorithms that continually adapt based on new data are not well suited to the current regulatory paradigm, under which significant software modifications require a new pre-market submission prior to marketing. Through the application of learning algorithms, SaMD may undergo rapid, if not constant, change.
The white paper distinguishes three types of modifications to AI/ML-based SaMD and describes how these types of changes might fit within the framework for evaluation of device modifications: (1) modifications to clinical and analytical performance; (2) modifications to the inputs used by the algorithm and their clinical association with the SaMD output; and (3) modifications to the intended use of the SaMD. The FDA also identifies four principles for AI/ML “learning algorithms,” including the use of an algorithm change protocol that might serve as an alternative to the nascent “Pre-Certification” pilot. In response to the FDA’s invitation for comments, the American Medical Informatics Association stated that “[p]roperly regulating AI and machine learning-based SaMD will require ongoing dialogue between FDA and stakeholders.”[48] The group also recommended that the FDA develop guidance on how, and how often, developers of SaMD-based products should test those products for algorithm-driven biases. However, it remains to be seen to what extent the FDA will be influenced by calls for more specific guidance and demands for stakeholder input. Companies operating in this space should consider seeking advice regarding the FDA’s evaluation of SaMD devices.

In practice, the FDA is already regulating medical devices that rely on AI. In 2017, the FDA began a precertification pilot program for makers of software applications that function as medical devices. Under this pilot program, the FDA cleared Apple Watch Series 4 devices for two new applications, which allow the device to perform an electrocardiogram on the user and to detect and alert users to irregular heartbeats.[49] The FDA has also recently approved other medical devices that rely on AI software. For example, in mid-May, the FDA approved an AI-based chest X-ray triage product.[50] And in early June, it was reported that the FDA cleared an AI tool used by radiologists for triage of cervical spine fractures.[51]

IV. Autonomous Vehicles

Federal regulation of autonomous vehicles continues to falter in the new Congress, as measures like the Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution (“SELF DRIVE”) Act (H.R. 3388) and the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (“AV START”) Act (S. 1885) have not been re-introduced since expiring with the close of the 115th Congress last December.[52] On the other hand, several federal agencies have announced proposed rulemaking to facilitate the integration of autonomous vehicles onto public roads. And while federal regulations lag behind, legislative activity at the state and local levels is stepping up to advance the integration of autonomous vehicles into the national transportation system and local infrastructure.

A. Federal Regulatory Activity

On May 28, in the wake of a petition filed by General Motors requesting temporary exemption from Federal Motor Vehicle Safety Standards (“FMVSSs”) that require manual controls or impose requirements specific to a human driver,[53] the National Highway Traffic Safety Administration (“NHTSA”) announced that it was seeking comments about the possibility of removing “regulatory barriers” relating to the introduction of automated vehicles in the United States (Docket No. NHTSA-2019-0036).[54]
In addition, the Federal Motor Carrier Safety Administration issued a request for comments on proposed rulemaking for Federal Motor Carrier Safety Regulations that may need to be reconsidered for Automated Driving System-Dedicated Vehicles (“ADS-DVs”) (Docket No. FMCSA-2018-0037).[55] Thus far, the comments submitted generally support GM’s petition for temporary exemption and the removal of regulatory barriers to the compliance certification of ADS-DVs. Some commenters have raised concerns that the petition contains insufficient information to establish safety equivalence between traditionally operated vehicles and ADS-DVs, and about the ability of ADS-DVs to operate safely in unexpected and emergency situations. However, it is likely that NHTSA will grant petitions for temporary exemption to facilitate the development of ADS technology, contingent on extensive data-sharing requirements and a narrow geographic scope of operation. We anticipate that regulatory changes to testing procedures (including pre-programmed execution, simulation, use of external controls, use of a surrogate vehicle with human controls, and technical documentation) and modifications to current FMVSSs (such as crashworthiness, crash avoidance, and indicator standards) will be finalized in 2021. In the meantime, we encourage our clients to contact us if they would like further information or assistance in developing and submitting comments, which will be accepted by NHTSA until July 29, 2019, and by the FMCSA until August 26, 2019.

B. State Regulatory Activity

State regulatory activity has continued to accelerate, adding to the already complex patchwork of regulations that apply to companies manufacturing and testing autonomous vehicles. We encourage our clients to contact us for assistance in considering the impact of state and local laws with regard to testing, standards and certification.

On April 12, 2019, the California DMV published proposed autonomous vehicle regulations that allow the testing and deployment of autonomous motor trucks (delivery vehicles) weighing less than 10,001 pounds on California’s public roads.[56] The DMV held a public hearing on May 30, 2019, at its headquarters in Sacramento to gather input and discuss the regulations.[57] The DMV’s regulations continue to exclude the autonomous testing or deployment of vehicles weighing 10,001 pounds or more.[58] In the California legislature, two new bills related to autonomous vehicles have been introduced: S.B. 59[59] would establish a working group on autonomous passenger vehicle policy development, while S.B. 336[60] would require transit operators to ensure certain automated transit vehicles are staffed by employees.

On June 13, 2019, Florida Governor Ron DeSantis signed C.S./H.B. 311 (Autonomous Vehicles) into law, effective July 1.[61] C.S./H.B. 311 establishes a statewide statutory framework, permits fully automated vehicles to operate on public roads, and removes obstacles that hinder the development of self-driving cars.[62] In Washington, Governor Jay Inslee signed into law H.B. 1325, a measure that will create a regulatory framework for personal delivery devices (“PDDs”) that deliver property via sidewalks and crosswalks (e.g., wheeled robots).[63] Washington is now the eighth U.S. state to permit the use of delivery bots in public locations; the other states are Virginia, Idaho, Wisconsin, Florida, Ohio, Utah, and Arizona.
In Oklahoma, Governor Kevin Stitt recently signed legislation (S.B. 365) restricting city and county governments from regulating autonomous vehicles, ensuring that such regulation remains entirely in the hands of state and federal lawmakers.[64] Pennsylvania, which last year passed legislation creating a commission on “highly automated vehicles,” has proposed a bill that would authorize the use of an autonomous shuttle vehicle on a route approved by the Pennsylvania Department of Transportation (H.B. 1078).[65]

C. Task Forces and Organization Committees

Outside of government, organizations are working with industry leaders to help set rules for this constantly innovating industry. One such recent partnership was a panel of experts, hosted by State Farm and the Governors Highway Safety Association (“GHSA”), focused on developing recommendations and guidelines on state safety programs.[66] Following the meeting, GHSA will develop a white paper to inform its members and all traffic safety stakeholders of the outcomes of the expert panel, and will present its recommendations at GHSA’s 2019 Annual Meeting in Anaheim, California, August 24–28.

In New York, the New York State Bar Association recently appointed Gibson Dunn partner Alex Southwell to its Task Force on Autonomous Vehicles and the Law.[67] This “groundbreaking initiative” will “study and understand the seismic impact that autonomous vehicles will have on our legal system and society, and make recommendations on how New York State and its legal institutions can prepare for this revolutionary technological change.”[68]

______________________

[1] For more information, please see our Artificial Intelligence and Autonomous Systems Legal Update (4Q18); see also Ahmed Baladi, Gibson, Dunn & Crutcher LLP, Can GDPR Hinder AI Made in Europe?, Cybersecurity Law Report (July 10, 2019), available at https://www.gibsondunn.com/wp-content/uploads/2019/07/Baladi-Can-GDPR-Hinder-AI-Made-in-Europe-Cybersecurity-Law-Report-10-07-19.pdf.

[2] See, for example, the House Intelligence Committee’s hearing on Deepfakes and AI on June 13, 2019 (U.S. House of Representatives, Permanent Select Committee on Intelligence, Press Release: House Intelligence Committee To Hold Open Hearing on Deepfakes and AI (June 7, 2019)); see also Makena Kelly, Congress grapples with how to regulate deepfakes, The Verge (June 13, 2019), available at https://www.theverge.com/2019/6/13/18677847/deep-fakes-regulation-facebook-adam-schiff-congress-artificial-intelligence.

[3] H.R. Res. 153, 116th Cong. (1st Sess. 2019).

[4] Assemb. Con. Res. 215, Reg. Sess. 2018-2019 (Cal. 2018) (enacted) (expressing the support of the legislature for the “Asilomar AI Principles”—a set of 23 principles developed through a collaboration between AI researchers, economists, legal scholars, ethicists and philosophers that met in Asilomar, California, in January 2017, categorized into “research issues,” “ethics and values,” and “longer-term issues,” and designed to promote the safe and beneficial development of AI—as “guiding values for the development of artificial intelligence and of related public policy”).

[5] OECD Principles on AI (May 22, 2019) (stating that AI systems should benefit people, be inclusive, transparent, and safe, and their creators should be accountable), available at http://www.oecd.org/going-digital/ai/principles/.

[6] Press Release, Cory Booker, Booker, Wyden, Clarke Introduce Bill Requiring Companies To Target Bias In Corporate Algorithms (Apr.
10, 2019), available at https://www.booker.senate.gov/?p=press_release&id=903; see also S. Res. __, 116th Cong. (2019).

[7] H.R. 2231, 116th Cong. (1st Sess. 2019).

[8] See Byungkwon Lim et al., A Glimpse into the Potential Future of AI Regulation, Law360 (April 10, 2019), available at https://www.law360.com/articles/1158677/a-glimpse-into-the-potential-future-of-ai-regulation.

[9] State legislatures have also recently weighed in on AI policy. In March 2019, Senator Ling Ling Chang (R-CA) introduced Senate Joint Resolution 6, urging the president and Congress to develop a comprehensive AI advisory committee and to adopt a comprehensive AI policy, S.J. Res. 6, Reg. Sess. 2019–2020 (Cal. 2019); in Washington State, House Bill 1655 was introduced in February 2019, seeking to “protect consumers, improve transparency, and create more market predictability” by establishing guidelines for government procurement and use of automated decision systems, H.B. 1655, 66th Leg., Reg. Sess. 2019 (Wash. 2019).

[10] For more information, please see our client alert President Trump Issues Executive Order on “Maintaining American Leadership in Artificial Intelligence.”

[11] Exec. Office of the U.S. President, The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (June 2019), available at https://www.whitehouse.gov/wp-content/uploads/2019/06/National-AI-Research-and-Development-Strategic-Plan-2019-Update-June-2019.pdf.

[12] Id. at 42.

[13] NIST, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools – Draft for Public Comment (July 2, 2019), available here.

[14] Supra note 9.

[15] NIST, NIST Requests Information on Artificial Intelligence Technical Standards and Tools (May 1, 2019), available at https://www.nist.gov/news-events/news/2019/05/nist-requests-information-artificial-intelligence-technical-standards-and.

[16] NIST, Federal Engagement in Artificial Intelligence Standards Workshop (May 30, 2019), available at https://www.nist.gov/news-events/events/2019/05/federal-engagement-artificial-intelligence-standards-workshop.

[17] H.R. 2575, 116th Cong. (2019-2020); S. 3502 – AI in Government Act of 2018, 115th Cong. (2017-2018).

[18] Press Release, Senator Brian Schatz, Schatz, Gardner Introduce Legislation To Improve Federal Government’s Use Of Artificial Intelligence (September 2018), available at https://www.schatz.senate.gov/press-releases/schatz-gardner-introduce-legislation-to-improve-federal-governments-use-of-artificial-intelligence; see also Tajha Chappellet-Lanier, Artificial Intelligence in Government Act is back, with ‘smart and effective’ use on senators’ minds, FedScoop (May 8, 2019), available at https://www.fedscoop.com/artificial-intelligence-in-government-act-returns.

[19] S. 1558 – Artificial Intelligence Initiative Act, 116th Cong. (2019-2020); see further Khari Johnson, U.S. Senators propose legislation to fund national AI strategy, VentureBeat (May 21, 2019), available at https://venturebeat.com/2019/05/21/u-s-senators-propose-legislation-to-fund-national-ai-strategy.

[20] Matthew U. Scherer, Michael J. Lotito & James A. Paretti, Jr., Bipartisan Bill Would Create Artificial Intelligence Strategy for U.S. Workforce, Lexology (May 30, 2019), available at https://www.lexology.com/library/detail.aspx?g=857d902f-e7a0-412b-878b-b1fb149da745.
[21] Press Release, Senator Martin Heinrich, Heinrich, Portman, Schatz Propose National Strategy For Artificial Intelligence; Call For $2.2 Billion Investment In Education, Research & Development (May 21, 2019), available at https://www.heinrich.senate.gov/press-releases/heinrich-portman-schatz-propose-national-strategy-for-artificial-intelligence-call-for-22-billion-investment-in-education-research-and-development.

[22] Press Release, Senator Martin Heinrich, Heinrich, Portman Launch Bipartisan Artificial Intelligence Caucus (Mar. 13, 2019), available at https://www.heinrich.senate.gov/press-releases/heinrich-portman-launch-bipartisan-artificial-intelligence-caucus.

[23] For more information, please see our Artificial Intelligence and Autonomous Systems Legal Update (4Q18).

[24] Katie Grzechnik Neill, Rep. Waters Announces Task Forces on Fintech and Artificial Intelligence (May 13, 2019), available at https://www.insidearm.com/news/00045030-rep-waters-announces-all-democrat-task-fo.

[25] See Scott Likens, How Artificial Intelligence Is Already Disrupting Financial Services, Barron’s (May 16, 2019), available at https://www.barrons.com/articles/how-artificial-intelligence-is-already-disrupting-financial-services-51558008001.

[26] See also Kalev Leetaru, Why Do We Fix AI Bias But Ignore Accessibility Bias?, Forbes (July 6, 2019), available at https://www.forbes.com/sites/kalevleetaru/2019/07/06/why-do-we-fix-ai-bias-but-ignore-accessibility-bias/#55e7c777902d; Alina Tugend, Exposing the Bias Embedded in Tech, NY Times (June 17, 2019), available at https://www.nytimes.com/2019/06/17/business/artificial-intelligence-bias-tech.html.

[27] Jake Silberg & James Manyika, Tackling Bias in Artificial Intelligence (and in Humans), McKinsey Global Institute (June 2019), available at https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans.

[28] Nicol Turner Lee, Paul Resnick & Genie Barton, Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms, Brookings Institute (May 22, 2019), available at https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

[29] See also the French government’s recent law, encoded in Article 33 of the Justice Reform Act, prohibiting anyone—especially legal tech companies focused on litigation prediction and analytics—from publicly revealing the pattern of judges’ behavior in relation to court decisions, France Bans Judge Analytics, 5 Years In Prison For Rule Breakers, Artificial Lawyer (June 4, 2019), available at https://www.artificiallawyer.com/2019/06/04/france-bans-judge-analytics-5-years-in-prison-for-rule-breakers/.

[30] U.S. H.R. Comm. on Fin. Servs., Overseeing the Fintech Revolution: Domestic and International Perspectives on Fintech Regulation (June 25, 2019), at 4, available at https://docs.house.gov/meetings/BA/BA00/20190625/109733/HHRG-116-BA00-20190625-SD002.pdf.

[31] Kate Conger, Richard Fausset & Serge F. Kovaleski, San Francisco Bans Facial Recognition Technology, NY Times (May 14, 2019), available at https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html.

[32] For a detailed discussion of the impact of AI technology bans, see H. Mark Lyon, Gibson, Dunn & Crutcher LLP, Before We Regulate, Daily Journal (June 26, 2019), available at https://www.gibsondunn.com/wp-content/uploads/2019/07/Lyon-Before-We-Regulate-Daily-Journal-6-26-19.pdf.
[33] Sara Gaiser, San Francisco Supervisor Aaron Peskin Proposes Citywide Ban on Facial Recognition Technology, SF Examiner (Jan. 29, 2019), available at https://www.sfexaminer.com/news/san-francisco-supervisor-aaron-peskin-proposes-citywide-ban-on-facial-recognition-technology/.

[34] CBS, Oakland Officials Take Step Towards Banning City Use Of Facial Recognition Tech, SF Chronicle (July 16, 2019), available at https://sanfrancisco.cbslocal.com/2019/07/16/oakland-officials-take-step-towards-banning-city-use-of-facial-recognition-tech/.

[35] City of Somerville, Mass., Ordinance 208142 (June 27, 2019).

[36] S.B. 5687, 2019–2020 Reg. Sess. (N.Y. 2019) (proposing a ban on the use of a facial recognition system by a landlord on any residential premises).

[37] S.B. 1385, 191st Leg., 2019–2020 Reg. Sess. (Mass. 2019).

[38] A.B. 1215, 2019–2020 Reg. Sess. (Cal. 2019).

[39] A.B. 1281, 2019–2020 Reg. Sess. (Cal. 2019).

[40] S. 847, 116th Cong. (1st Sess. 2019).

[41] U.S. H.R. Comm. on Oversight and Reform, Facial Recognition Technology (Part II): Ensuring Transparency in Government Use (June 4, 2019), available at https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-ii-ensuring-transparency-in-government-use.

[42] Colin Lecher, Congress faces ‘hard questions’ on facial recognition as activists push for ban, The Verge (July 10, 2019), available at https://www.theverge.com/2019/7/10/20688932/congress-facial-recognition-hearing-ban.

[43] A.B. 1395, 2019–2020 Reg. Sess. (Cal. 2019).

[44] Allison Grande, Utah Warrant Bill Raises Stakes For Cops’ Digital Data Grabs, Law360 (Apr. 23, 2019), available at https://www.law360.com/articles/1151791/utah-warrant-bill-raises-stakes-for-cops-digital-data-grabs.

[45] H.B. 57, 2019–2020 Reg. Sess. (Utah 2019).

[46] H.B. 2557, 2019–2020 Reg. Sess. (Ill. 2019) (101st Gen. Assembly).

[47] See regulations.gov, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback (Apr. 2, 2019), available at https://www.regulations.gov/document?D=FDA-2019-N-1185-0001.

[48] Greg Slabodkin, AMIA calls on FDA to refine its AI regulatory framework, HealthData Management (June 11, 2019), available at https://www.healthdatamanagement.com/news/amia-calls-on-fda-to-refine-its-ai-regulatory-framework.

[49] Gopal Ratnam, FDA grapples with AI medical devices, Roll Call (May 7, 2019), available at https://www.rollcall.com/news/policy/fda-grapples-with-living-medical-devices.

[50] Tova Cohen, Israel’s Zebra Medical gets FDA ok for AI chest X-ray product, Reuters (May 13, 2019), available at https://www.reuters.com/article/us-healthcare-zebra-medical-regulator/israels-zebra-medical-gets-fda-ok-for-ai-chest-x-ray-product-idUSKCN1SJ0LI.

[51] Tova Cohen, Israel’s Aidoc gets third FDA nod for AI tools for radiologists, Reuters (June 11, 2019), available at https://www.reuters.com/article/us-tech-aidoc-regulations/israels-aidoc-gets-third-fda-nod-for-ai-tools-for-radiologists-idUSKCN1TC1IN.

[52] For more information, please see our Artificial Intelligence and Autonomous Systems Legal Update (4Q18).

[53] General Motors, LLC-Receipt of Petition for Temporary Exemption from Various Requirements of the Safety Standards for an All Electric Vehicle with an Automated Driving System, 84 Fed. Reg. 10182.

[54] Removing Regulatory Barriers for Vehicles With Automated Driving Systems, 84 Fed. Reg. 24,433 (May 28, 2019) (to be codified at 49 C.F.R. 571); see also Removing Regulatory Barriers for Vehicles with Automated Driving Systems, 83 Fed. Reg. 2607, 2607 (proposed March 5, 2018) (to be codified at 49 C.F.R. 571).
[55] Safe Integration of Automated Driving Systems-Equipped Commercial Motor Vehicles, 84 Fed. Reg. 24,449 (May 28, 2019).

[56] State of California Department of Motor Vehicles, Autonomous Light-Duty Motor Trucks (Delivery Vehicles), available at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/bkgd.

[57] Id.

[58] Id.

[59] S.B. 59, 2019–2020 Reg. Sess. (Cal. 2019).

[60] S.B. 336, 2019–2020 Reg. Sess. (Cal. 2019).

[61] Governor Ron DeSantis Signs CS/HB 311: Autonomous Vehicles (June 13, 2019), available at https://www.flgov.com/2019/06/13/governor-ron-desantis-signs-cs-hb-311-autonomous-vehicles/.

[62] See Taylor Garre, Florida Gives the Green Light to Fully Autonomous Vehicles (June 19, 2019), available at https://cheddar.com/media/florida-autonomous-vehicles-self-driving-cars-senator-brandes.

[63] 2019 Wash. Sess. Laws, Ch. 214.

[64] S.B. 365, 57th Leg., Reg. Sess. (Okla. 2019).

[65] H.B. 1078, 2019–2020 Reg. Sess. (Pa. 2019).

[66] Press Release, Governors Highway Safety Association, Traffic Safety Recommendations for Automated Vehicles Being Developed (Apr. 29, 2019), available at https://www.ghsa.org/resources/news-releases/av-panel19.

[67] Alexander Southwell Appointed to New York State Bar Association’s Task Force on Autonomous Vehicles and the Law, Gibson Dunn (June 3, 2019), available at https://www.gibsondunn.com/alexander-southwell-appointed-to-new-york-state-bar-associations-task-force-of-autonomous-vehicles-and-the-law/.

[68] Id.

The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances A. Waldmann, Claudia Barrett, Oliver Fong and Arjun Rangarajan.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Artificial Intelligence and Automated Systems Group, or the following authors:

H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com)
Frances A. Waldmann – Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com)

Please also feel free to contact any of the following practice group members:

Artificial Intelligence and Automated Systems Group:
H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com)
J. Alan Bannister – New York (+1 212-351-2310, abannister@gibsondunn.com)
Lisa A. Fontenot – Palo Alto (+1 650-849-5327, lfontenot@gibsondunn.com)
David H. Kennedy – Palo Alto (+1 650-849-5304, dkennedy@gibsondunn.com)
Ari Lanin – Los Angeles (+1 310-552-8581, alanin@gibsondunn.com)
Robson Lee – Singapore (+65 6507 3684, rlee@gibsondunn.com)
Carrie M. LeRoy – Palo Alto (+1 650-849-5337, cleroy@gibsondunn.com)
Alexander H. Southwell – New York (+1 212-351-3981, asouthwell@gibsondunn.com)
Eric D. Vandevelde – Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com)
Michael Walther – Munich (+49 89 189 33 180, mwalther@gibsondunn.com)

© 2019 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

June 26, 2019 |
Mark Lyon Named Top AI Lawyer 2019 by Daily Journal

The Daily Journal named Palo Alto partner Mark Lyon to its 2019 list of the Top Artificial Intelligence Lawyers [PDF].  Lyon was featured for his role launching the firm’s new Artificial Intelligence and Automated Systems Practice Group and for his work advising clients in their development and use of AI-related products.  The profile was published on June 26, 2019. Mark Lyon is Chair of the firm’s Artificial Intelligence and Automated Systems Practice Group.  He brings nearly three decades of experience as a trial lawyer and trusted corporate legal advisor to companies in a wide range of technology areas. He has extensive experience representing and advising clients on the legal, ethical, regulatory, and policy issues arising from emerging technologies like artificial intelligence.  He regularly acts as a strategic advisor to clients in their development of AI-related products and services, their acquisition and sale of technology-related businesses, and in their development of appropriate legal and ethical policies and procedures pertaining to AI-focused business operations.  In the rapidly advancing area of automated and autonomous vehicles, he has guided clients through the numerous hurdles of federal and state regulations and requirements for vehicle testing and deployment, as well as advising and assisting clients in exercising their voice before key agencies and legislative bodies.  

June 28, 2019 |
Mark Lyon Named to The Recorder’s Inaugural List of California Trailblazers

Palo Alto partner Mark Lyon has been named to The Recorder’s inaugural list of California Trailblazers. The list seeks “to spotlight a handful of individuals that are truly agents of change.” The publication considered lawyers who “helped roll out a new program, a new role or a new practice group.” As a trailblazer, Lyon was recognized for his efforts to spearhead the firm’s new Artificial Intelligence and Automated Systems Practice Group. The list was published in June 2019.

April 23, 2019 |
Artificial Intelligence and Autonomous Systems Legal Update (1Q19)

We are pleased to provide the following update on recent legal developments in the areas of artificial intelligence, machine learning and autonomous systems (“AI”). As noted in our Artificial Intelligence and Autonomous Systems Legal Update (4Q18), we witnessed few notable legislative developments in 2018, but also tentative evidence of growing federal government attention to AI technologies, and increasingly tangible steps taken by policy organizations and technology companies to address strategic concerns arising from the lack of a federal AI strategy and the resulting regulatory vacuum. Meanwhile, and as we will address in a forthcoming client alert, the past year has seen a significant uptick in global AI policymaking, as numerous world economies made budgetary and policy commitments and sought to stake out a position in the absence of a clear U.S. strategy. Notwithstanding these rapid global developments, the United States’ continued leadership position in the development of AI technologies—albeit one that is increasingly coming under threat—means it still retains a unique opportunity to shape AI’s global impact. In this update, we cover some of the recent developments that sketch out the beginnings of a U.S. federal AI strategy, and provide an overview of key current regulatory and policy issues.

__________________________

Table of Contents
I. U.S. National Policy on AI Begins to Take Shape
II. Recent Bias Concerns for AI
III. Autonomous Vehicles
IV. Ethics and Data Privacy

__________________________

I. U.S. National Policy on AI Begins to Take Shape

Under increasing pressure from the U.S. technology industry and policy organizations to present a substantive federal strategy on AI, in the past several months the Trump administration and congressional lawmakers have taken public actions to prioritize AI and automated systems. Most notably, these pronouncements include President Trump’s “Maintaining American Leadership in Artificial Intelligence” Executive Order[1] and the creation of AI.gov.[2] While it may be too early to assess the impact of these executive branch efforts, other executive agencies appear to have responded to the call for action. For example, in February, the Department of Defense (“DOD”) detailed its AI strategy, and on March 6 to 7, the Pentagon’s research arm, the Defense Advanced Research Projects Agency (“DARPA”), hosted an Artificial Intelligence Colloquium to publicly discuss AI.[3] The clear interest asserted by the Trump administration and growing traction within executive agencies should provide encouragement to stakeholders that the federal government is willing to prioritize AI, although the extent to which it will provide government expenditures to support its vision remains unclear.
A. President Trump’s Executive Order

On February 11, 2019, President Trump signed an executive order (“EO”) titled “Maintaining American Leadership in Artificial Intelligence.”[4] The purpose of the EO was to spur the development and regulation of artificial intelligence, machine learning and deep learning and to fortify the United States’ global position by directing federal agencies to prioritize investments in AI,[5] interpreted by many observers as a response to China’s recent efforts to claim a leadership position in AI research and development.[6] Observers particularly noted that many other countries preceded the United States in rolling out a national AI strategy.[7] In an apparent response to these concerns, the Trump administration warned in rolling out the campaign that “as the pace of AI innovation increases around the world, we cannot sit idly by and presume that our leadership is guaranteed.”[8]

To secure U.S. leadership, the EO prioritizes five key areas:

(1) Investing in AI Research and Development (“R&D”): encouraging federal agencies to prioritize AI investments in their “R&D missions” to encourage “sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security.”[9]

(2) Unleashing AI Resources: making federal data and models more accessible to the AI research community by “improv[ing] data and model inventory documentation to enable discovery and usability” and “prioritiz[ing] improvements to access and quality of AI data and models based on the AI research community’s user feedback.”[10]

(3) Setting AI Governance Standards: aiming to foster public trust in AI by using federal agencies to develop and maintain approaches for the safe and trustworthy creation and adoption of new AI technologies (for example, the EO calls on the National Institute of Standards and Technology (“NIST”) to lead the development of appropriate technical standards).[11]

(4) Building the AI Workforce: asking federal agencies to prioritize fellowship and training programs to prepare for changes relating to AI technologies and promoting Science, Technology, Engineering and Mathematics education.[12]

(5) International Engagement and Protecting the United States’ AI Advantage: calling on agencies to collaborate with other nations but also to protect the nation’s economic security interest against competitors and adversaries.[13]

AI developers will need to pay close attention to the executive branch’s response to standards setting. The primary concern for standards sounds in safety, and the AI Initiative echoes this with a high-level directive to regulatory agencies to establish guidance for AI development and use across technologies and industrial sectors, highlighting the need to develop “appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies”[14] and to “foster public trust and confidence in AI technologies.”[15] However, the AI Initiative is otherwise vague about how the program plans to ensure that responsible development and use of AI remain central throughout the process, and about the extent to which AI policy researchers and stakeholders (such as academic institutions and nonprofits) will be invited to participate. The EO announces that NIST will take the lead in standards setting.
Within 180 days of the EO, the Secretary of Commerce, through the Director of NIST, shall "issue a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies" with participation from relevant agencies as the Secretary of Commerce shall determine.[16]  The plan is intended to include "Federal priority needs for standardization of AI systems development and deployment," the identification of "standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles," and "opportunities for and challenges to United States leadership in standardization related to AI technologies."[17] Observers have criticized the EO for its lack of actual funding commitments, its precatory language, and its failure to address immigration issues for AI firms looking to retain foreign students and hire AI specialists.[18]  For example, unlike the Chinese government's commitment of $150 billion for AI prioritization, the EO adds no specific expenditures, merely encouraging certain offices to "budget" for AI research and development.[19]  To begin to close this gap, on April 11, 2019, Congressmen Dan Lipinski (IL-3) and Tom Reed (NY-23) introduced the Growing Artificial Intelligence Through Research (GrAITR) Act to establish a coordinated federal initiative aimed at accelerating AI research and development for U.S. economic and national security.  The GrAITR Act (H.R. 2202) would create a strategic plan to invest $1.6 billion over 10 years in research, development, and application of AI across the private sector, academia and government agencies, including NIST, the National Science Foundation, and the Department of Energy (DOE)—aiming to help the United States catch up to other countries, including the UK, which are "already cultivating workforces to create and use AI-enabled devices."  The bill has been referred to the House Committee on Science, Space, and Technology. [19a] In April 2019, Dr. Lynne Parker, assistant director for artificial intelligence at the White House Office of Science and Technology Policy, noted that regulatory authority will be left to agencies to adjust to their sectors, but with high-level guidance from the Office of Management and Budget ("OMB") on creating a balanced regulatory environment, and agency-level implementation plans.  Dr. Parker said that a draft version of OMB's guidance likely would come out in early summer.[20] For more details, please see our recent update President Trump Issues Executive Order on "Maintaining American Leadership in Artificial Intelligence." B.    AI.gov Launch On March 19, 2019, the White House launched ai.gov as a platform to share AI initiatives from the Trump administration and federal agencies.[21]  These initiatives track the key points of the AI EO, and ai.gov is intended to function as an ongoing press release.  Presently, the website includes five key domains for AI development: the Executive Order on AI, AI for American Innovation, AI for American Industry, AI for the American Worker, and AI with American Values.[22] These initiatives highlight a number of federal government efforts under the Trump administration (and some launched during the Obama administration).
Highlights include the White House's chartering of a Select Committee on AI under the National Science and Technology Council, the Department of Energy's efforts to develop supercomputers, the Department of Transportation's efforts to integrate automated driving systems, and the Food and Drug Administration's efforts to assess AI implementation in medical research.[23] C.    U.S. Senators Introduce "Algorithmic Accountability Act" to Address Bias On April 10, 2019, a number of Senate Democrats introduced the Algorithmic Accountability Act, which "requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans."[24]  The bill stands to be the United States Congress's first serious foray into the regulation of AI and the first legislative attempt in the United States to impose regulation on AI systems in general, as opposed to regulating a specific activity, such as the use of autonomous vehicles.  While observers have noted congressional reticence to regulate AI in past years, the bill hints at a dramatic shift in Washington's stance amid growing public awareness of AI's potential to create bias or harm certain groups.[25] The bill casts a wide net: many technology companies would find that common practices fall within the purview of the Act.  The Act would not only regulate AI systems but also any "automated decision system," which is broadly defined as any "computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers."[26]  This could conceivably include crude decision tree algorithms.  For processes within the definition, companies would be required to audit for bias and discrimination and take corrective action to resolve any issues identified.  The bill would allow regulators to take a closer look at any "[h]igh-risk automated decision system"—those that involve "privacy or security of personal information of consumers[,]" "sensitive aspects of [consumers'] lives, such as their work performance, economic situation, health, personal preferences, interests, behavior, location, or movements[,]" "a significant number of consumers regarding race [and several other sensitive topics]," or "systematically monitors a large, publicly accessible physical place[.]"[27]  For these "high-risk" topics, regulators would be permitted to conduct an "impact assessment" and examine a host of proprietary aspects relating to the system.[28]  Additional regulations will be needed to give these key terms meaning but, for now, the bill is a harbinger of AI regulation that identifies key areas of concern for lawmakers. The bill has some teeth—it would give the Federal Trade Commission the authority to enforce and regulate these audit procedures and requirements—but does not provide for a private right of action or enforcement by state attorneys general.[29]  While the political viability of the bill is questionable, Senate Republicans have also recently renewed their scrutiny of technology companies for alleged political bias.[30]  At a minimum, companies operating in this space should anticipate further congressional action on this subject in the near future, and proactively consider how their own "high-risk" systems may raise concerns related to bias.
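For illustration only: one common form such a bias audit could take is a disparate-impact check comparing selection rates across groups, benchmarked here against the "four-fifths" rule of thumb long used by the EEOC in the employment context. The bill itself prescribes no particular methodology, and the groups, data and threshold in the sketch below are hypothetical.

```python
# Illustrative sketch only: the Algorithmic Accountability Act does not
# prescribe an audit methodology. Groups, data and threshold are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Under the EEOC's "four-fifths" rule of thumb, a ratio below 0.8 is
    commonly treated as evidence of adverse impact."""
    reference = max(rates.values())
    return {g: r / reference for g, r in rates.items()}

# A toy log of automated decisions: group A is approved 80% of the time,
# group B only 50% of the time.
decisions = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 50 + [("B", 0)] * 50
rates = selection_rates(decisions)
for group, ratio in sorted(disparate_impact_ratios(rates).items()):
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} [{flag}]")
```

A real audit under the Act would of course sweep far more broadly (impact assessments, data provenance, documentation), but even this simple check illustrates the kind of quantitative showing regulators could demand.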
Companies may also wish to consider whether and how to ensure that their voice is heard and considered in future legislative efforts. D.    House Subcommittee Hears Testimony About How AI Can Combat Financial Crime The EO's promised availability of governmental data may also prove beneficial for those in certain AI industries that are looking to expand their datasets beyond private data.[31]  This may be particularly relevant for agencies that have already expressed interest in data collection to ensure AI safety (e.g., in the context of the regulation of autonomous vehicles by the National Highway Traffic Safety Administration ("NHTSA")).  Some AI businesses are now making their requests for data access known.  On March 13, 2019, the National Security, International Development and Monetary Policy Subcommittee heard testimony from Gary Shiffman, founder and CEO of an AI security firm, who urged the government to implement AI to combat financial crimes, money laundering, trafficking and terrorism, noting that, in order to advance this type of AI technology, the government plays an important, and perhaps necessary, role by providing training data sets.[32]  In due course, companies whose products require access to public datasets may well be able to take advantage of emerging partnerships between the federal government and the private sector. E.    DOD and DARPA Detail AI Efforts On February 12, 2019, the DOD unveiled its AI strategy, which builds on the recent EO.[33]  The DOD's chief information officer explained that "[t]he [executive order] is paramount for our country to remain a leader in AI, and it will not only increase the prosperity of our nation, but also enhance our national security. . . ."[34]  To that end, the DOD's plan announces that it will adopt AI to maintain its strategic position.[35]  To operationalize that goal, the DOD will rely on the Joint Artificial Intelligence Center, and its strategy highlights a key role for academic and industry partners.[36] In early 2019, DARPA launched a major project called Guaranteeing AI Robustness against Deception ("GARD"), aimed at studying adversarial machine learning.  Adversarial machine learning, an area of growing interest for government machine-learning researchers, involves experimentally feeding input into an algorithm to reveal the information on which it has been trained, or distorting input in a way that causes the system to misbehave.  With a growing number of military systems—including sensing and weapons systems—harnessing machine learning, there is huge potential for these techniques to be used both defensively and offensively.  Hava Siegelmann, Director of the GARD program, recently told MIT Technology Review that the goal of this project is to develop AI models that are robust in the face of a wide range of adversarial attacks, rather than simply able to defend against specific ones.[37]
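For illustration only: the sketch below shows the second technique described above (distorting an input so the system misbehaves) in its simplest published form, the fast gradient sign method of Goodfellow et al. (2015), applied to a toy logistic-regression model. It is a generic textbook example, not code from, or a description of, the GARD program; the model, data and epsilon are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    # For logistic regression with cross-entropy loss, the gradient of the
    # loss with respect to the input x is (prediction - label) * w.
    grad_x = (sigmoid(w @ x + b) - y) * w
    # Step epsilon in the direction that most increases the loss.
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.0        # stand-in for a trained model's parameters
x = rng.normal(size=20)                # a legitimate input
y = float(sigmoid(w @ x + b) > 0.5)    # the model's own label for that input

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.25)
print("score on clean input:    ", sigmoid(w @ x + b))
print("score on perturbed input:", sigmoid(w @ x_adv + b))  # pushed toward the wrong class
```

A small, coordinated nudge to every input feature is enough to move the model's score sharply toward the wrong answer, which is why GARD's stated goal of robustness against broad classes of attacks, rather than specific ones, is a hard problem.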
II.    Recent Bias Concerns for AI As noted above, the recently introduced Algorithmic Accountability Act would require companies to audit automated decision-making for bias and discrimination.  A number of similar developments at the national, state and international levels evidence the growing concern with this subject matter, and companies that currently use or are considering using AI to automate decision-making processes should track these developments closely.  We are closely monitoring the trends and developments in these areas and stand ready to assist companies in anticipating and navigating likely future requirements and concerns regarding improper bias and discrimination. A.    The AI Now Institute at New York University Publishes New Report, "Discriminating Systems: Gender, Race, and Power in AI" The AI Now Institute, which examines the social implications of artificial intelligence, recently published a report assessing the scope and scale of the gender and racial diversity crisis in the AI sector and arguing that the use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation.  The report includes recommendations for improving workplace diversity (such as publishing harassment and discrimination transparency reports, changing hiring practices to maximize diversity, and being transparent around hiring, compensation, and promotion practices) and recommendations for addressing bias and discrimination in AI systems (such as implementing rigorous testing across the lifecycle of AI systems).[38] B.    Several Government Agencies Seek to Root Out Bias in Artificial Intelligence Systems In companion bills SB-5527 and HB-1655, introduced on January 23, 2019, Washington State lawmakers drafted a comprehensive piece of legislation aimed at governing the use of automated decision systems by state agencies, including the use of automated decision-making in the triggering of automated weapon systems.[39]  In addition to addressing the fact that eliminating algorithmic bias requires consideration of fairness, accountability, and transparency, the bills also include a private right of action.[40]  According to the bills' sponsors, automated decision systems are rapidly being adopted to make or assist in core decisions in a variety of government and business functions, including criminal justice, health care, education, employment, public benefits, insurance, and commerce,[41] and are often unregulated and deployed without public knowledge.[42]  Under the proposed legislation, an agency using an automated decision system would be prohibited from discriminating against an individual, or treating an individual less favorably than another, on the basis of one or more of a list of factors such as race, national origin, sex, or age.[43]  Currently, the bills remain in committee.[44] In the UK, the world's first Centre for Data Ethics and Innovation will partner with the UK Cabinet Office's Race Disparity Unit to explore the potential for bias in algorithms in crime and justice, financial services, recruitment and local government.[45]  The UK government explained that this investigation was necessary because of the risk that human bias will be reflected in the recommendations produced by the algorithms.[46] C.    Artificial Intelligence Ethics in Policing Police departments often use predictive algorithms for a variety of functions, such as helping to identify suspects.
While such technologies can be useful, there is growing awareness of the risk of biases and inaccuracies.[47] In a paper released on February 13, researchers at the AI Now Institute found that police across the United States may be training crime-predicting AIs on falsified "dirty" data,[48] calling into question the validity of predictive policing systems and other criminal risk-assessment tools that use training sets consisting of historical data.[49] In some cases, police departments had a culture of purposely manipulating or falsifying data under intense political pressure to bring down official crime rates.  In New York, for example, in order to artificially deflate crime statistics, precinct commanders regularly asked victims at crime scenes not to file complaints.  In predictive policing systems that rely on machine learning to forecast crime, those corrupted data points become legitimate predictors, creating "a type of tech-washing where people who use these systems assume that they are somehow more neutral or objective, but in actual fact they have ingrained a form of unconstitutionality or illegality."[50]
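For illustration only: the following sketch reproduces the mechanism the researchers describe. Two hypothetical precincts have identical true incident rates, but recorded counts in one are systematically deflated; a model fit to the records then presents the manipulation as a neutral "prediction." The data and the deliberately naive forecasting rule are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(1)
weeks, true_rate = 52, 10.0                  # both precincts have the same true rate
actual = {p: rng.poisson(true_rate, weeks) for p in ("A", "B")}

# In precinct B, suppose 40% of incidents are never filed as complaints.
recorded = {"A": actual["A"], "B": (actual["B"] * 0.6).astype(int)}

# A deliberately naive "predictive" model: forecast next week's incidents
# as the historical mean of the recorded data.
forecast = {p: recorded[p].mean() for p in recorded}
print(forecast)
# Precinct B now appears roughly 40% "safer" than A even though the streets
# are identical; a system allocating patrols from these forecasts would
# treat the manipulated record as ground truth.
```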
III.    Autonomous Vehicles The autonomous vehicle ("AV") industry continues to expand at a rapid pace, with incremental developments towards full autonomy.  At this juncture, most of the major automotive manufacturers are actively exploring AV programs and conducting extensive on-road testing.  As lawmakers across jurisdictions grapple with emerging risks and the challenge of building legal frameworks and rules within existing, disparate regulatory ecosystems, common challenges are beginning to emerge that have the potential to shape not only the global automotive industry over the coming years, but also broader strategies and policies relating to infrastructure, data management and safety. A.    Legislative Activity at Federal Level As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (3Q18), there was a flurry of legislative activity in Congress in 2017 and early 2018 towards a national regulatory framework.  The U.S. House of Representatives passed the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act[51] by voice vote in September 2017, but its companion bill (the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act)[52] stalled in the Senate as a result of holds from Democratic senators who expressed concerns that the proposed legislation was immature and underdeveloped in that it would "indefinitely" preempt state and local safety regulations even in the absence of federal standards.[53]  So far, there have been no attempts to reintroduce the bill in the new congressional session, and even if efforts to reintroduce it are ultimately successful, the measure may not be enough to assuage safety concerns as long as it lacks an enforceable federal safety framework.  Therefore, AVs continue to operate under a complex patchwork of state and local rules, with federal oversight limited to the U.S. Department of Transportation's ("DoT") informal guidance.  As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (4Q18), the DoT's NHTSA released its road map on the design, testing and deployment of driverless vehicles, "Preparing for the Future of Transportation: Automated Vehicles 3.0" (commonly referred to as "AV 3.0"), on October 3, 2018.[54]  However, while AV 3.0 reinforces that federal officials are eager to take the wheel on safety standards and that any state laws on automated vehicle design and performance will be preempted, the thread running throughout is the commitment to voluntary, consensus-based technical standards and the removal of unnecessary barriers to the innovation of AV technologies. B.    Legislative Activity at State and Local Levels Recognizing that AVs and vehicles with semi-autonomous components are already being tested and deployed on roads amid legislative gridlock at the federal level, 30 states and the District of Columbia have enacted autonomous vehicle legislation, while governors in at least 11 states have issued executive orders on self-driving vehicles.[55]  In 2019 alone, 75 new bills in 20 states have "pending" status.[56]  Currently, 10 states authorize testing, 14 states and the District of Columbia authorize full deployment, and 16 states allow testing or deployment without a human operator in the vehicle, although some limit this to certain defined conditions.[57]  There are growing concerns that states may be racing to cement their positions as leaders in AV testing in the absence of a federal regulatory framework by introducing increasingly permissive bills that allow testing without human safety drivers.[58] Some states are explicitly tying bills to federal guidelines in anticipation of congressional action. On April 2, 2019, D.C. lawmakers proposed the Autonomous Vehicles Testing Program Amendment Act of 2019, which would set up a review and permitting process for autonomous vehicle testing within the District Department of Transportation.  Companies seeking to test self-driving cars in the city would have to provide an array of information to officials, including, for each vehicle they plan to test, the safety operators in the test vehicles, testing locations, insurance, and safety strategies.[59]  Crucially, it would require testing companies to certify that their vehicles comply with federal safety policies; share with officials data on trips and any crash or cybersecurity incidents; and train operators on safety.[60] Moreover, cities—which largely control the test sites—are creating an additional layer of rules for AVs, ranging from informal agreements to structured contracts between cities and companies, as well as zoning laws.[61]  Given the fast pace of developments and the tangle of applicable rules, it is essential that companies operating in this space stay abreast of legal developments in the states and cities in which they are developing or testing autonomous vehicles, while understanding that any new federal regulations may ultimately preempt states' authority to determine, for example, safety policies or how they handle their passengers' data.  We will continue to carefully monitor significant developments in this space. C.    Increasing Focus on Connectivity and Infrastructure in AV Development AVs operate through several interconnected technologies, including sensors and computer vision (e.g., radars, cameras and lasers), deep learning and other machine intelligence technologies, and robotics and navigation (e.g., GPS).
As lawmakers debate how to integrate AVs into existing infrastructure, a key emerging regulatory challenge is "connectivity."  While AV technology resides largely onboard the vehicle itself, and sensor systems are rapidly evolving to meet the demands of AV operations, fully autonomous vehicles nonetheless require sufficient network infrastructure to communicate efficiently with their surroundings (i.e., to communicate both with infrastructure, such as traffic lights and signage, and vehicle-to-vehicle, collectively known as Vehicle-to-Everything communication, or "V2X").[62] At present, there are two competing technical standards for V2X on the European market: the ITS-G5 Wi-Fi standard and the alternative "C-V2X" ("Cellular Vehicle-to-Everything") standard.  C-V2X is designed to work with 5G wireless technology but is incompatible with Wi-Fi.  There is presently neither regulatory nor industry consensus on this topic.  An industry group, the 5G Automotive Association, now counts more than 100 members, who argue that C-V2X is preferable to Wi-Fi in terms of security, reliability, range and reaction time.[63]  However, in April 2019, the European Commission proposed a legal act to regulate so-called "Cooperative-Intelligent Transport Systems" (C-ITS), backing the ITS-G5 Wi-Fi standard.[64] By contrast, in the United States, the AV 3.0 guidelines acknowledged that private sector companies were already researching and testing C-V2X technology alongside the Dedicated Short-Range Communication ("DSRC")-based deployments, but also cautioned that while V2X is an important complementary technology that is expected to enhance the benefits of automation at all levels, "it should not be and realistically cannot be a precondition to the deployment of automated vehicles" and that DoT "does not promote any particular technology over another."[65]  This approach appears to be in line with the DoT's overarching desire to remain "technologically neutral" to avoid interfering with innovation.  Nonetheless, in December 2018, the DoT announced that it was seeking public comment on V2X communications,[66] noting that "there have been developments in core aspects of the communication technologies needed for V2X, which have raised questions about how the Department can best ensure that the safety and mobility benefits of connected vehicles are achieved without interfering with the rapid technological innovations occurring in both the automotive and telecommunications industries," including in both C-V2X and "5G" communications, which "may, or may not, offer both advantages and disadvantages over DSRC."[67] Meanwhile, AVs built in China—which has set a goal of 10% of vehicles reaching Level 4/5 autonomy by 2030—will support the C-V2X standard, and will likely be developed in an ecosystem of infrastructure, technical standards and regulatory requirements distinct from those of their European counterparts.[68]  In addition to setting a national DSRC standard, China also plans to cover 90% of the country with C-V2X sensors by 2020.[69]  In 2017, the Chinese government called for more than 100 domestic standards for AVs and other internet-connected vehicles.  Instead of GPS, Chinese AVs will support China's BeiDou GNSS standard, which requires different receiver chips to communicate with Chinese satellites.  Major Chinese cities also enforce license plate jurisdiction and have roads and lanes dedicated to specific vehicle types, allowing for more effective geo-fencing of AV testing and operating areas.
AV companies will have to engage with forthcoming standards and development plans from China's Ministry of Industry and Information Technology, its AV-coordinating commission (the "Internet of Vehicles Development Commission"), and quasi-private industry groups.[70] Given the lack of international (or even national) consensus and the potential burden of developing and installing different systems in vehicles for domestic markets and for export, companies operating in the AV space should remain alert to developments in this rapidly evolving landscape of technical standards and infrastructure. IV.    Ethics and Data Privacy The rapidly expanding uses for artificial intelligence, both personal and professional, raise a number of issues for governments worldwide and also for companies attempting to navigate an evolving ethics landscape, including threats to data privacy as well as calls for transparency and accountability. A.    Government Regulation of Artificial Intelligence The United States continues to be a key player and dominant force in the development of artificial intelligence, and the U.S. government continues to identify AI as a key concern when it comes to cybersecurity and data privacy.  For example, the Office of the Director of National Intelligence recently highlighted, in its 2019 "National Intelligence Strategy" report, that U.S. adversaries benefit from AI-created military and intelligence capabilities, and emphasized that such capabilities pose significant threats to U.S. interests.[71]  But despite this key role in the development of emerging technologies, and the threats faced by the United States, there has been little by way of public guidance or regulation of AI, at least at the federal level.[72] In contrast, the European Union ("EU") has recently issued guidance on ethical considerations in the use of AI.  Following the implementation of its General Data Protection Regulation ("GDPR") in 2018, the EU recently released a report from its "High-Level Expert Group on Artificial Intelligence": the EU "Ethics Guidelines for Trustworthy AI" ("Guidelines").[73]  The Guidelines lay out seven ethical principles "that must be respected in the development, deployment, and use of AI systems": (1) Human Agency and Oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy. (2) Robustness and Safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems. (3) Privacy and Data Governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them. (4) Transparency: The traceability of AI systems should be ensured. (5) Diversity, Non-Discrimination and Fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility. (6) Societal and Environmental Well-Being: AI systems should be used to promote positive social change and enhance sustainability and ecological responsibility. (7) Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
In addition to laying out these principles, the Guidelines highlight the importance of implementing a "large-scale pilot with partners" and of "building international consensus for human-centric AI."[74]  Specifically, the Commission will launch a pilot phase of guideline implementation in Summer 2019, working with "like-minded partners such as Japan, Canada or Singapore."[75]  The EU also intends to "continue to play an active role in international discussions and initiatives including the G7 and G20."[76] While the Guidelines do not appear to impose any binding obligations on stakeholders in the EU, their further development and evolution will likely shape the final version of future regulation throughout the EU.  Therefore, the Summer 2019 pilot program, as well as any further international work between the EU and other partners, merits continued attention. B.    DARPA Prioritizes Ethics in AI Development DARPA hosted an Artificial Intelligence Colloquium on March 6-7, 2019, in Alexandria, Virginia, to increase awareness of DARPA's expansive AI R&D efforts.[77]  In the weeks after the colloquium, several news sources reported on DARPA's AI research and technology.  In an interview about DARPA's AI-infused drones, which would be used to map combatants and civilians in the field, the agency explained how ethics is informing its development and implementation of AI systems.[78]  DARPA highlighted that it met with ethicists before advancing technical development of the technology.[79] C.    UN Urges Ban on Autonomous Weapons that Kill United Nations Secretary-General António Guterres has urged restrictions on the development of lethal autonomous weapons systems, or LAWS,[80] arguing that machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.[81]  Subsequently, Japan pledged that it will not develop fully automated weapons systems.[82]  A group of member states—including the UK, United States, Russia, Israel and Australia—is reportedly opposed to a preemptive ban in the absence of any international agreement on the characteristics of autonomous weapons.[83] [1]    Donald J. Trump, Executive Order on Maintaining American Leadership in Artificial Intelligence, The White House (Feb. 11, 2019), Exec. Order No. 13859, 84 Fed. Reg. 3967, available at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/ [2]    Jon Fingas, White House Launches Site to Highlight AI Initiatives, Engadget (Mar. 20, 2019), available at https://www.engadget.com/2019/03/20/white-house-ai-gov-website/. [3]    Paulina Glass, Here's the Key Innovation in DARPA AI Project: Ethics From the Start, Defense One (Mar. 15, 2019), available at https://www.defenseone.com/technology/2019/03/heres-key-innovation-darpas-ai-project-ethics-start/155589/. [4]    Supra, n.1. [5]    The White House, Accelerating America's Leadership in Artificial Intelligence, Office of Science and Technology Policy (Feb. 11, 2019), available at https://www.whitehouse.gov/briefings-statements/president-donald-j-trump-is-accelerating-americas-leadership-in-artificial-intelligence/. [6]    See, e.g., Jamie Condliffe, In 2017, China is Doubling Down on AI, MIT Technology Review (Jan. 17, 2017), available at https://www.technologyreview.com/s/603378/in-2017-china-is-doubling-down-on-ai/; Cade Metz, As China Marches Forward on A.I., the White House Is Silent, N.Y.
Times (Feb. 12, 2018), available at https://www.nytimes.com/2018/02/12/technology/china-trump-artificial-intelligence.html?module=inline. [7]    Jessica Baron, Will Trump's New Artificial Intelligence Initiative Make The U.S. The World Leader In AI?, Forbes (Feb. 11, 2019), available at https://www.forbes.com/sites/jessicabaron/2019/02/11/will-trumps-new-artificial-intelligence-initiative-make-the-u-s-the-world-leader-in-ai/ (noting that, after Canada in March 2017, the United States will be the 19th country to announce a formal strategy for the future of AI); see also, as noted in our recent Artificial Intelligence and Autonomous Systems Legal Update (4Q18), the German government's new AI strategy, published in November 2018, which promises an investment of €3 billion before 2025 with the aim of promoting AI research, protecting data privacy and digitalizing businesses (available at https://www.bundesregierung.de/breg-en/chancellor/ai-a-brand-for-germany-1551432). [8]    Supra, n.5. [9]    Supra, n.1. [10]   Supra, n.1 at 3969. [11]   Id. at 3970. [12]   Id. at 3971. [13]   Id. [14]   Id. at 3967. [15]   Id. [16]   Id. at 3970. [17]   Id. [18]   See, e.g., James Vincent, Trump Signs Executive Order to Spur US Investment in Artificial Intelligence, The Verge (Feb. 11, 2019); Oren Etzioni, What Trump's Executive Order on AI Is Missing, Wired (Feb. 13, 2019), available at https://www.wired.com/story/what-trumps-executive-order-on-ai-is-missing/. [19]   Darrell M. West, Assessing Trump's Artificial Intelligence Executive Order, The Brookings Institution (Feb. 12, 2019), available at https://www.brookings.edu/blog/techtank/2019/02/12/assessing-trumps-artificial-intelligence-executive-order/. [19a]   H.R. 2202, 116th Cong. (2019).  For more details, see https://www.congress.gov/bill/116th-congress/house-bill/2202 or https://lipinski.house.gov/press-releases/lipinski-introduces-bipartisan-legislation-to-bolster-us-leadership-in-ai-research/. [20]   White House AI Order Emphasizes Use for Citizen Experience, MeriTalk (Apr. 18, 2019), available at https://www.meritalk.com/articles/white-house-ai-order-emphasizes-use-for-citizen-experience/. [21]   Khari Johnson, The White House Launches ai.gov, VentureBeat (Mar. 19, 2019), available at https://venturebeat.com/2019/03/19/the-white-house-launches-ai-gov/. [22]   Donald J. Trump, Artificial Intelligence for the American People, The White House (2019), available at https://www.whitehouse.gov/ai/. [23]   Id.; see further our previous legal updates for more details on some of these initiatives: https://www.gibsondunn.com/?search=news&s=&practice%5B%5D=36270. [24]   Cory Booker, Booker, Wyden, Clarke Introduce Bill Requiring Companies To Target Bias In Corporate Algorithms, United States Senate (Apr. 10, 2019), available at https://www.booker.senate.gov/?p=press_release&id=903; see also S. Res. __, 116th Cong. (2019). [25]   See, e.g., Karen Hao, Congress Wants To Protect You From Biased Algorithms, Deepfakes, And Other Bad AI, MIT Technology Review (Apr. 15, 2019), available at https://www.technologyreview.com/s/613310/congress-wants-to-protect-you-from-biased-algorithms-deepfakes-and-other-bad-ai/; Meredith Whittaker et al., AI Now Report 2018, AI Now Institute, 2.2.1 (Dec. 2018), available at https://ainowinstitute.org/AI_Now_2018_Report.pdf; Russell Brandom, Congress Thinks Google Has a Bias Problem—Does It?, The Verge (Dec. 12, 2018), available at https://www.theverge.com/2018/12/12/18136619/google-bias-sundar-pichai-google-hearing. [26]   Supra, n.24. [27]   Id.
[28]   Id. [29]   Id. [30]   Tony Romm, Senate Republicans Renew Their Claims that Facebook, Google and Twitter Censor Conservatives, The Washington Post (Apr. 10, 2019), available at https://www.washingtonpost.com/technology/2019/04/10/facebook-google-twitter-under-fire-senate-republicans-censoring-conservatives-online/?utm_term=.69aa442c36a5. [31]   Supra, n.1 at 3969 (stating that "[h]eads of all agencies shall review their Federal data and models to identify opportunities to increase access and use by the greater non-Federal AI research community in a manner that benefits that community, while protecting safety, security, privacy, and confidentiality.  Specifically, agencies shall improve data and model inventory documentation to enable discovery and usability, and shall prioritize improvements to access and quality of AI data and models based on the AI research community's user feedback."). [32]   Amanda Ziadeh, Giant Oak's Gary Shiffman Testifies About How AI Can Combat Financial Crime, WashingtonExec (Mar. 18, 2019), available at https://washingtonexec.com/2019/03/giant-oaks-gary-shiffman-testifies-about-how-ai-can-combat-financial-crime/. [33]   Terri Moon Cronk, DOD Unveils Its Artificial Intelligence Strategy, The Department of Defense (Feb. 12, 2019), available at https://dod.defense.gov/News/Article/Article/1755942/dod-unveils-its-artificial-intelligence-strategy/. [34]   Id. [35]   Id. [36]   Id. [37]   Will Knight, How Malevolent Machine Learning Could Derail AI, MIT Technology Review (Mar. 25, 2019), available at https://www.technologyreview.com/s/613170/emtech-digital-dawn-song-adversarial-machine-learning/. [38]   AI Now Institute, Discriminating Systems: Gender, Race, and Power in AI (Apr. 2019), available at https://ainowinstitute.org/discriminatingsystems.pdf. [39]   See Brian Higgins, Washington State Seeks to Root Out Bias in Artificial Intelligence Systems, Artificial Intelligence Technology and the Law (Feb. 6, 2019), available at http://aitechnologylaw.com/2019/02/washington-state-seeks-to-root-out-bias-in-artificial-intelligence-systems/. [40]   Id. [41]   Id. [42]   Id. [43]   Id. [44]   DJ Pangburn, Washington Could Be the First State to Rein in Automated Decision-Making, Fast Company (Feb. 8, 2019), available at https://www.fastcompany.com/90302465/washington-introduces-landmark-algorithmic-accountability-laws.  A separate privacy bill stalled in the Washington State House of Representatives, despite having been overwhelmingly approved by the state Senate in March 2019.  The bill, which is intended to strengthen consumer rights by regulating how online technology companies collect, use, share and sell consumers' personal information, likely will not be passed in 2019; see Allison Grande, Washington State Privacy Bill Likely Won't Pass This Year, Law360 (Apr. 19, 2019), available at https://www.law360.com/articles/1151755/washington-state-privacy-bill-likely-won-t-pass-this-year. [45]   Gov.UK, Investigation launched into potential for bias in algorithmic decision-making in society (Mar. 20, 2019), available at https://www.gov.uk/government/news/investigation-launched-into-potential-for-bias-in-algorithmic-decision-making-in-society. [46]   Id. [47]   Karen Hao, AI Is Sending People To Jail – And Getting It Wrong, MIT Technology Review (Jan. 21, 2019), available at https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/.
See also Rod McCullom, Facial Recognition Technology is Both Biased and Understudied, UnDark (May 17, 2017), available at https://undark.org/article/facial-recognition-technology-biased-understudied/. [48]   See Karen Hao, Police Across the US Are Training Crime-Predicting AIs on Falsified Data, MIT Technology Review (Feb. 13, 2019), available at https://www.technologyreview.com/s/612957/predictive-policing-algorithms-ai-crime-dirty-data/. [49]   Rashida Richardson, Jason Schultz & Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice (Feb. 13, 2019), N.Y.U. Law Review Online (forthcoming), available at SSRN: https://ssrn.com/abstract=3333423. [50]   Supra, n.48. [51]   H.R. 3388, 115th Cong. (2017). [52]   U.S. Senate Committee on Commerce, Science and Transportation, Press Release, Oct. 24, 2017, available at https://www.commerce.senate.gov/public/index.cfm/pressreleases?ID=BA5E2D29-2BF3-4FC7-A79D-58B9E186412C. [53]   Letter from Democratic Senators to U.S. Senate Committee on Commerce, Science and Transportation (Mar. 14, 2018), available at https://morningconsult.com/wp-content/uploads/2018/11/2018.03.14-AV-START-Act-letter.pdf. [54]   U.S. Dept. of Transp., Preparing for the Future of Transportation: Automated Vehicles 3.0 (Oct. 2018), available at https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf; see also https://www.law360.com/articles/1113438/how-av-3-0-changed-the-autonomous-vehicle-game-in-2018. [55]   Nat'l Conference of State Legislatures, Autonomous Vehicles State Bill Tracking Database (Apr. 9, 2019), available at http://www.ncsl.org/research/transportation/autonomous-vehicles-legislative-database.aspx. [56]   Id. [57]   Insurance Institute for Highway Safety, Highway Loss Data Institute, Automation and Crash Avoidance (Apr. 2019), available at https://www.iihs.org/iihs/topics/laws/driving-automation; for an interactive map showing the status of state laws, see https://www.iihs.org/iihs/topics/laws/driving-automation/driving-automation-map?topicName=automation-and-crash-avoidance. [58]   Dan Robitzski, Florida Law Would Allow Self-Driving Cars With No Safety Drivers, Futurism (Jan. 29, 2019), available at https://futurism.com/florida-law-self-driving-cars. [59]   Andrew Giambrone, Self-Driving Cars Are Coming. D.C. Lawmakers Want To Regulate Them, Curbed (Apr. 3, 2019), available at https://dc.curbed.com/2019/4/3/18294167/autonomous-vehicles-dc-self-driving-cars-regulations. [60]   Id.; see further the Autonomous Vehicles Testing Program Amendment Act of 2019, available at http://lims.dccouncil.us/Download/42211/B23-0232-Introduction.pdf. [61]   See, e.g., Rainwater & DuPuis, Cities Have Taken the Lead in Regulating Driverless Vehicles, CityLab (Oct. 3, 2018), available at https://www.citylab.com/perspective/2018/10/cities-lead-regulation-driverless-vehicles/573325/; Jamie Condliffe, Who Should Let Driverless Cars Off the Leash?, N.Y. Times (Mar. 29, 2019), available at https://www.nytimes.com/2019/03/29/technology/lyft-autonomous-cars.html.
[62]   6 Key Connectivity Requirements of Autonomous Driving, IEEE Spectrum, available at https://spectrum.ieee.org/transportation/advanced-cars/6-key-connectivity-requirements-of-autonomous-driving ("Industry leaders will need to master connectivity to deliver the V2X (vehicle-to-everything) capabilities fully autonomous driving promises."). [63]   This position is also supported by the telecommunications lobby GSMA; see Foo Yun Chee, Telco Lobby Urges EU Lawmakers To Spurn Push For Wifi Car Standard, Reuters (Apr. 9, 2019), available at https://www.reuters.com/article/us-eu-autos-technology/telco-lobby-urges-eu-lawmakers-to-spurn-push-for-wifi-car-standard-idUSKCN1RL2MH?feedType=RSS&feedName=technologyNews. [64]   Douglas Busvine, Explainer: Betting On The Past? Europe Decides On Connected Car Standards, Reuters (Apr. 18, 2019), available at https://www.reuters.com/article/us-eu-autos-technology-explainer/explainer-betting-on-the-past-europe-decides-on-connected-car-standards-idUSKCN1RU214. [65]   Supra, n.54 at 13, 16. [66]   U.S. Dept. of Transp., U.S. Department of Transportation Releases Request for Comment (RFC) on Vehicle-to-Everything (V2X) Communications (Dec. 18, 2018), available at https://www.nhtsa.gov/press-releases/us-department-transportation-releases-request-comment-rfc-vehicle-everything-v2x. [67]   Office of the Federal Register, Notice of Request for Comments: V2X Communications (Dec. 12, 2018), available at https://www.federalregister.gov/documents/2018/12/26/2018-27785/notice-of-request-for-comments-v2x-communications. [68]   Patrick Lozada, China's AVs Will Think And Drive Differently, Axios (Mar. 8, 2019), available at https://www.axios.com/chinas-av-will-think-and-drive-differently-e0a823b4-df60-4667-b21b-7bbe529e53cd.html. [69]   Id. [70]   Patrick Lozada, An Obscure Chinese Commission Could Change The Future Of AVs, Axios (Jan. 18, 2019), available at https://www.axios.com/an-obscure-chinese-commission-could-change-the-future-of-avs-b7520e56-90cf-4b5a-b523-7c44846e08ec.html. [71]   Office of the Director of National Intelligence, National Intelligence Strategy of the United States of America (2019), available at https://www.intel.gov/templates/intelgov-template/custom-sections/the-nis-at-a-glance/pdf/National_Intelligence_Strategy_2019.pdf?mod=article_inline. [72]   https://www.gibsondunn.com/artificial-intelligence-and-autonomous-systems-legal-update-3q18/. [73]   European Commission, Ethics Guidelines for Trustworthy AI (Apr. 8, 2019), available at https://privacyblogfullservice.huntonwilliamsblogs.com/wp-content/uploads/sites/28/2019/04/AIEthicsGuidelinespdf1.pdf. [74]   Artificial Intelligence: Commission takes forward its work on ethical guidelines, Press Release (Apr. 8, 2019), available at http://europa.eu/rapid/press-release_IP-19-1893_en.htm. [75]   Id. [76]   Id. [77]   DARPA Announces 2019 AI Colloquium, available at https://www.darpa.mil/news-events/2018-11-16. [78]   Paulina Glass, Here's the Key Innovation in DARPA AI Project: Ethics From the Start, Defense One (Mar. 15, 2019), available at https://www.defenseone.com/technology/2019/03/heres-key-innovation-darpas-ai-project-ethics-start/155589/. [79]   Id. [80]   Autonomous Weapons that Kill Must be Banned, Insists UN Chief, UN News (Mar. 25, 2019), available at https://news.un.org/en/story/2019/03/1035381. [81]   Id. [82]   Japan Pledges No AI "Killer Robots," MeriTalk (Mar. 25, 2019), available at https://www.meritalk.com/articles/japan-pledges-no-ai-killer-robots/.
[83]   Damien Gayle, UK, US and Russia among those opposing killer robot ban, The Guardian (Mar. 29, 2019), available at https://www.theguardian.com/science/2019/mar/29/uk-us-russia-opposing-killer-robot-ban-un-ai. The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances A. Waldmann, Tony Bedel, Panayiota Burquier, Martie P. Kutscher, Gatsby Miller and Arjun Rangarajan. Gibson Dunn's lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, or any of the following lawyers in the firm's Artificial Intelligence and Automated Systems Group: H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Frances A. Waldmann – Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com) © 2019 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

February 12, 2019 |
President Trump Issues Executive Order on “Maintaining American Leadership in Artificial Intelligence”

Click for PDF Yesterday, President Donald Trump signed an executive order ("EO") creating the "American AI Initiative,"[1] intended to spur the development and regulation of artificial intelligence, machine learning and deep learning ("AI") and fortify the United States' global position by directing federal agencies to prioritize investments in research and development of AI.[2]  Despite its position at the forefront of AI innovation, the U.S. still lacks an overall federal AI strategy and policy.  And during the past two years, observers noted other governments' concerted efforts and considerable expenditures to strengthen their domestic AI research and development, particularly China's plan to become a world leader in AI by 2030.[3] These developments abroad prompted many to call for a comprehensive government strategy and similar investments by the United States government to ensure its position as a global leader in AI development and application.[4]  Yesterday's announcement was therefore welcome but not entirely unexpected, following several recent statements made by President Trump that indicated an understanding of the urgency of some of these needs.[5]  In announcing the AI Initiative, the Trump administration noted that "as the pace of AI innovation increase[s] around the world, we cannot sit idly by and presume that our leadership is guaranteed."[6] I.    Overview of the Executive Order In support of this position, the Trump administration outlined five key areas for AI prioritization:[7] (1) Investing in AI Research and Development (R&D): encouraging federal agencies to prioritize AI investments in their "R&D missions" to encourage "sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security."[8] (2) Unleashing AI Resources: making federal data and models more accessible to the AI research community by "improv[ing] data and model inventory documentation to enable discovery and usability" and "prioritiz[ing] improvements to access and quality of AI data and models based on the AI research community's user feedback."[9] (3) Setting AI Governance Standards: aiming to foster public trust in AI by using federal agencies to develop and maintain approaches for the safe and trustworthy creation and adoption of new AI technologies (for example, the EO calls on the National Institute of Standards and Technology ("NIST") to lead the development of appropriate technical standards). (4) Building the AI Workforce: asking federal agencies to prioritize fellowship and training programs to prepare for changes relating to AI technologies and promoting Science, Technology, Engineering and Mathematics (STEM) education. (5) International Engagement and Protecting the United States' AI Advantage: calling on agencies to collaborate with other nations but also to protect the nation's economic security interest against competitors and adversaries. The AI Initiative is set to be coordinated through the National Science and Technology Council ("NSTC") Select Committee on Artificial Intelligence ("Select Committee").  The full impact of the AI Initiative is not yet known: while it sets some specific deadlines for formalizing plans by agencies under the direction of the Select Committee, the EO is not self-executing and is generally thin on details.
Therefore, the long-term impact will be in the actions recommended and taken as a result of those consultations and reports, not the EO itself.  Moreover, although the AI Initiative is designed to dedicate resources and funnel investments into AI research, the EO does not set aside specific financial resources or provide details on how available resources will be structured. II.    Regulation of AI Applications For now, stakeholders should mark their calendars for August 10, 2019, the EO's internal deadline for agencies to submit responsive plans and memoranda.  The EO directs the Office of Management and Budget ("OMB") Director, in coordination with the directors of the Office of Science and Technology Policy ("OSTP"), Domestic Policy Council, and National Economic Council, and in consultation with other relevant agencies and key stakeholders (as determined by OMB), to issue a memorandum to the heads of all agencies to "inform the development of regulatory and non‑regulatory approaches" to AI that "advance American innovation while upholding civil liberties, privacy, and American values" and consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application.[10]  Companies and other organizations interested in helping to shape this memorandum should note that the EO directs the OMB Director to determine key stakeholders and issue a draft version of the memorandum for public comment before it is finalized.[11] A.    Development of Technical Standards "In Support of Reliable, Robust, and Trustworthy Systems" AI developers will need to pay particular attention to future agency developments concerning standards setting.  So far, the primary concern for standards sounds in safety, and the AI Initiative echoes this with a high-level directive to regulatory agencies to establish guidance for AI development and use across technologies and industrial sectors, highlighting the need to develop "appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies."[12]  Within 180 days of the EO, the Secretary of Commerce, through the Director of NIST, shall "issue a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies" with participation from relevant agencies as the Secretary of Commerce shall determine.  The plan is intended to include "Federal priority needs for standardization of AI systems development and deployment," the identification of "standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles," and "opportunities for and challenges to United States leadership in standardization related to AI technologies."[13] Accordingly, within the next six months, we can expect to see proposals from the General Services Administration ("GSA"), OMB, NIST, and other agencies on topics such as data formatting and availability, standards, and other potential regulatory efforts.  NIST's indirect participation in the development of AI-related standards through the International Organization for Standardization ("ISO") may prove to be an early bellwether for future developments.[14] B.
AI Policy and Ethics So far, there has been relatively little substantive discussion at the federal level about AI governance, its societal impact and related ethical issues—such as data privacy and job security—and while the EO notionally aims to "[r]educe barriers to the use of AI technologies to promote their innovative application while protecting American technology, economic and national security, civil liberties, privacy, and values"[15] and "foster public trust and confidence in AI technologies,"[16] it is otherwise vague about how the program plans to ensure that responsible development and use of AI remain central throughout the process, and the extent to which AI policy researchers and stakeholders (such as academic institutions and non-profits) will be invited to participate.[17] III.    Data and Computing Resources for AI Research and Development The promised availability of governmental data[18] may prove to be a boon for those in certain AI industries that have reached or are fast approaching commercial viability—particularly those for whom government agencies have already expressed interest in certain forms of data collection (e.g., in the context of the regulation of autonomous vehicles by the National Highway Traffic Safety Administration ("NHTSA")).  Within 90 days of the EO, the OMB Director will publish a notice in the Federal Register inviting the public to identify additional requests for access or quality improvements for federal data and models that would improve AI R&D and testing and, in conjunction with the Select Committee, "investigate barriers to access or quality limitations of Federal data and models that impede AI R&D and testing."[19]  OMB has also been directed to update implementation guidance for Enterprise Data Inventories and Source Code Inventories to "support discovery and usability in AI R&D" within 120 days.[20] IV.    United States' Leadership in AI Policy While the Trump administration's formal recognition of the importance of federal guidance and leadership on AI policy is certainly a step in the right direction, it also remains to be seen how calls for thoughtful global AI governance and international collaboration can be reconciled with national narratives focusing on leading the "AI race."[21]  Notably, the EO requires NIST to submit an action plan—intended to "organize the development of an action plan to protect the United States advantage in AI and AI technology critical to United States economic and national security interests against strategic competitors and adversarial nations"—to the President within 120 days.[22]  While the EO stakes an opening position for the United States government and seeks to align the Trump administration with leadership in AI innovation, it is at this point largely aspirational, and interested parties will need to closely monitor further developments in the coming months as the various agencies respond to its decrees to gauge the full impact of the AI Initiative.  We will continue to closely follow the situation. [1]  Donald J. Trump, Executive Order on Maintaining American Leadership in Artificial Intelligence, The White House (Feb. 11, 2019), available at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/. [2]  The White House, Accelerating America's Leadership in Artificial Intelligence, Office of Science and Technology Policy (Feb.
11, 2019), available at https://www.whitehouse.gov/briefings-statements/president-donald-j-trump-is-accelerating-americas-leadership-in-artificial-intelligence/. [3]  See, e.g., Jamie Condliffe, In 2017, China is Doubling Down on AI, MIT Technology Review (Jan. 17, 2017), available at https://www.technologyreview.com/s/603378/in-2017-china-is-doubling-down-on-ai/; Cade Metz, As China Marches Forward on A.I., the White House Is Silent, N.Y. Times (Feb. 12, 2018), available at https://www.nytimes.com/2018/02/12/technology/china-trump-artificial-intelligence.html?module=inline; Jessica Baron, Will Trump’s New Artificial Intelligence Initiative Make The U.S. The World Leader In AI?, Forbes (Feb. 11, 2019), available at https://www.forbes.com/sites/jessicabaron/2019/02/11/will-trumps-new-artificial-intelligence-initiative-make-the-u-s-the-world-leader-in-ai/#70d3ea99a017 (noting that, after Canada in March 2017, the U.S. will be the 19th country to announce a formal strategy for the future of AI); see also, as noted in our recent Artificial Intelligence and Autonomous Systems Legal Update (4Q18), the German government’s new AI strategy, published in November 2018, which promises an investment of €3 billion before 2025 with the aim of promoting AI research, protecting data privacy and digitalizing businesses (available at https://www.bundesregierung.de/breg-en/chancellor/ai-a-brand-for-germany-1551432). [4]  Joshua New, Why the United States Needs a National Artificial Intelligence Strategy and What It Should Look Like, The Center for Data Innovation (Dec. 4, 2018), available at http://www2.datainnovation.org/2018-national-ai-strategy.pdf. [5]  David McCabe, Trump Points to Tech in State of the Union, Axios (Feb. 6, 2019), available at https://www.axios.com/trump-tech-state-of-the-union-1549422117-866c9b49-c029-4e36-acb8-3b78117579f0.html. [6]  Supra, n.2. [7]  Id. [8]  Supra, n.1 at § 2(a). [9]   Id. at § 5(a). [10]  Id. at § 6(a). [11]  Id. at § 6(a); (b). [12]  Id. at § 1(b). [13]  Id. at § 6(d)(i)(A)-(C). [14]  NIST’s officially recognized U.S. representative to ISO is the American National Standards Institute (“ANSI”), which acts as the standards organization representing the United States as a full ISO member, influencing ISO standards development and strategy by participating and voting in ISO technical and policy meetings. See Members List for the International Organization for Standardization, available at https://www.iso.org/members.html. [15]  Supra, n.1 at § 2(c). [16]  Id. at § 1(d). [17]  Id. at § 4(c) (noting that “[t]o the extent appropriate and consistent with applicable law, heads of AI R&D agencies shall explore opportunities for collaboration with non-Federal entities, including: the private sector; academia; non-profit organizations; State, local, tribal, and territorial governments; and foreign partners and allies, so all collaborators can benefit from each other’s investment and expertise in AI R&D.”; see also § 6(d)(ii) (noting that NIST’s plan for the development of technical standards “shall be developed in consultation with the Select Committee, as needed, and in consultation with the private sector, academia, non‑governmental entities, and other stakeholders, as appropriate.”). [18]  Id. 
at § 5 (stating that “[h]eads of all agencies shall review their Federal data and models to identify opportunities to increase access and use by the greater non-Federal AI research community in a manner that benefits that community, while protecting safety, security, privacy, and confidentiality.”). [19]  Id. at § 5(a)(i). [20]  Id. at § 5(a)(ii). [21]  Id. at § 1 (listing as part of its policy and principles “enhancing international and industry collaboration with foreign partners and allies” as a means to “[m]aintain[…] American leadership in AI.”). [22]  Id. at § 8. Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, or any of the following lawyers: Artificial Intelligence and Automated Systems Group: H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Frances A. Waldmann – Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com) Administrative Law and Regulatory Group: Eugene Scalia – Co-Chair, Washington, D.C. (+1 202-955-8206, escalia@gibsondunn.com) Helgi C. Walker – Co-Chair, Washington, D.C.  (+1 202-887-3599, hwalker@gibsondunn.com) Michael D. Bopp – Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com) Thomas H. Dupree, Jr. – Washington, D.C. (+1 202-955-8547, tdupree@gibsondunn.com) © 2019 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

January 22, 2019 |
Artificial Intelligence and Autonomous Systems Legal Update (4Q18)

Click for PDF We are pleased to provide the following update on recent legal developments in the areas of artificial intelligence, machine learning and autonomous systems ("AI").  As AI technologies become increasingly commercially viable, one of the most interesting challenges lawmakers face in the governance of AI is determining which of its challenges can be safely left to ethics (appearing as informal guidance or voluntary standards), and which rules should be codified in law.[1]  Many of the recent updates we have chosen to highlight below illustrate how lawmakers and government agencies seek to develop AI strategies and policy with the aim of balancing the tension between protecting the public from the potentially harmful effects of AI technologies while encouraging positive innovation and competitiveness.[2]  For the most part, lawmakers continue to engage with a broad range of stakeholders on these issues, and there remains plenty of scope for companies operating in this space to participate in discussions around the legislative process and policymaking. __________________________ Table of Contents I.      Patent Eligibility for AI-Related Inventions II.    Federal Government Agencies Seek to Leverage the Benefit of AI and Innovative Technologies III.  Autonomous Vehicles IV.   Rising Concerns for AI's Potential to Create Bias and Discrimination V.    Use of AI in Criminal Proceedings VI.   Legal Technology __________________________ I.   Patent Eligibility for AI-Related Inventions As the adoption of AI technologies progresses rapidly, AI-related patent applications have kept pace, ballooning to over 154,000 worldwide since 2010.[3]  However, several U.S. court decisions in recent years have left technology and pharmaceutical companies unsure of whether their inventions are patentable under U.S. federal law.  One source of great unpredictability in the patent field is subject matter eligibility under 35 U.S.C. § 101—and in particular the abstract idea exception to patent eligibility—based primarily on the U.S. Supreme Court's decision in Alice Corp. v. CLS Bank International,[4] which has resulted in well-documented frustration in the lower courts trying to make sense of the precedent[5] and ultimately risks impeding U.S. innovation in AI and machine learning and the growth of commerce.[6] This confusion and unpredictability have prompted lawmakers to take action, inviting industry leaders and representatives of the American Bar Association's IP law section as well as retired members of the judiciary to a closed-door roundtable discussion on December 12, 2018 to discuss potential legislation to rework 35 U.S.C. § 101 on patent eligibility.[7] Against this backdrop, on January 4, 2019 the United States Patent and Trademark Office ("USPTO") announced updated guidance to help clarify the process that examiners should undertake when evaluating whether a pending claim is directed to an abstract idea under the Supreme Court's two-step Alice test and thus not eligible for patent protection under 35 U.S.C. § 101.  The USPTO's guidelines may arm applicants for AI-related inventions with a roadmap of how to avoid or overcome Section 101 rejections, but it remains to be seen how examiners interpret the guidelines.  For further details on the USPTO's guidelines, please see our recent Client Alert on The Impact of the New USPTO Eligibility Guidelines on Artificial Intelligence-related Inventions.
In November 2018, the European Patent Office ("EPO") also issued new guidelines that set out patentability criteria for AI technologies and, in particular, provide a range of examples for what subject matter is exempt from patentability under Articles 52(1), (2) and (3) of the Convention on the Grant of European Patents.[8]  Under the EPO guidance, which was well-received by companies in the field,[9] an inventor of an AI technology must show that the claimed subject matter has a "technical character."  While this approach does not deviate from the EPO's long-held position on exclusions to patentability, the guidelines provide welcome clarity and specific examples.  For instance, classifying digital images, videos, audio or speech signals based on low-level features (e.g., edges or pixel attributes for images) is a typical technical application of classification algorithms, whereas classifying text documents solely in respect of their textual content is regarded as a linguistic rather than a technical purpose.  Notably, the guidelines also potentially open the door to patent protection for training methodologies and mechanisms for generating training datasets.  In sum, a claim to an AI algorithm based upon a mathematical or computational model on its own is likely to be considered non-technical.  Accordingly, careful drafting will be required to impart onto the AI or machine learning component a technical character by reference to a specific technical purpose and/or implementation, rather than describing it as an abstract entity.  AI or machine learning algorithms in the context of non-technical systems are not likely to be patentable. Much will depend on how the USPTO and the EPO enforce their new guidelines.  The EPO guidelines' categorical exclusions of certain subject matter appear to stand in contrast to U.S. patent eligibility law (which may therefore prove more favorable to AI innovators seeking patent protection), but the EPO guidelines could offer a higher level of consistency and clarity as to what subject matter is exempt from patentability than the more fluid U.S. approach.  In the meantime, innovators in artificial intelligence and machine learning technologies should take note of these developments and exercise caution when making strategic decisions about which technologies should be patented and in which jurisdictions applications should be filed.
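To make the EPO's "low-level features" example concrete, the following toy sketch classifies images using only pixel-gradient statistics, the kind of low-level feature the guidance treats as a technical application. It is our illustration, not the EPO's: the function names, threshold and labels are invented, and a real classifier would be learned rather than hand-coded.

```python
# Toy illustration (not from the EPO guidance): classify an image using only
# a low-level pixel feature, here edge density.
import numpy as np

def edge_density(img: np.ndarray) -> float:
    """Fraction of pixels with a strong local gradient (a low-level feature)."""
    g = img.astype(float)
    gx = np.abs(np.diff(g, axis=1))           # horizontal pixel differences
    gy = np.abs(np.diff(g, axis=0))           # vertical pixel differences
    edges = (gx[:-1, :] + gy[:, :-1]) > 50.0  # crude, hand-picked threshold
    return float(edges.mean())

def classify(img: np.ndarray) -> str:
    # Invented decision rule, for illustration only.
    return "textured" if edge_density(img) > 0.1 else "flat"

flat = np.full((32, 32), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(np.uint8)
print(classify(flat), classify(noisy))  # flat textured
```

By contrast, a routine that sorted documents according to the meaning of their text would, under the guidance, serve a linguistic rather than a technical purpose.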
II.   Federal Government Agencies Seek to Leverage the Benefit of AI and Innovative Technologies As noted in our Artificial Intelligence and Autonomous Systems Legal Update (3Q18), 2018 saw few notable legislative developments, but increasing federal government interest in AI technologies, a trend which has continued apace amid increasing appreciation by lawmakers of AI as a potent general purpose technology. A.   Future of Artificial Intelligence Act The legislative landscape has not been especially active in the AI sector this past quarter.  In December 2017, a group of senators and representatives introduced the Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017, also known as the FUTURE of Artificial Intelligence Act (the "Act"), which, if passed, would not regulate AI directly, but would instead form a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence.[10]  The purpose of the Committee is to help inform the government's response to the AI sector on several issues, including the competitiveness of the U.S. in regard to AI innovation, workforce issues including the possible effect of technological displacement of workers, education, ethics training, open sharing of data, and international cooperation.[11]  The Act's definition of AI is broad and could encompass AI technologies in any number of fields and industries.  At present, the Act remains pending in Congress. B.   FDA Releases New Rules for Medical Devices Incorporating AI In early December 2018, the U.S. Food and Drug Administration ("FDA") released a proposed rule that aimed to update the review process for certain medical devices before they enter the marketplace.  The proposed rule would clarify the applicable statutory language by establishing procedures and criteria for the so-called de novo pathway used to review the safety and effectiveness of innovative medical devices that do not have predicates, and would apply to certain medical devices incorporating AI.[12]  Back in April 2018, the FDA approved IDx-DR, a software program that uses an AI algorithm to detect eye damage from diabetes.[13]  Since then, the FDA has approved other AI devices at what appears to be an increasing rate.  If the rule is finalized, the FDA anticipates that companies developing novel medical devices will be able to take advantage of a more efficient process and clearer standards when seeking de novo classification. C.   IRS Invests in AI to Detect Criminal Activity More Efficiently Facing years of budget cuts and a declining number of employees, the IRS is increasingly investing in technology driven by AI to identify and prosecute tax fraud and rein in offshore tax evasion.  During an American Bar Association webcast in December 2018, Todd Egaas, Director of Technology, Operations and Investigative Services in the IRS' Criminal Investigations office, explained that, "We've been running thin on people lately and rich on data.  And so what we've been working on—and this is where we think data can help us—is how do we make the most use out of our people?"[14] Part of the IRS' strategy is a recently signed seven-year, $99 million deal with Palantir Technologies to help the agency "connect the dots in millions of tax filings, bank transactions, phone records, and even social media posts."[15]  Palantir's technology will be used to assist the IRS in determining which cases to investigate and prosecute with its more limited manpower.  Egaas and Benjamin Herndon, the IRS' Chief Analytics Officer, offered some specific insights into how the IRS is using these advanced technologies.  Not only is the speed of processing data anticipated to increase to become near real-time, but machine learning algorithms and AI "identify patterns in graphs where noncompliance might be present" and "prove particularly helpful in combating identity thieves fraudulently applying for tax refunds."[16]
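The quoted description of graph-based pattern spotting can be illustrated with a deliberately simplified sketch. Everything below (the data, the degree-based rule, the threshold) is invented for illustration and reflects nothing about actual IRS or Palantir systems, which are not public:

```python
# Invented illustration: flag unusually connected entities in a toy
# transaction graph, a crude stand-in for "patterns in graphs where
# noncompliance might be present."
from collections import defaultdict

transactions = [  # (payer, payee) pairs, fabricated data
    ("shell_co", "acct_1"), ("shell_co", "acct_2"), ("shell_co", "acct_3"),
    ("shell_co", "acct_4"), ("employer", "worker"), ("worker", "landlord"),
]

degree = defaultdict(int)
for payer, payee in transactions:
    degree[payer] += 1
    degree[payee] += 1

mean_degree = sum(degree.values()) / len(degree)
# Crude outlier rule: flag entities far more connected than average.
flagged = [entity for entity, d in degree.items() if d > 2 * mean_degree]
print(flagged)  # ['shell_co'], a hub that might merit closer review
```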
Treasury’s Financial Crimes Enforcement Network (“FinCEN”), the National Credit Union Administration, and the Office of the Comptroller of the Currency (collectively, the “Agencies”) are urging banks to use AI and other innovative technologies to combat money laundering and terrorist financing, according to a joint statement issued on December 3, 2018.[17] The Agencies are attempting to foster innovation by encouraging banks to identify “new ways of using existing tools or adopting new technologies” to help “identify and report money laundering, terrorist financing, and other illicit financial activity by enhancing the effectiveness and efficiency” of existing Bank Secrecy Act/anti-money laundering (“BSA/AML”) compliance programs.[18]  The Agencies have seen how experimentations with AI and digital identity technologies have strengthened banks’ compliance approaches and have enhanced transaction monitoring systems.[19] Companies in the financial sector considering the use of these innovative compliance programs should note the Agencies’ assurances that doing so “will not result in additional regulatory expectations,” and that banks “should not” be criticized if their efforts “ultimately prove unsuccessful,” nor will the agencies necessarily impose supervisory actions if innovative pilot programs “expose gaps” in a compliance program.[20]  Similarly, if “banks test or implement artificial intelligence-based transaction monitoring systems and identify suspicious activity that would not otherwise have been identified under existing processes, there will be no automatic assumption that the banks’ existing processes are deficient.”[21]  However, the Agencies clarified that banks must continue to meet their BSA/AML compliance obligations when developing and testing pilot programs.[22] E.   FTC Hearings Address Consumer Protection and Antitrust Issues Sparked by AI Over the course of 2018, the Federal Trade Commission held a series of hearings on “Competition and Consumer Protection in the 21st Century,” which provided a platform for industry leaders, academics, and regulators to discuss whether changes in the domestic and world economics and new technologies, among other developments, may require changes to competition and consumer protection law, enforcement priorities, and policy.[23]  On November 13-14, 2018, the conversation turned to algorithms, artificial intelligence, and predictive analytics.  The risk that AI and algorithms will perpetuate bias, discrimination, and existing socioeconomic disparities was a shared concern.[24]  And there was broad agreement that bias can be combatted at several different stages in the development and use of AI.  For example, panelists spoke about the need for good algorithmic design, high-quality data, rigorous design, and some degree of consumer understanding to help manage and combat bias. Panelists also debated whether existing laws are robust enough to address ethical issues posed by the use of AI, or whether an AI consumer protection law is warranted.  There appeared to be consensus that existing laws, i.e., the Fair Credit Reporting Act and anti-discrimination laws, and the regulatory powers of the FTC (though Section 5 of the FTC Act, which prohibits unfair methods of conduct in commerce), were sufficient and that caution was warranted before proposing new laws. 
III.   Autonomous Vehicles A.   Recent Developments The global self-driving vehicle market is estimated to reach $42 billion by 2025,[26] and companies working on self-driving cars continue to aggressively compete for new talent and new technology.  And, given the fast pace of developments, governments worldwide are facing the challenge of regulating this novel and complex industry without stifling innovation and global competitiveness—some making more progress than others. 2019 could prove a watershed year for the fledgling autonomous vehicle industry as virtually every major automaker launches its own self-driving vehicle and some developers inch towards releasing "level 4 autonomy" vehicles—"unsupervised" cars that can operate within a limited domain.[27]  However, truly autonomous—level 4 or 5—vehicles will not be found on public roads anywhere until countries succeed in rolling out uniform regulations nationwide, something no country has yet done.  Likewise, as public sensitivity to crashes and malfunctions remains high,[28] automakers continue to wrestle with the technological challenges of making commercially viable fully autonomous vehicles that can operate in all conditions (including, for example, darkness or inclement weather) and without set geographical boundaries. For example, Audi has announced plans to be the first company in the world to sell a Level 3 car directly to the public: an Audi A8 sedan with an autopilot option called Traffic Jam Pilot, which works only at speeds under 50 kilometers per hour (37 miles per hour) and requires the driver to be prepared to take back the wheel after a warning—the definition of Level 3.[29]  However, concern over the lack of clear federal regulations for autonomous driving technology means Audi will not offer this option in the U.S. market.[30]  Audi has said it does not envisage selling Level 4 cars for some time—perhaps well into the 2020s. Toyota has revealed plans to unveil its own experimental Level 4 car, called the Urban Teammate, at the 2020 Summer Olympics in Tokyo, a highly restricted environment.[31]
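For reference, the SAE autonomy levels invoked above can be summarized in a simple mapping. The one-line glosses compress SAE J3016 (note 27 describes levels 0, 3 and 5) and are our paraphrase, not the standard's text:

```python
# Paraphrased summary of the SAE J3016 automation levels (illustrative only).
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0      # human driver does everything
    DRIVER_ASSISTANCE = 1  # a single assist feature; driver fully engaged
    PARTIAL = 2            # combined assists; driver must supervise constantly
    CONDITIONAL = 3        # eyes off in set conditions; human resumes on request
    HIGH = 4               # no human fallback needed within a limited domain
    FULL = 5               # fully autonomous; manual controls unnecessary

# Traffic Jam Pilot (Level 3) versus the "unsupervised" Level 4 cars above:
print(SAELevel.CONDITIONAL < SAELevel.HIGH)  # True
```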
B.   DoT Releases Updated Guidance: "Preparing for the Future of Transportation: Automated Vehicles 3.0" As outlined in our Artificial Intelligence and Autonomous Systems Legal Update (3Q18), the absence of a uniform federal regulatory regime in the U.S. means that companies testing and manufacturing autonomous vehicles ("AVs") must comb through a complex patchwork of state rules that exist with limited federal oversight. The continued absence of federal regulation renders the U.S. Department of Transportation's ("DoT") informal guidance increasingly important to companies operating in this space.  On October 3, the DoT's National Highway Traffic Safety Administration ("NHTSA") released its most significant 2018 road map on the design, testing and deployment of driverless vehicles: "Preparing for the Future of Transportation: Automated Vehicles 3.0" (commonly referred to as "AV 3.0").[32] While one of its core principles is to promote consistency among federal, state and local requirements in order to advance the integration of AVs in the national transportation system, the guidance also reinforces that federal officials are eager to take the wheel on safety standards and that any state laws on automated vehicle design and performance will be preempted.  State, local and tribal governments will be responsible for licensing human drivers, registering motor vehicles, enacting and enforcing traffic laws, conducting safety inspections, and regulating motor vehicle insurance and liability.  The guidance includes several best practices for states on adapting their policies and procedures for licensing and registering automated vehicles, assessing the readiness of their roads, and training their transportation workforces for the arrival of automated vehicles. As in previous iterations of the DoT's guidance, the thread running throughout AV 3.0 is the commitment to voluntary, consensus-based technical standards and the removal of unnecessary barriers to the innovation of AV technologies.  AV 3.0 "builds upon — but does not replace" the DoT's last AV guidelines, "Automated Driving Systems 2.0: A Vision for Safety," which were released on September 12, 2017.[33]  AV 3.0 expands the applicability of the DoT's AV guidance to include commercial vehicles, on-road transit and the roadways on which they operate.  In parallel, various other DoT agencies are gathering input from industry stakeholders on what sort of infrastructure improvements and strategic planning will be required to accommodate and coordinate autonomous vehicles operating in a variety of different modes of transportation. Despite the lack of compliance requirements or enforcement mechanisms within the guidance, the DoT has proposed modernized regulations that specifically recognize that the "driver" and "operator" of a vehicle may include an automated system,[34] and has highlighted that it would prepare proactively for automation through pilot programs, investments and other means, announcing a national pilot program to test and deploy autonomous vehicles.[35] However, NHTSA will not abandon the traditional self-certification scheme, meaning that manufacturers can continue to self-certify the compliance of their products by reference to applicable standards.  Moreover, NHTSA will issue a proposed rule seeking comment on changes to streamline and modernize its procedures for processing applications for exemptions from the Federal Motor Vehicle Safety Standards ("FMVSS"). AV 3.0 highlights the importance of cybersecurity and data privacy as AV technologies become increasingly integrated, and encourages a coordinated effort across the government and private sectors for a unified approach to cyber incidents and information sharing.  However, the guidance contains no firm rules on data sharing and privacy, leaving it up to state and local governments to determine standards and resources to counteract cybersecurity threats.[36]
Rising Concerns for AI’s Potential to Create Bias and Discrimination One of AI’s most salient features is its ability to take noisy data sets and provide the targeted results by using criteria often beyond our anticipation or easy comprehension.  As noted above, there is a growing consensus that AI systems perpetuate and amplify bias, and that computational methods are not inherently neutral and objective.[37]  As AI models are deployed into politics, commerce and broader society, we face unprecedented challenges in understanding their disproportionate impacts and how to apply our existing ethical framework—concepts such as transparency, inequality and fairness—to apparently dispassionate technologies. While discussions about the ethics of AI remain largely in the policy realm, increasing public awareness has led to rising concerns among lawmakers for AI’s potential for create bias and discrimination, while companies making use of AI and machine learning systems have responded to increasing scrutiny with regard to the risks of inadvertently creating discriminatory processes and outcomes.[38] The AI Now Institute recently observed the potential for a number of ethics and bias concerns that AI stakeholders should be aware of.  In the group’s recent AI Now Report 2018, the AI Now Institute drew attention to AI’s potential to create bias and called for a renewed focus on the state of equity and diversity in the AI field as well as increased accountability of AI firms as a potential solution to these issues.[39]  Companies making use of AI should anticipate that lawmakers may agree and be willing to scrutinize AI systems for “algorithmic fairness.”  Indeed, in December 2018, Members of the House Judiciary Committee questioned Google CEO Sundar Pichai—albeit with varying degrees of technical accuracy—about the potential for bias in search results.[40]  AI stakeholders would be remiss to not take note, and consider approaches for eliminating or mitigating potential bias beginning early in the design process and throughout the life cycle of products and services. V.   Use of AI in Criminal Proceedings The potential use and abuse of AI in the judicial context is palpable, particularly as the use of forensic and risk assessment software in criminal proceedings is on the rise.  Transparency appears to be a key due process issue, as criminal defendants urge courts to permit their review of such software, yet courts have so far been reluctant to hear challenges on these grounds. In 2017, the United States Supreme Court declined to hear a case coming out of the Wisconsin Supreme Court[41] which challenged the use of algorithmic risk assessment technology in criminal sentencing due to concerns that the software harbored gender biases and was disclosed to neither the court nor the defendant.  The broader debate over the use of such software, however, was recently revived by two prominent law firm partners who held a mock trial at New York University School of Law based on the case.  The mock trial centered on the due process concerns that arise from lack of judicial and defendant access to and scrutiny of the underlying source code of risk assessment tools—needed to determine if the source code incorporate flaws or biases—due to the software’s proprietary nature.[42]  In civil cases, a party may be compelled to share a proprietary algorithm pursuant to a protective order.  But in a criminal case, it may be too costly for a defendant to seek such review.  
V.   Use of AI in Criminal Proceedings The potential use and abuse of AI in the judicial context is palpable, particularly as the use of forensic and risk assessment software in criminal proceedings is on the rise.  Transparency appears to be a key due process issue, as criminal defendants urge courts to permit their review of such software, yet courts have so far been reluctant to hear challenges on these grounds. In 2017, the United States Supreme Court declined to hear a case coming out of the Wisconsin Supreme Court[41] which challenged the use of algorithmic risk assessment technology in criminal sentencing due to concerns that the software harbored gender biases and was disclosed to neither the court nor the defendant.  The broader debate over the use of such software, however, was recently revived by two prominent law firm partners who held a mock trial at New York University School of Law based on the case.  The mock trial centered on the due process concerns that arise from lack of judicial and defendant access to and scrutiny of the underlying source code of risk assessment tools—needed to determine if the source code incorporates flaws or biases—due to the software's proprietary nature.[42]  In civil cases, a party may be compelled to share a proprietary algorithm pursuant to a protective order.  But in a criminal case, it may be too costly for a defendant to seek such review.  (And, in Loomis, the court denied the defendant's request to access the algorithm.) A related issue was presented to the Ninth Circuit in United States v. Joseph Nguyen, a case in which a defendant was connected to a certain IP address, and the government argued that it had identified and isolated the IP address as the sole source for a download of illegal material using a forensic software program.[43]  The defendant sought to review the evidence against him, including the forensic software's source code, in order to challenge the prosecution's claim.  In an amicus brief, the Electronic Frontier Foundation, a civil liberties organization focusing on digital rights, urged that "[w]here the government seeks to use evidence generated by forensic software owned by a third party, disclosure of the software's source code is required by the Constitution and by the strong public interest in the integrity of court proceedings."[44]  The Ninth Circuit, however, denied the defendant's petition for rehearing en banc.[45] While courts, so far, appear reluctant to require the production of proprietary source code or other trade secret information in the criminal context, it still behooves companies providing products and services in highly visible and important contexts, such as governmental law enforcement, to consider what information they would be willing to provide to ensure sufficient operational transparency and accountability to garner and maintain public trust. VI.   Legal Technology The legal industry continues to expand its use of AI technology to assist it with various tasks.  For example, corporate tax departments are beginning to use AI to sift through large volumes of contracts to determine if an entity qualifies for research and development tax credits, as well as to analyze court decisions to predict potential outcomes of litigation.  Indeed, some of the large accounting firms in the U.S. have reported that they expect to make large investments in AI as part of their tax function budgets.[46] The USPTO entered the AI sector in late 2018 when it began to develop AI tools to improve the agency's search capabilities and to make the patent prosecution process more efficient.[47]  When a patent application is examined, the patent examiners must search through a "complex and vast corpus of human knowledge."  The USPTO hopes that a system that incorporates AI will help expedite patent examinations by allowing examiners to review applications more quickly and effectively, as well as to fill in any gaps that might exist with regard to prior art searches.  In November 2018, the agency unveiled a beta test of new software called "Unity," which incorporates AI to help patent examiners with prior art searches, and announced that it continues to work on other initiatives designed to streamline the patent prosecution process.[48]     [1]    See, e.g., Paul Nemitz, Constitutional Democracy and Technology in the Age of Artificial Intelligence, Phil. Trans. R. Soc. A 376: 20180089 (Nov. 15, 2018), available at https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0089.     [2]    See, e.g., the German government's new AI strategy, published in November 2018, which promises an investment of €3 billion before 2025 with the aim of promoting AI research, protecting data privacy and digitalizing businesses (https://www.bundesregierung.de/breg-en/chancellor/ai-a-brand-for-germany-1551432); and the European Commission's new Draft Ethics Guidelines For Trustworthy Artificial Intelligence (Dec.
18, 2018), available at https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai.     [3]    Louis Columbus, Microsoft Leads The AI Patent Race Going Into 2019, Forbes (Jan. 6, 2019), available at https://www.forbes.com/sites/louiscolumbus/2019/01/06/microsoft-leads-the-ai-patent-race-going-into-2019/#765459af44de; Jeff John Roberts, IBM Tops 2018 Patent List as A.I. and Quantum Computing Gain Prominence, Fortune (Jan. 8, 2019), available at http://fortune.com/2019/01/07/ibm-tops-2018-patent-list-as-ai-and-quantum-computing-gain-prominence/.     [4]    See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347 (2014).     [5]    James Fussell, Alice Must Be Revisited In View Of Emerging Technologies, Law360 (Dec. 5, 2018), available at https://www.law360.com/articles/1107964/alice-must-be-revisited-in-view-of-emerging-technologies.     [6]    See id. (noting that although the number of AI patent applications has been growing dramatically in the past several years in the United States, the growth trajectory in Chinese applications spiked in 2014, the year of the Supreme Court's Alice decision, and for the first time in 2016 surpassed U.S. filings).     [7]    Malathi Nayak, Google, Amazon Invited to Talk Patent Eligibility With Lawmakers, Bloomberg Law (Dec. 4, 2018), available at https://news.bloomberglaw.com/ip-law/google-amazon-invited-to-talk-patent-eligibility-with-lawmakers-1; one example of potential legislation is a bill introduced in June 2018 by Rep. Thomas Massie, R-Ky., and Rep. Marcy Kaptur, D-Ohio, which would amend Section 101 to "effectively abrogate[] Alice Corp. v. CLS Bank International, 134 S. Ct. 2347 (2014) and its predecessors to ensure that life sciences discoveries, computer software, and similar inventions and discoveries are patentable, and that those patents are enforceable." (H.R. 6264 (Sec. 7(b)(3))).     [8]    Eur. Patent Office, Guidelines for Examination in the European Patent Office (Nov. 2018), G-II, 3.3.1, available at https://www.epo.org/law-practice/legal-texts/html/guidelines2018/e/g_ii_3_3_1.htm.     [9]    Patrick Wingrove, EPO AI Guidelines "Give Clarity and Direction," Say In-House Counsel, Managing Intellectual Property (Jan. 15, 2019), available at http://www.managingip.com/Blog/3853828/EPO-AI-guidelines-give-clarity-and-direction-say-in-house-counsel.html.     [10]    H.R. 4625.     [11]    In November 2018, the Little Hoover Commission—a bipartisan, independent California state oversight agency—also published a comprehensive report analyzing the economic impact of AI technologies on the state of California between now and 2030.  The report "Artificial Intelligence: A Roadmap for California" calls for immediate action by the governor and legislature to prepare strategically for and take advantage of AI, while minimizing its risks.  Among the Commission's recommendations are the creation of an AI special advisor in state government, an AI commission and the promotion of apprenticeships and training opportunities for employees whose jobs may be displaced by AI technologies.  See Little Hoover Comm'n, "Artificial Intelligence: A Roadmap for California" (Nov. 2018), available at https://lhc.ca.gov/sites/lhc.ca.gov/files/Reports/245/Report245.pdf.     [12]    Emily Field, FDA Issues Proposed Rule For Novel Medical Devices, Law360 (Dec.
4, 2018), available at https://www.law360.com/articles/1107804/fda-issues-proposed-rule-for-novel-medical-devices.     [13]    FDA Approves Marketing For First AI Device for Diabetic Retinopathy Detection, EyeWire News (Apr. 11, 2018), available at https://eyewire.news/articles/fda-permits-marketing-of-ai-based-device-to-detect-certain-diabetes-related-eye-problems/.     [14]    Vidya Kauri, AI Helping IRS Detect Tax Crimes With Fewer Resources, Law360 (Dec. 5, 2018), available at https://www.law360.com/articles/1108419/ai-helping-irs-detect-tax-crimes-with-fewer-resources.     [15]    Siri Bulusu, Palantir Deal May Make IRS 'Big Brother-ish' While Chasing Cheats, Bloomberg Tax (Nov. 15, 2018), available at https://news.bloombergtax.com/daily-tax-report/palantir-deal-may-make-irs-big-brother-ish-while-chasing-cheats?context=article-related.  Indeed, in 2018, the IRS Criminal Investigation division collected 1.67 petabytes—or roughly 1.67 million gigabytes—of data.  According to the IRS' Strategy Plan for FY 2018-2022 (http://src.bna.com/C54), the agency is handling an influx of data that is 100 times larger than what it received a decade ago.     [16]    Vidya Kauri, AI Helping IRS Detect Tax Crimes With Fewer Resources, Law360 (Dec. 5, 2018), available at https://www.law360.com/articles/1108419/ai-helping-irs-detect-tax-crimes-with-fewer-resources.     [17]    Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Financial Crimes Enforcement Network, National Credit Union Administration, and Office of the Comptroller of the Currency, Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing (Dec. 3, 2018), available at https://www.federalreserve.gov/newsevents/pressreleases/files/bcreg20181203a1.pdf.     [18]    Id. at 1.     [19]    Id.     [20]    Id. at 2.     [21]    Id.     [22]    Id.     [23]    Fed. Trade Comm'n, Hearings on Competition and Consumer Protection in the 21st Century, available at https://www.ftc.gov/policy/hearings-competition-consumer-protection.     [24]    Kestenbaum, Reingold & Bradshaw, What We Heard At The FTC Hearings: Days 12 And 13, Law360 (Nov. 28, 2018), available at https://www.law360.com/articles/1105676/what-we-heard-at-the-ftc-hearings-days-12-and-13; see also infra at IV.     [25]    Id.     [26]    Jeff Green, Driverless-Car Global Market Seen Reaching $42 Billion by 2025, Bloomberg (Jan. 8, 2015), available at https://www.bloomberg.com/news/articles/2015-01-08/driverless-car-global-market-seen-reaching-42-billion-by-2025.     [27]    Philip E. Ross, In 2019, We'll Have Taxis Without Drivers—or Steering Wheels, IEEE Spectrum (Jan. 3, 2019), available at https://spectrum.ieee.org/transportation/self-driving/in-2019-well-have-taxis-without-driversor-steering-wheels (On the SAE autonomy scale, a Level 0 car has no autonomous capability or driver aids, while a Level 5 car is fully autonomous, to the point that manual controls are unnecessary.  Level 3 cars allow human drivers to take their eyes off the road and their hands off the wheel in certain situations, but still require humans to take over at other times and when prompted.).     [28]    See, e.g., the MIT Moral Machine experiment, an online platform by the Massachusetts Institute of Technology where MIT Media Lab researchers publish results from their global survey on autonomous driving ethics.  
The survey generated the largest dataset on public attitudes in artificial intelligence ethics—asking questions such as whether it is better, for example, to kill two passengers or five pedestrians.  The experiment is available at http://moralmachine.mit.edu/.     [29]    Audi Technology Portal, World Debut For Highly Automated Driving: The Audi AI Traffic Jam Pilot, available at https://www.audi-technology-portal.de/en/electrics-electronics/driver-assistant-systems/audi-a8-audi-ai-traffic-jam-pilot.     [30]    Stephen Edelstein, 2019 Audi A8 Won't Get Traffic Jam Pilot Driver-Assist Tech In The U.S., Digital Trends (May 16, 2018), available at https://www.digitaltrends.com/cars/2019-audi-a8-traffic-jam-pilot-not-coming-to-us/.     [31]    Supra, n. 25.     [32]    U.S. Dept. of Transp., Preparing for the Future of Transportation: Automated Vehicles 3.0 (Oct. 2018), available at https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf; see also https://www.law360.com/articles/1113438/how-av-3-0-changed-the-autonomous-vehicle-game-in-2018.     [33]    For more information, please see our September 2017 Client Alert on Accelerating Progress Toward a Long-Awaited Federal Regulatory Framework for Autonomous Vehicles in the United States.     [34]    AV 3.0 recognizes that certain current FMVSS were drafted with human drivers in mind (in that they include requirements for a steering wheel, brakes, mirrors, etc.), creating an unintended barrier to the innovation of AV technologies, and notes that NHTSA will issue a proposed rule seeking comment on proposed changes to certain FMVSS to accommodate AV technology innovation.  FMVSS will also be tweaked to be "more flexible and responsive, technology-neutral, and performance-oriented to accommodate rapid technological innovation."     [35]    The "Pilot Program for Collaborative Research on Motor Vehicles With High or Full Driving Automation" is designed to facilitate, monitor and learn from the testing and development of AV technology and prepare for the impact of highly automated and autonomous vehicles on the roads under a variety of driving conditions.  The comment period lasted through December 10, 2018, but the timing on a final decision regarding the pilot program remains open.  NHTSA intends to rely on its "Special Exemption" authority in 49 U.S.C. § 30114 to provide exemptions for manufacturers seeking to engage in research, testing and demonstration projects.     [36]    Linda Chiem, 3 Takeaways From DOT's New Automated Vehicles Policy, Law360 (Oct. 10, 2018), available at https://www.law360.com/articles/1090829/3-takeaways-from-dot-s-new-automated-vehicles-policy.     [37]    Meredith Whittaker et al., AI Now Report 2018, AI Now Institute, 2.2.1 (Dec. 2018), available at https://ainowinstitute.org/AI_Now_2018_Report.pdf.     [38]    The AI Now Report 2018 notes that several technology companies—including IBM, Google, Microsoft and Facebook—have begun operationalizing fairness definitions, metrics, and tools.  See supra, n. 35 at 2.2.     [39]    Id.     [40]    Russell Brandom, Congress Thinks Google Has a Bias Problem—Does It?, The Verge (Dec. 12, 2018), available at https://www.theverge.com/2018/12/12/18136619/google-bias-sundar-pichai-google-hearing.     [41]    State v. Loomis, 881 N.W.2d 749 (Wis. 2016).     [42]    Natalie Rodriguez, Loomis Look-Back Previews AI Sentencing Fights to Come, Law360 (Dec.
9, 2018), available at https://www.law360.com/articles/1108727/loomis-look-back-previews-ai-sentencing-fights-to-come.     [43]    U.S.A. v. Joseph Nguyen, No. 17-50062 (9th Cir.).     [44]    Id. at 2.     [45]    Order, U.S.A. v. Joseph Nguyen, No. 17-50062 (9th Cir. Dec. 20, 2018).     [46]    Vidya Kauri, Artificial Intelligence To Revolutionize Tax Planning, Law360 (Sept. 18, 2018), available at https://www.law360.com/articles/1083886/artificial-intelligence-to-revolutionize-tax-planning.     [47]    Suzanne Monyak, USPTO Seeks Help Building AI For Faster Prior Art Searches, Law360 (Sept. 17, 2018), available at https://www.law360.com/articles/1083487/uspto-seeks-help-building-ai-for-faster-prior-art-searches.     [48]    Jimmy Hoover, USPTO Testing AI Software To Help Examiners ID Prior Art, Law360 (Nov. 15, 2018), available at https://www.law360.com/articles/1095703/uspto-testing-ai-software-to-help-examiners-id-prior-art. The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances A. Waldmann, Claudia M. Barrett, Tony Bedel and Haley S. Morrisson. Gibson Dunn's lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, or any of the following lawyers in the firm's Artificial Intelligence and Automated Systems Group: H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Lisa A. Fontenot – Palo Alto (+1 650-849-5327, lfontenot@gibsondunn.com) Frances A. Waldmann – Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com) © 2019 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

January 11, 2019 |
The Impact of the New USPTO Eligibility Guidelines on Artificial Intelligence-related Inventions

Click for PDF On January 4, 2019, the USPTO announced updated guidance to help clarify the process that examiners should undertake when evaluating whether a pending claim is directed to an abstract idea under the Supreme Court's two-step Alice test and thus not eligible for patent protection under 35 U.S.C. § 101.  Specifically, for determining whether a claim recites an abstract idea, the USPTO defined three categories by extracting and synthesizing concepts identified by the courts:  (1) mathematical concepts, (2) certain methods of organizing human activity, and (3) mental processes.  If the examiner determines that the claim falls into one of these three categories, the examiner will continue to step two.  If not, the examiner should typically not treat the claim as reciting an abstract idea, and should instead skip step two and deem the claim eligible for patenting under Section 101.[1] As to step two, the USPTO split the inquiry into two separate questions for the examiner to undertake if the claim is found to recite an abstract idea.  First, is the abstract idea embodied in the claim integrated into a practical application?  For this inquiry, the examiner looks to whether the claim, as a whole, integrates the abstract idea into a practical application that imposes "meaningful limits," such that the claim is more than an attempt to monopolize the abstract idea.  For example, a claim may reflect a practical application if an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology. Second, if the abstract idea underlying the claim is not integrated into a practical application, does the claim provide an inventive concept?  The USPTO explained that the Federal Circuit has held claims eligible when the additional elements recited in the claims provide "significantly more" than the abstract idea itself.  For example, a claim may be patent eligible if a specific limitation (or combination of limitations) is not well-understood, routine, conventional activity in the field. In essence, the USPTO's guidance turns the Alice test into a three-part test:
1.  Does the claim recite one of the categories the USPTO considers an abstract idea?
2.  If so:
a.  is the abstract idea integrated into a practical application, or
b.  does the claim provide an inventive concept?
What is the Impact of the Guidance on AI-related Inventions? While it obviously remains to be seen what, if any, impact these new guidelines will have on the issuance of software patents generally, and artificial intelligence patents more specifically, the key question going forward is whether the three categorical exceptions identified by the USPTO will be the exceptions that swallow the rule.  On one hand, by stating that rejections of artificial intelligence claims as abstract ideas should typically only arise if the claim falls into one of three enumerated categories, the USPTO does seem to be providing a more defined path for drafting claims that will avoid issues of patent eligibility.  If nothing else, at least the new guidelines add clarity to how examiners will apply Section 101 rejections and may give applicants a roadmap to overcoming any such rejection.
However, on the other hand, if categorical exceptions such as the "mathematical concept" or "mental processes" categories are broadly construed by examiners in their application of the guidelines, many AI-related inventions may still be subject to eligibility rejections under Alice. As a result, at least until there is further experience with the manner in which examiners implement this guidance, the eligibility of software/AI-related claims likely still will come down to artful claim drafting.  However, there are a few additional takeaways from the guidelines that may be helpful to keep in mind specific to AI patents. Categories of Abstract Ideas Of the USPTO's three categories of abstract ideas, the one that, on its face, is most applicable to artificial intelligence is the mathematical concepts grouping.  After all, on some level, all software and AI are made up of a series of mathematical equations.  The USPTO guidance defines mathematical concepts as "mathematical relationships, mathematical formulas or equations, [and] mathematical calculations."  This definition of mathematical concepts is actually fairly narrow.  It does not seem to encompass algorithms more generally, and focuses instead on the actual formulas and calculations.  As a result, while caution is still warranted, by drafting a claim without including formulas and calculations, and instead focusing more on the structure of the algorithm, a patentee may be able to avoid rejections for falling into the mathematical concepts category. Drafting AI claims at too high a level can also cause the claims to implicate the USPTO's other two categories.  The USPTO's mental processes category—"concepts performed in the mind (including an observation, evaluation, judgment, opinion)"—may be implicated if AI claims are drafted too broadly.  Many of the applications for which we use AI involve concepts that would normally be performed in the human mind and that require observation, evaluation, judgment and opinion.  For example, an autonomous vehicle requires AI that observes obstacles, evaluates risks, and judges what to do next.  Automating human thought and judgment, such as in a car, may fall into this category if the claims are drafted with too much focus on the function or result, and not enough on the structure or specifics of operation of the claimed invention. The third category, methods of organizing human activity, is defined by the guidance as "fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations, advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)."  Certain applications of AI may also implicate this category.  For example, in-home assistants with voice recognition software may follow certain rules or instructions depending on the commands they are given.  As such, a broadly drafted claim covering a response to a verbal command may implicate this category. Step 2A:  Practical Application Even if a claim falls into one of the categories of abstract ideas identified in the guidance, the USPTO explains that practical applications of the abstract idea may still be patentable.  For the purposes of AI-related inventions, two examples in the USPTO's guidelines are particularly important.
First, the USPTO states that a claim may be eligible if "an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology."  The USPTO gives the example of modifying a hyperlink to dynamically produce a dual-source hybrid webpage.  Although it is unclear how this consideration will be applied in practice, it seems like an important consideration for AI-related claims.  AI-related software that improves the functioning of a computer, such as by optimizing multicore processors or voice recognition, may have a better chance of being found eligible if the claims are drafted in such a way as to highlight the specific steps or structure that provide this benefit and the specification clearly delineates those benefits.  It therefore behooves the applicant to specifically claim improvements to technology for certain inventions, backed up by descriptions in the specification.  In addition, patent applicants should be very careful in describing any feature as conventional, even if that means providing a more robust and detailed description in the application.  Often, patent applicants describe components as conventional as a shortcut to avoid longer specifications.  The differences between what the AI is doing in the invention as compared to past uses should be described in detail, and the way that difference changes computer performance should be made clear. Second, the USPTO explains that a claim may be eligible if "an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim."  The USPTO uses the example of a machine that uses gravity to improve speed.  This consideration seems to imply that software interacting with, or utilizing, hardware as a key element of the claimed invention will have an easier time being found eligible.  As such, applicants may choose to draft claims that tie their AI-related invention to specifically required hardware to help show eligibility, and include robust descriptions of that tie in the patent specification. Step 2B: Inventive Concept Finally, the "third" prong of the USPTO's guidelines focuses on the "inventive concept" of the alleged invention.  The USPTO explains that the examiner should consider whether the claim "adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present."  The impact of this inventive concept inquiry on AI-related claims is somewhat less clear because it is highly dependent on what the examiner considers conventional in the field of AI, or perhaps within specific types of AI-related inventions—e.g., there may well be differences based on application, such as facial recognition versus voice recognition versus autonomous vehicles.  In some cases, it may be possible to argue that particular limitations of the claim are not conventional steps or structures, and claim drafters should keep an eye out for ways to include non-conventional steps and structures in AI-related claims.  However, an applicant faced with a rejection based on this third prong may simply want to consider amending the claim to specifically recite an improvement to a technology or use of a machine to show there is a practical application, as many such inventions are often constructed based on otherwise conventional techniques.
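Pulling the prongs together, the examiner's decision flow under the guidance can be sketched as boolean logic. The predicate names are our shorthand for the legal judgments discussed above, not anything the USPTO publishes:

```python
# Sketch of the three-part flow described above (illustrative shorthand only).
from dataclasses import dataclass

@dataclass
class ClaimAnalysis:
    recites_enumerated_abstract_idea: bool       # math concept, organizing
                                                 # human activity, or mental process
    integrated_into_practical_application: bool  # Step 2A
    provides_inventive_concept: bool             # Step 2B

def eligible_under_section_101(c: ClaimAnalysis) -> bool:
    if not c.recites_enumerated_abstract_idea:
        return True  # typically no abstract idea recited; skip step two
    if c.integrated_into_practical_application:
        return True  # Step 2A: "meaningful limits" save the claim
    return c.provides_inventive_concept  # Step 2B: "significantly more"

# E.g., a claim reciting a mathematical concept but integrated into an
# improvement in the functioning of a computer:
print(eligible_under_section_101(ClaimAnalysis(True, True, False)))  # True
```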
In sum, the clarity and specific examples provided in the USPTO's guidelines may arm applicants for AI-related inventions with a roadmap of how to avoid or overcome Section 101 rejections, such as by avoiding mathematical formulas or tying the claim to a specific improvement or hardware.  Until we have experience with how examiners interpret the guidelines, however, applicants should continue to exercise caution and thoughtful claim drafting. [1] The guidance does note that, in rare cases, an examiner may determine that claims that fall outside of the three identified categories nevertheless recite an abstract idea.  In such cases, the examiner must further justify and explain the reasons for finding the claim to recite an abstract idea, and such determinations will require approval by the Technology Center Director. Gibson Dunn's lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, or the authors: Artificial Intelligence and Automated Systems Group: H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Brian M. Buroker – Washington, D.C. (+1 202-955-8541, bburoker@gibsondunn.com) Ryan K. Iwahashi – Palo Alto (+1 650-849-5367, riwahashi@gibsondunn.com) Please also feel free to contact any of the following practice group leaders: Intellectual Property Group: Wayne Barsky – Los Angeles (+1 310-552-8500, wbarsky@gibsondunn.com) Josh Krevitt – New York (+1 212-351-4000, jkrevitt@gibsondunn.com) Mark Reiter – Dallas (+1 214-698-3100, mreiter@gibsondunn.com) © 2019 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

December 17, 2018 |
Gibson Dunn Launches Artificial Intelligence and Automated Systems Group

Gibson, Dunn & Crutcher LLP is pleased to announce that the firm has launched a new practice group focused on artificial intelligence and automated systems.  Building on a number of successful transactions and litigation matters in the area, the practice brings together the talents and experience of lawyers throughout Gibson Dunn, collectively possessing a diverse array of technical and legal experience and skills.  The Artificial Intelligence and Automated Systems (AI/AS) Practice Group is led by Palo Alto intellectual property partner Mark Lyon. “As Artificial Intelligence becomes more sophisticated, more widely accepted and more embedded in our economy, legal scrutiny in the U.S. of the potential uses and effects of these technologies has also been increasing,” said Ken Doran, Chairman and Managing Partner of Gibson Dunn.  “Clients are seeking guidance from counsel who can tackle these legal and regulatory developments and the implications for companies developing or using products based on these technologies.  We are extremely well-positioned in terms of experience, capability and knowledge to help our clients meet these new challenges.” “The legal issues surrounding AI technologies are complex and multi-disciplinary.  Given the rapid pace of AI development, it is critical to our clients that they be represented by counsel who are not just on top of the laws and regulations, but also have a fundamental understanding of the underlying technology.  We already have lawyers with the right background and knowledge to advise clients on the entire breadth of these issues – and, in fact, have been doing so for years,” Lyon said.  “The new Practice Group now provides us a more clearly visible way to organize our collective experience, allows us to make sure we have the right knowledge available for our clients, and gives us a platform to communicate our capabilities in the area to the market.” About Gibson Dunn’s Artificial Intelligence and Automated Systems Practice Gibson Dunn’s Artificial Intelligence and Automated Systems Practice Group brings together the talents and experience of more than 60 lawyers throughout the firm, and is well-positioned to assist clients in identifying, addressing, and responding to the wide variety of legal challenges, including an evolving regulatory landscape, that affect not only companies who may be developing products but also companies who use such technologies as part of their business. With experience in a full range of legal areas, the members of Gibson Dunn’s AI/AS Practice Group are engaged with cutting-edge technology companies in litigation and transactional matters spanning a wide spectrum of machine and deep learning, expert systems, neuromorphic computing, relational reasoning and analytic technologies.  The firm’s lawyers with technical backgrounds and advanced degrees can provide the necessary insight to develop and defend key growth areas including dedicated integrated circuit design, NLP, chatbots, augmented/virtual reality, computer vision, robotics, autonomous vehicles, and other new emerging tech forms.

November 21, 2018 |
New Export Controls on Emerging Technologies – 30-Day Public Comment Period Begins

Click for PDF On Monday, the Trump administration took the first step toward imposing new controls on the export of cutting-edge technologies.  Pursuant to the Export Control Reform Act of 2018 ("ECRA"), the U.S. Department of Commerce's Bureau of Industry and Security ("BIS") published a request for the public's assistance in identifying "emerging technologies" essential for U.S. national security that should be subject to new export restrictions.[1]  The advance notice of proposed rulemaking ("ANPRM") reiterates the general criteria for emerging technologies, provides a representative list of targeted technologies, and provides a 30-day period for comment on which technologies should be subject to these new controls. In response to this notice, companies that operate in certain high technology sectors, such as biotechnology, artificial intelligence, computer processing, and advanced materials, should consider filing public comments and prepare for pending controls.  These companies should start by identifying technologies they possess that are likely to be targeted for new export controls and gathering important evidence on the efficacy of potential controls on these technologies.  Companies likely to be affected should also consider the impact that tighter controls on the transfer of these technologies may have on their business operations.  Additionally, U.S. businesses that engage with emerging technologies must be mindful of new CFIUS regulations that require such businesses to declare certain controlling and non-controlling foreign investments to CFIUS before the investment is made. BACKGROUND On August 13, 2018, President Trump signed the John S. McCain National Defense Authorization Act for Fiscal Year 2019 ("FY 2019 NDAA"), an omnibus bill to authorize defense spending that includes—among other measures—ECRA.[2]  In addition to placing the U.S. export control regime on firm statutory footing for the first time in decades, ECRA significantly expanded the President's authority to regulate and enforce export controls by requiring the Secretary of Commerce to establish controls on the export, re-export, or in-country transfer of "emerging or foundational technologies."[3] ECRA was passed alongside the Foreign Investment Risk Review Modernization Act ("FIRRMA"), which reformed the CFIUS review process for inbound foreign investment.[4]  As originally drafted, FIRRMA would have included outbound investments—such as joint ventures or licensing agreements—in the list of covered transactions subject to CFIUS review to limit the outflow of technology important to U.S. national security.  This proposed provision was very controversial and was ultimately removed from the bill.  Instead, the final version of the NDAA included ECRA, which granted BIS the authority to work with the interagency group to identify and regulate the transfer of these emerging and foundational technologies.[5] WHAT ARE EMERGING TECHNOLOGIES? ECRA does not offer a precise definition of the "emerging technologies" to be controlled by BIS.  Instead, it offers criteria for BIS to consider when determining what technologies will fall within this area of BIS control.  Importantly, the definition of "technology" itself in the context of export controls is well established.  Such technology does not, for example, include end-items, commodities, or software.
Instead, technology is the information, in tangible or intangible form, necessary for the development, production, or use of such goods or software.[6]  Technology may include written or oral communications, blueprints, schematics, photographs, formulae, models, or information gained through mere visual inspection.[7]  For example, speech recognition software would not be a technology and therefore would not be subject to these new controls.  However, the source code for such software would be technology that could be considered “emerging,” depending on the criteria BIS applies. The ANPRM broadly describes emerging technologies as “those technologies essential to the national security of the United States that are not already subject to export controls under the Export Administration Regulations (“EAR”) or the International Traffic in Arms Regulations (“ITAR”).”[8]  The ANPRM suggests that technologies will be considered “essential to the national security of the United States” if they “have potential conventional weapons, intelligence collection, weapons of mass destruction, or terrorist applications or could provide the United States with a qualitative military or intelligence advantage.”[9] In narrowing down which of these technologies will be subject to new export controls, BIS will also consider the development of emerging technologies abroad, the effect of unilateral export restrictions on U.S. technological development, and the ability of export controls to limit the spread of these emerging technologies in foreign countries.  In making this assessment and further narrowing the category of affected technologies, BIS will consider information from a variety of interagency sources, as well as public information drawn from comments submitted in response to the ANPRM. Although the ANPRM does not provide concrete examples of “emerging technologies,” BIS does provide a list of technologies currently subject to limited controls that could be considered “emerging” and subject to new, broader controls.  These include the following:
(1) Biotechnology, such as: (i) nanobiology; (ii) synthetic biology; (iii) genomic and genetic engineering; or (iv) neurotech.
(2) Artificial intelligence (AI) and machine learning technology, such as: (i) neural networks and deep learning (e.g., brain modelling, time series prediction, classification); (ii) evolution and genetic computation (e.g., genetic algorithms, genetic programming); (iii) reinforcement learning; (iv) computer vision (e.g., object recognition, image understanding); (v) expert systems (e.g., decision support systems, teaching systems); (vi) speech and audio processing (e.g., speech recognition and production); (vii) natural language processing (e.g., machine translation); (viii) planning (e.g., scheduling, game playing); (ix) audio and video manipulation technologies (e.g., voice cloning, deepfakes); (x) AI cloud technologies; or (xi) AI chipsets.
(3) Position, Navigation, and Timing (PNT) technology.
(4) Microprocessor technology, such as: (i) Systems-on-Chip (SoC); or (ii) Stacked Memory on Chip.
(5) Advanced computing technology, such as: (i) memory-centric logic.
(6) Data analytics technology, such as: (i) visualization; (ii) automated analysis algorithms; or (iii) context-aware computing.
(7) Quantum information and sensing technology, such as: (i) quantum computing; (ii) quantum encryption; or (iii) quantum sensing.
(8) Logistics technology, such as: (i) mobile electric power; (ii) modeling and simulation; (iii) total asset visibility; or (iv) Distribution-Based Logistics Systems (DBLS).
(9) Additive manufacturing (e.g., 3D printing).
(10) Robotics, such as: (i) micro-drone and micro-robotic systems; (ii) swarming technology; (iii) self-assembling robots; (iv) molecular robotics; (v) robot compilers; or (vi) Smart Dust.
(11) Brain-computer interfaces, such as: (i) neural-controlled interfaces; (ii) mind-machine interfaces; (iii) direct neural interfaces; or (iv) brain-machine interfaces.
(12) Hypersonics, such as: (i) flight control algorithms; (ii) propulsion technologies; (iii) thermal protection systems; or (iv) specialized materials (for structures, sensors, etc.).
(13) Advanced materials, such as: (i) adaptive camouflage; (ii) functional textiles (e.g., advanced fiber and fabric technology); or (iii) biomaterials.
(14) Advanced surveillance technologies, such as faceprint and voiceprint technologies.[10]
BIS REQUEST FOR COMMENT Along with a review of its mandate to regulate emerging technologies and a sample of several potentially affected industries, BIS specifically requested public comments on the following points:
- how the administration should define emerging technologies;
- what the criteria should be for determining whether there are specific technologies within these general categories that are important to U.S. national security;
- what sources the administration can refer to in order to identify emerging technologies;
- what other general technology categories might be important to U.S. national security and warrant control;
- information about the status of development of the listed technologies in the United States and other countries;
- information about what impact the specific emerging technology controls would have on U.S. technological leadership; and
- suggestions for other approaches to identifying emerging technologies warranting controls.[11]
Comments on these issues are due to BIS by December 19, 2018—only thirty days after the publication of the ANPRM. Critically, comments offered pursuant to this notice will be made public, and there is no express procedure for submitting a redacted version of a comment for the public record and a complete version for the agency. HOW TO RESPOND Companies potentially affected by these new controls should simultaneously begin preparing public comments and preparing for pending controls.  The first step in this process should be the identification of potentially targeted technologies.  Companies should work with in-house engineers, researchers, and product development personnel—as well as export control experts—to begin identifying technology that may be targeted for control. Technologies currently controlled under the ITAR or broadly restricted by the EAR will not be included in the new category of “emerging technologies.”  Given the express limitations provided in ECRA, technologies produced outside of the United States are also unlikely to be targeted by the new controls, as unilateral U.S. export controls would do little to restrict the flow of these technologies.  Once a company identifies such non-controlled technologies predominantly of U.S. origin, it should evaluate the extent to which it shares or will share this technology with non-U.S. persons and the means by which it makes such transfers. Having identified technology likely to be impacted by the new controls, companies should prepare public comments in response to the request posed in the ANPRM.
For example, companies may wish to suggest a definition for emerging technologies, or a limiting principle for a potential definition, that is based on an evaluation of potentially affected technologies, market concerns, and BIS’s policy objectives.  Concrete evidence of foreign production of comparable technology, the likely impact on U.S. technological superiority of new controls, and the ability of new controls to limit the spread of emerging technologies abroad will also be particularly persuasive.  Where possible, companies may also wish to differentiate their technology from comparable technology that may have conventional weapons, intelligence collection, weapons of mass destruction, or terrorist applications. In addition to providing comments to BIS, companies should also begin preparing to operate under expanded export controls.  Importantly, certain kinds of exports related to emerging technologies will not be subject to new licensing requirements.  For example, the provision of technology associated with the sale or license of finished goods or software will not be subject to a new licensing requirement if the U.S. party to the transaction generally makes the finished items and associated technology available to its customers, distributors, and resellers (e.g., an operation manual exported along with controlled hardware).[12] Similarly, the provision by a U.S. party of technology to a foreign supplier of goods or services to the U.S. party will not be restricted if the foreign supplier has no rights to exploit the technology contributed by the U.S. person other than to supply the procured goods or services.[13]  For example, the provision of blueprints to a foreign manufacturer under these circumstances would not be subject to the new controls.  Additionally, contribution by a U.S. person to an industry organization related to a standard or specification would not generally be subject to the new controls.[14] However, companies should be mindful of the circumstances in which new controls will limit their business operations.  For example, the new controls may limit operations under joint ventures or other cooperative arrangements where emerging technologies are currently exchanged.  In addition, the new controls are likely to limit the availability of certain license exceptions that could be used to facilitate such cooperative arrangements.  Cooperation with individuals and entities in countries subject to U.S. arms embargoes, such as China, is likely to be significantly curtailed, as BIS may effectively prohibit exports of emerging technologies to those countries. With these potential impacts in mind, companies should begin evaluating how controls on targeted technologies will affect their operations. WHAT’S NEXT BIS will evaluate public comments offered during the 30-day window provided by the ANPRM, along with additional public and classified information collected through the interagency process, to establish the criteria to be used to identify “emerging technologies” and related export controls.  As a part of this process, it is likely that BIS will rely on some of its existing mechanisms for monitoring and regulating emerging technologies to provide insight into the appropriate scope and content of the new controls. For example, BIS has indicated it will look to its Emerging Technologies and Research Advisory Committee, an advisory body of academics, industry personnel, and researchers that already assists BIS in identifying new technologies and gaps in existing controls.
BIS may also rely on the surveys and network of company partnerships used by its Office of Technology Evaluation to conduct assessments of defense-related technologies.  Other federal agencies engaged in the development of emerging technologies may also contribute to the identification of emerging technologies and appropriate controls, including for example the various advanced research projects agencies (e.g. DARPA, ARPA-E, and IARPA), the National Science Foundation’s Foundations of Emerging Technologies, and the national laboratories.  The work of these agencies and entities may suggest areas on which BIS could focus new controls. BIS’s current efforts to control emerging technologies and related products may also inform its development of new controls.  In 2012, BIS established a dedicated system for controlling emerging technologies under Export Control Classification Number (“ECCN”) 0Y521.  These new controls were similarly intended to restrict the export of items presenting a significant military or intelligence advantage to the United States.  Technology identified under this ECCN requires a license for export to all destinations, except Canada, with limited license exceptions available.  Although only a few items have been identified for control under this existing mechanism (e.g. X-ray deflecting epoxies, biosensor systems, and tools for tritium production), BIS’s use of the 0Y521 ECCN series may provide further evidence of the types of technologies BIS may target for control and the restrictions it will apply. As it continues to await public comments and identify emerging technologies, BIS plans to publish a similar ANPRM requesting the public’s assistance in identifying and defining “foundational technologies,” which will also be subjected to new ECRA-mandated controls.[15]  Once BIS has arrived at a definition for these terms and a set of potential controls, BIS will likely publish a proposed rule providing this information for another period of public comment.  Those comments will undergo a similar process of interagency review, and BIS will announce its final rule providing the new controls on the export of emerging and foundational technologies. Importantly, any technologies that BIS identifies as emerging or foundational through this rulemaking process will be considered “critical technologies” for the purposes of determining CFIUS jurisdiction.[16]  FIRRMA now requires that certain foreign investment in U.S. companies that deal in these critical technologies receive CFIUS review and approval.  Under CFIUS’s new program to pilot the implementation of these authorities, CFIUS must receive advance notice of certain types of non-controlling foreign investment in U.S. companies that design, test, manufacture, fabricate, or develop critical technologies—including emerging and foundational technologies identified by BIS—for use in one of several listed industries.[17]  In this regard, BIS’s final determination regarding what constitutes “emerging technologies” will also impact the scope of CFIUS’s expanded jurisdiction.    [1]   Review of Controls for Certain Emerging Technologies, 83 Fed. Reg. 58,201 (advance notice of proposed rulemaking Nov. 19, 2018), https://www.gpo.gov/fdsys/pkg/FR-2018-11-19/pdf/2018-25221.pdf [hereinafter, “ANPRM”].    [2]   Export Control Reform Act of 2018, Pub. L. No. 115-232, §§ 1751-1781 (2018).    [3]   Id. § 1758.    [4]   Foreign Investment Risk Review Modernization Act of 2018, Pub. L. No. 115-232, §§ 1701-1728 (2018).    
[5]   Export Control Reform Act of 2018, Pub. L. No. 115-232, § 1758 (2018).    [6]   15 C.F.R. § 772.1.    [7]   Id.    [8]   ANPRM, supra note 1 at 58,201.    [9]   Id. [10]   Id. at 58,202. [11]   Id. [12]   Export Control Reform Act of 2018, Pub. L. No. 115-232, § 1758(b)(4)(c)(i) (2018). [13]   Id. § 1758(b)(4)(c)(iv). [14]   Id. § 1758(b)(4)(c)(v). [15]   ANPRM, supra note 1 at 58,202. [16]   Foreign Investment Risk Review Modernization Act of 2018, Pub. L. No. 115-232, § 1703 (2018). [17]   31 C.F.R. § 801.101. The following Gibson Dunn lawyers assisted in preparing this client update: Judith Alison Lee, Adam M. Smith, R.L. Pratt and Christopher Timura. Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding the above developments.  Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following leaders and members of the firm’s International Trade practice group: United States: Judith Alison Lee – Co-Chair, International Trade Practice, Washington, D.C. (+1 202-887-3591, jalee@gibsondunn.com) Ronald Kirk – Co-Chair, International Trade Practice, Dallas (+1 214-698-3295, rkirk@gibsondunn.com) Jose W. Fernandez – New York (+1 212-351-2376, jfernandez@gibsondunn.com) Marcellus A. McRae – Los Angeles (+1 213-229-7675, mmcrae@gibsondunn.com) Adam M. Smith – Washington, D.C. (+1 202-887-3547, asmith@gibsondunn.com) Christopher T. Timura – Washington, D.C. (+1 202-887-3690, ctimura@gibsondunn.com) Ben K. Belair – Washington, D.C. (+1 202-887-3743, bbelair@gibsondunn.com) Courtney M. Brown – Washington, D.C. (+1 202-955-8685, cmbrown@gibsondunn.com) Laura R. Cole – Washington, D.C. (+1 202-887-3787, lcole@gibsondunn.com) Stephanie L. Connor – Washington, D.C. (+1 202-955-8586, sconnor@gibsondunn.com) Helen L. Galloway – Los Angeles (+1 213-229-7342, hgalloway@gibsondunn.com) Henry C. Phillips – Washington, D.C. (+1 202-955-8535, hphillips@gibsondunn.com) R.L. Pratt – Washington, D.C. (+1 202-887-3785, rpratt@gibsondunn.com) Scott R. Toussaint – Palo Alto (+1 650-849-5320, stoussaint@gibsondunn.com) Europe: Peter Alexiadis – Brussels (+32 2 554 72 00, palexiadis@gibsondunn.com) Attila Borsos – Brussels (+32 2 554 72 10, aborsos@gibsondunn.com) Patrick Doris – London (+44 (0)207 071 4276, pdoris@gibsondunn.com) Penny Madden – London (+44 (0)20 7071 4226, pmadden@gibsondunn.com) Benno Schwarz – Munich (+49 89 189 33 110, bschwarz@gibsondunn.com) Michael Walther – Munich (+49 89 189 33-180, mwalther@gibsondunn.com) Richard W. Roeder – Munich (+49 89 189 33-160, rroeder@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

October 10, 2018 |
Artificial Intelligence and Autonomous Systems Legal Update (3Q18)

Click for PDF We are pleased to provide the following update on recent legal developments in the areas of artificial intelligence, machine learning, and autonomous systems (or “AI” for short), and their implications for companies developing or using products based on these technologies.  As the spread of AI rapidly increases, legal scrutiny in the U.S. of the potential uses and effects of these technologies (both beneficial and harmful) has also been increasing.  While we have chosen to highlight below several governmental and legislative actions from the past quarter, the area is rapidly evolving and we will continue to monitor further actions in these and related areas to provide future updates of potential interest on a regular basis. I.       Increasing Federal Government Interest in AI Technologies The Trump Administration and Congress have recently taken a number of steps aimed at pushing AI forward on the U.S. agenda, while also treating foreign involvement in U.S.-based AI technologies with caution.  Some of these actions may mean additional hurdles for cross-border transactions involving AI technology.  On the other hand, there may also be opportunities for companies engaged in the pursuit of AI technologies to influence the direction of future legislation at an early stage. A.       White House Studies AI In May, the Trump Administration kicked off what is becoming an active year in AI for the federal government by hosting an “Artificial Intelligence for American Industry” summit as part of its designation of AI as an “Administration R&D priority.”[1] During the summit, the White House also announced the establishment of a “Select Committee on Artificial Intelligence” to advise the President on research and development priorities and explore partnerships within the government and with industry.[2]  This Select Committee is housed within the National Science and Technology Council, and is chaired by Office of Science and Technology Policy leadership. Administration officials have said that a focus of the Select Committee will be to look at opportunities for increasing federal funding of AI research in the private sector, to ensure that the U.S. has (or maintains) a technological advantage in AI over other countries.  In addition, the Committee is to look at possible uses of the government’s vast store of taxpayer-funded data to promote the development of advanced AI technologies, without compromising security or individual privacy.  While it is believed that there will be opportunities for private stakeholders to have input into the Select Committee’s deliberations, the inaugural meeting of the Committee, which occurred in late June, was not open to the public for input. B.       AI in the NDAA for 2019 More recently, on August 13th, President Trump signed into law the John S. McCain National Defense Authorization Act (NDAA) for 2019,[3] which specifically authorizes the Department of Defense to appoint a senior official to coordinate activities relating to the development of AI technologies for the military, as well as to create a strategic plan for incorporating a number of AI technologies into its defense arsenal.  In addition, the NDAA includes the Foreign Investment Risk Review Modernization Act (FIRRMA)[4] and the Export Control Reform Act (ECRA),[5] both of which require the government to scrutinize cross-border transactions involving certain new technologies, likely including AI-related technologies.
FIRRMA modifies the review process currently used by the Committee on Foreign Investment in the United States (CFIUS), an interagency committee that reviews the national security implications of investments by foreign entities in the United States.  With FIRRMA’s enactment, the scope of the transactions that CFIUS can review is expanded to include those involving “emerging and foundational technologies,” defined as those that are critical for maintaining the national security technological advantage of the United States.  While the changes to the CFIUS process are still fresh and untested, increased scrutiny under FIRRMA will likely have an impact on available foreign investment in the development and use of AI, at least where the AI technology involved is deemed such a critical technology and is sought to be purchased or licensed by foreign investors. Similarly, ECRA requires the President to establish an interagency review process with various agencies including the Departments of Defense, Energy, State and the head of other agencies “as appropriate,” to identify emerging and foundational technologies essential to national security in order to impose appropriate export controls.  Export licenses are to be denied if the proposed export would have a “significant negative impact” on the U.S. defense industrial base.  The terms “emerging and foundational technologies” are not expressly defined, but it is likely that AI technologies, which are of course “emerging,” would receive a close look under ECRA and that ECRA might also curtail whether certain AI technologies can be sold or licensed to foreign entities. The NDAA also established a National Security Commission on Artificial Intelligence “to review advances in artificial intelligence, related machine learning developments, and associated technologies.”  The Commission, whose members are appointed by certain senior members of Congress as well as by the Secretaries of Defense and Commerce, will function independently from other such panels established by the Trump Administration and will review developments in AI along with assessing risks related to AI and related technologies to consider how those methods relate to the national security and defense needs of the United States.  The Commission will focus on technologies that provide the U.S. with a competitive AI advantage, and will look at the need for AI research and investment as well as consider the legal and ethical risks associated with the use of AI.  Members are to be appointed within 90 days of the Commission being established and an initial report to the President and Congress is to be submitted by early February 2019. C.       Additional Congressional Interest in AI/Automation While a number of existing bills with potential impacts on the development of AI technologies remain stalled in Congress,[6] two more recently-introduced pieces of legislation are also worth monitoring as they progress through the legislative process. In late June, Senator Feinstein (D-CA) sponsored the “Bot Disclosure and Accountability Act of 2018,” which is intended to address some of the concerns over the use of automated systems for distributing content through social media.[7] As introduced, the bill seeks to prohibit certain types of bot or other automated activity directed to political advertising, at least where such automated activity appears to impersonate human activity.
The bill would also require the Federal Trade Commission to establish and enforce regulations to require public disclosure of the use of bots, defined as any “automated software program or process intended to impersonate or replicate human activity online.”  The bill provides that any such regulations are to be aimed at the “social media provider,” and would place the burden of compliance on such providers of social media websites and other outlets.  Specifically, the FTC is to promulgate regulations requiring the provider to take steps to ensure that any users of a social media website owned or operated by the provider would receive “clear and conspicuous notice” of the use of bots and similar automated systems.  FTC regulations would also require social media providers to police their systems, removing non-compliant postings and/or taking other actions (including suspension or removal) against users that violate such regulations.  While there are significant differences, the Feinstein bill is nevertheless similar in many ways to California’s recently-enacted Bot disclosure law (S.B. 1001), discussed more fully in our previous client alert located here.[8] Also of note, on September 26th, a bipartisan group of Senators introduced the “Artificial Intelligence in Government Act,” which seeks to provide the federal government with additional resources to incorporate AI technologies in the government’s operations.[9] As written, this new bill would require the General Services Administration to bring on technical experts to advise other government agencies, conduct research into future federal AI policy, and promote inter-agency cooperation with regard to AI technologies.  The bill would also create yet another federal advisory board to advise government agencies on AI policy opportunities and concerns.  In addition, the newly-introduced legislation seeks to require the Office of Management and Budget to identify ways for the federal government to invest in and utilize AI technologies and tasks the Office of Personnel Management with anticipating and providing training for the skills and competencies the government will require going forward as it incorporates AI into its overall data strategy. II.       Potential Impact on AI Technology of Recent California Privacy Legislation Interestingly, in the related area of data privacy regulation, the federal government has been slower to respond, and it is the state legislatures that are leading the charge.[10] Most machine learning algorithms depend on the availability of large data sets for purposes of training, testing, and refinement.  Typically, the larger and more complete the datasets available, the better.  However, these datasets often include highly personal information about consumers, patients, or others of interest—data that can sometimes be used to predict information specific to a particular person even if attempts are made to keep the source of such data anonymous. The European Union’s General Data Protection Regulation, or GDPR, which went into force on May 25, 2018, has deservedly garnered a great deal of press as one of the first, most comprehensive collections of data privacy protections. While we’re only months into its effective period, the full impact and enforcement of the GDPR’s provisions have yet to be felt.  Still, many U.S. companies, forced to take steps to comply with the provisions of GDPR at least with regard to EU citizens, have opted to take many of those same steps here in the U.S., despite the fact that no direct U.S.
federal analogue to the GDPR yet exists.[11] Rather than wait for the federal government to act, several states have opted to follow the lead of the GDPR and enact their own versions of comprehensive data privacy laws.  Perhaps the most significant of these state-legislated omnibus privacy laws is the California Consumer Privacy Act (“CCPA”), signed into law on June 28, 2018, and slated to take effect on January 1, 2020.[12]  The CCPA is not identical to the GDPR, differing in a number of key respects.  However, there are many similarities, in that the CCPA also adopts a broad definition of personal information/data, and seeks to provide a right to notice of data collection, a right of access to and correction of collected data, a right to be forgotten, and a right to data portability.  But how do the CCPA’s requirements differ from the GDPR for companies engaged in the development and use of AI technologies?  While there are many issues to consider, below we examine several of the key differences of the CCPA and their impact on machine learning and other AI-based processing of collected data. A.       Inferences Drawn from Personal Information The GDPR defines personal data as “any information relating to an identified or identifiable natural person,” such as “a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”[13]  Under the GDPR, personal data has implications in the AI space beyond just the data that is actually collected from an individual.  AI technology can be and often is used to generate additional information about a person from collected data, e.g., spending habits, facial features, risk of disease, or other inferences that can be made from the collected data.  Such inferences, or derivative data, may well constitute “personal data” under a broad view of the GDPR, although there is no specific mention of derivative data in the definition. By contrast, the CCPA goes further and specifically includes “inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities and aptitudes.”[14]  An “inference” is defined as “the derivation of information, data, assumptions, or conclusions from evidence, or another source of information or data.”[15] Arguably the primary purpose of many AI systems is to draw inferences from a user’s information, by mining data, looking for patterns, and generating analysis.  Although the CCPA does limit inferences to those drawn “to create a profile about a consumer,” the term “profile” is not defined in the CCPA.  However, the use of consumer information that is “deidentified” or “aggregated” is permitted by the CCPA.  Thus, one possible solution may be to take steps to “anonymize” any personal data used to derive any inferences.  As a result, companies may want to carefully consider the derivative/processed data that they are storing about a user, and whether additional steps may be required for CCPA compliance.
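To make the anonymization point concrete for engineering teams, the following is a minimal sketch (in Python, with hypothetical field names of our own invention) of one way to strip direct identifiers from a stored record before inference-derived data is retained.  It is illustrative only: whether any given technique satisfies the CCPA’s definitions of “deidentified” or “aggregate” consumer information is a legal question, and salted hashing is pseudonymization rather than true anonymization, so robust deidentification may also require suppressing or generalizing quasi-identifiers.

```python
import hashlib

# Hypothetical record pairing collected data with an AI-derived inference.
record = {
    "email": "jane@example.com",                      # direct identifier
    "device_id": "A1B2-C3D4",                         # direct identifier
    "inferred_interests": ["running", "smart home"],  # derivative data
}

DIRECT_IDENTIFIERS = ("email", "device_id")

def deidentify(rec: dict, salt: str) -> dict:
    """Replace direct identifiers with salted one-way hashes so stored
    inferences are less readily linkable to a particular consumer."""
    out = dict(rec)
    for key in DIRECT_IDENTIFIERS:
        digest = hashlib.sha256((salt + out[key]).encode()).hexdigest()
        out[key] = digest[:16]
    return out

print(deidentify(record, salt="rotate-me-regularly"))
```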
B.       Identifying Categories of Personal Information The CCPA also requires disclosures of the categories of personal information being collected, the categories of sources from which personal information is collected, the purpose for collecting and selling personal information, and the categories of third parties with whom the business shares personal information.[16]  Although these categories are likely known and definable for static data collection, it may be more difficult to specifically disclose the purpose and categories for certain information when dynamic machine learning algorithms are used.  This is particularly true when, as discussed above, inferences about a user are included as personal information.  In order to meet these disclosure requirements, companies may need to carefully consider how they will define all of the categories of personal information collected or the purposes of use of that information, particularly when machine learning algorithms are used to generate additional inferences from, or derivatives of, personal data. C.       Personal Data Includes Households The CCPA’s definition of “personal data” also includes information pertaining to non-individuals, such as “households” – a term that the CCPA does not further define.[17]  In the absence of an explicit definition, the term “household” would seem to target information collected about a home and its inhabitants through smart home devices, such as thermostats, cameras, lights, TVs, and so on.  As to the types of personal data being collected, the CCPA may also encompass information about each of these smart home devices, such as name, location, usage, and special instructions (e.g., temperature controls, light timers, and motion sensing).  Furthermore, any inferences or derivative information generated by AI algorithms from the information collected from these smart home devices may also be covered as personal information.  Arguably, this could include information such as conversations with voice assistants or even information about when people are likely to be home determined via cameras or motion sensors.  Companies developing smart home or other Internet of Things devices should thus carefully consider whether the scope and use they make of any information collected from “households” falls under the CCPA requirements for disclosure or other restrictions. III.       Continuing Efforts to Regulate Autonomous Vehicles Much like the potential for a comprehensive U.S. data privacy law, and despite a flurry of legislative activity in Congress in 2017 and early 2018 towards such a national regulatory framework, autonomous vehicles continue to operate under a complex patchwork of state and local rules with limited federal oversight.  We previously provided an update (located here)[18] discussing the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act,[19] which passed the U.S. House of Representatives by voice vote in September 2017, and its companion bill (the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act).[20]  Both bills have since stalled in the Senate, and with them the anticipated implementation of a uniform regulatory framework for the development, testing and deployment of autonomous vehicles. As the two bills languish in Congress, ‘chaperoned’ autonomous vehicles have already begun coexisting on roads alongside human drivers.
The accelerating pace of policy proposals—and debate surrounding them—looks set to continue in late 2018 as virtually every major automaker is placing more autonomous vehicles on the road for testing and some manufacturers prepare to launch commercial services such as self-driving taxi ride-shares[21] into a national regulatory vacuum. A.       “Light-touch” Regulation The delineation of federal and state regulatory authority has emerged as a key issue because autonomous vehicles do not fit neatly into the existing regulatory structure.  One of the key aspects of the proposed federal legislation is that it empowers the National Highway Traffic Safety Administration (NHTSA) with the oversight of manufacturers of self-driving cars through enactment of future rules and regulations that will set the standards for safety and govern areas of privacy and cybersecurity relating to such vehicles.  The intention is to have a single body (the NHTSA) develop a consistent set of rules and regulations for manufacturers, rather than continuing to allow the states to adopt a web of potentially widely differing rules and regulations that may ultimately inhibit development and deployment of autonomous vehicles.  This approach was echoed by safety guidelines released by the Department of Transportation (DoT) for autonomous vehicles.  Through the guidelines (“a nonregulatory approach to automated vehicle technology safety”),[22] the DoT avoids any compliance requirement or enforcement mechanism, at least for the time being, as the scope of the guidance is expressly to support the industry as it develops best practices in the design, development, testing, and deployment of automated vehicle technologies. Under the proposed federal legislation, the states can still regulate autonomous vehicles, but the guidance encourages states not to pass laws that would “place unnecessary burdens on competition and innovation by limiting [autonomous vehicle] testing or deployment to motor vehicle manufacturers only.”[23]  The third iteration of the DoT’s federal guidance, published on October 4, 2018, builds upon—but does not replace—the existing guidance, and reiterates that the federal government is placing the onus for safety on companies developing the technologies rather than on government regulation.[24]  The guidelines, which now include buses, transit and trucks in addition to cars, remain voluntary. B.       Safety Much of the delay in enacting a regulatory framework is a result of policymakers’ struggle to balance the industry’s desire to speed both the development and deployment of autonomous vehicle technologies with the safety and security concerns of consumer advocates. The AV START bill requires NHTSA to construct comprehensive safety regulations for AVs with a mandated, accelerated timeline for rulemaking, and the bill puts in place an interim regulatory framework that requires manufacturers to submit a Safety Evaluation Report addressing a range of key areas at least 90 days before the testing, sale, or commercialization of driverless cars.
But some lawmakers and consumer advocates remain skeptical in the wake of highly publicized setbacks in autonomous vehicle testing.[25]  Although the National Transportation Safety Board (NTSB) has authority to investigate auto accidents, there is still no federal regulatory framework governing liability for individuals and states.[26]  There are also ongoing concerns over cybersecurity risks,[27] the use of forced arbitration clauses by autonomous vehicle manufacturers,[28] and miscellaneous engineering problems that revolve around the way in which autonomous vehicles interact with obstacles commonly faced by human drivers, such as emergency vehicles,[29] graffiti on road signs or even raindrops and tree shadows.[30] In August 2018, the Governors Highway Safety Association (GHSA) published a report outlining the key questions that manufacturers should urgently address.[31]  The report suggested that states seek to encourage “responsible” autonomous car testing and deployment while protecting public safety and that lawmakers “review all traffic laws.”  The report also notes that public debate often blurs the boundaries between the different levels of automation the NHTSA has defined (ranging from level 0 (no automation) to level 5 (fully self-driving without the need for human occupants)), remarking that “most AVs for the foreseeable future will be Levels 2 through 4.  Perhaps they should be called ‘occasionally self-driving.'”[32] C.       State Laws Currently, 21 states and the District of Columbia have passed laws regulating the deployment and testing of self-driving cars, and governors in 10 states have issued executive orders related to them.[33]  For example, California expanded its testing rules in April 2018 to allow for remote monitoring instead of a safety driver inside the vehicle.[34]  However, state laws differ on basic terminology, such as the definition of “vehicle operator.” Tennessee SB 151[35] points to the autonomous driving system (ADS), while Texas SB 2205[36] designates a “natural person” riding in the vehicle.  Meanwhile, Georgia SB 219[37] identifies the operator as the person who causes the ADS to engage, which might happen remotely in a vehicle fleet. These distinctions will affect how states license both human drivers and autonomous vehicles going forward.  Companies operating in this space accordingly need to stay abreast of legal developments in states in which they are developing or testing autonomous vehicles, while understanding that any new federal regulations may ultimately preempt those states’ authorities to determine, for example, crash protocols or how they handle their passengers’ data. D.       ‘Rest of the World’ While the U.S. was the first country to legislate for the testing of automated vehicles on public roads, the absence of a national regulatory framework risks impeding innovation and development.  In the meantime, other countries are vying for pole position among manufacturers looking to test vehicles on roads.[38]  KPMG’s 2018 Autonomous Vehicles Readiness Index ranks 20 countries’ preparedness for an autonomous vehicle future. The Netherlands took the top spot, outperforming the U.S. (3rd) and China (16th).[39]  Japan and Australia plan to have self-driving cars on public roads by 2020.[40]  The U.K. government has announced that it expects to see fully autonomous vehicles on U.K.
roads by 2021, and has introduced legislation—the Automated and Electric Vehicles Act 2018—which establishes an insurance framework addressing product liability issues arising out of accidents involving autonomous cars, including those wholly caused by an autonomous vehicle “when driving itself.”[41] E.       Looking Ahead While autonomous vehicles operating on public roads are likely to remain subject to both federal and state regulation, the federal government is facing increasing pressure to adopt a federal regulatory scheme for autonomous vehicles in 2018.[42]  Almost exactly one year after the House passed the SELF DRIVE Act, House Energy and Commerce Committee leaders called on the Senate to advance automated vehicle legislation, stating that “[a]fter a year of delays, forcing automakers and innovators to develop in a state-by-state patchwork of rules, the Senate must act to support this critical safety innovation and secure America’s place as a global leader in technology.”[43]  The continued absence of federal regulation renders the DoT’s informal guidance increasingly important.  The DoT has indicated that it will enact “flexible and technology-neutral” policies—rather than prescriptive performance-based standards—to encourage regulatory harmony and consistency as well as competition and innovation.[44]  Companies searching for more tangible guidance on safety standards at the federal level may find it useful to review the recent guidance issued alongside the DoT’s announcement that it is developing (and seeking public input into) a pilot program for ‘highly or fully’ autonomous vehicles on U.S. roads.[45]  The safety standards being considered include technology disabling the vehicle if a sensor fails or barring vehicles from traveling above safe speeds, as well as a requirement that NHTSA be notified of any accident within 24 hours. [1] See https://www.whitehouse.gov/wp-content/uploads/2018/05/Summary-Report-of-White-House-AI-Summit.pdf; note also that the Trump Administration’s efforts in studying AI technologies follow, but appear largely separate from, several workshops on AI held by the Obama Administration in 2016, which resulted in two reports issued in late 2016 (see Preparing for the Future of Artificial Intelligence, and Artificial Intelligence, Automation, and the Economy). [2] Id. at Appendix A. [3] See https://www.mccain.senate.gov/public/index.cfm/2018/8/senate-passes-the-john-s-mccain-national-defense-authorization-act-for-fiscal-year-2019.  The full text of the NDAA is available at https://www.congress.gov/bill/115th-congress/house-bill/5515/text.  For additional information on CFIUS reform implemented by the NDAA, please see Gibson Dunn’s previous client update at https://www.gibsondunn.com/cfius-reform-our-analysis/. [4] See id.; see also https://www.treasury.gov/resource-center/international/Documents/FIRRMA-FAQs.pdf. [5] See https://foreignaffairs.house.gov/wp-content/uploads/2018/02/HR-5040-Section-by-Section.pdf. [6] See, e.g., infra Section III discussion of the SELF DRIVE and AV START Acts, among others. [7] S.3127, 115th Congress (2018). [8] https://www.gibsondunn.com/new-california-security-of-connected-devices-law-and-ccpa-amendments/. [9] S.3502, 115th Congress (2018). [10] See also infra Section III for more discussion of specific regulatory efforts for autonomous vehicles.
[11] However, as 2018 has already seen a fair number of hearings before Congress relating to digital data privacy issues, including appearances by key executives from many major tech companies, it seems likely that it may not be long before we see the introduction of a “GDPR-like” comprehensive data privacy bill.  Whether any resulting federal legislation would actually pre-empt state-enacted privacy laws to establish a unified federal framework is itself a hotly-contested issue, and remains to be seen. [12] AB 375 (2018); Cal. Civ. Code §1798.100, et seq. [13] Regulation (EU) 2016/679 (General Data Protection Regulation), Article 4 (1). [14] Cal. Civ. Code §1798.140(o)(1)(K). [15] Id.. at §1798.140(m). [16] Id. at §1798.110(c). [17] Id. at §1798.140(o)(1). [18] https://www.gibsondunn.com/accelerating-progress-toward-a-long-awaited-federal-regulatory-framework-for-autonomous-vehicles-in-the-united-states/. [19]   H.R. 3388, 115th Cong. (2017). [20]   U.S. Senate Committee on Commerce, Science and Transportation, Press Release, Oct. 24, 2017, available at https://www.commerce.senate.gov/public/index.cfm/pressreleases?ID=BA5E2D29-2BF3-4FC7-A79D-58B9E186412C. [21]   Sean O’Kane, Mercedes-Benz Self-Driving Taxi Pilot Coming to Silicon Valley in 2019, The Verge, Jul. 11, 2018, available at https://www.theverge.com/2018/7/11/17555274/mercedes-benz-self-driving-taxi-pilot-silicon-valley-2019. [22]   U.S. Dept. of Transp., Automated Driving Systems 2.0: A Vision for Safety 2.0, Sept. 2017, https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf. [23]   Id., at para 2. [24]   U.S. DEPT. OF TRANSP., Preparing for the Future of Transportation: Automated Vehicles 3.0, Oct. 4, 2018, https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf. [25]   Sasha Lekach, Waymo’s Self-Driving Taxi Service Could Have Some Major Issues, Mashable, Aug. 28, 2018, available at https://mashable.com/2018/08/28/waymo-self-driving-taxi-problems/#dWzwp.UAEsqM. [26]   Robert L. Rabin, Uber Self-Driving Cars, Liability, and Regulation, Stanford Law School Blog, Mar. 20, 2018, available at https://law.stanford.edu/2018/03/20/uber-self-driving-cars-liability-regulation/. [27]   David Shephardson, U.S. Regulators Grappling with Self-Driving Vehicle Security, Reuters. Jul. 10, 2018, available at https://www.reuters.com/article/us-autos-selfdriving/us-regulators-grappling-with-self-driving-vehicle-security-idUSKBN1K02OD. [28]   Richard Blumenthal, Press Release, Ten Senators Seek Information from Autonomous Vehicle Manufacturers on Their Use of Forced Arbitration Clauses, Mar. 23, 2018, available at https://www.blumenthal.senate.gov/newsroom/press/release/ten-senators-seek-information-from-autonomous-vehicle-manufacturers-on-their-use-of-forced-arbitration-clauses. [29]   Kevin Krewell, How Will Autonomous Cars Respond to Emergency Vehicles, Forbes, Jul. 31, 2018, available at https://www.forbes.com/sites/tiriasresearch/2018/07/31/how-will-autonomous-cars-respond-to-emergency-vehicles/#3eed571627ef. [30]   Michael J. Coren, All The Things That Still Baffle Self-Driving Cars, Starting With Seagulls, Quartz, Sept. 23, 2018, available at https://qz.com/1397504/all-the-things-that-still-baffle-self-driving-cars-starting-with-seagulls/. [31]   ghsa, Preparing For Automated Vehicles: Traffic Safety Issues For States, Aug. 
2018, available at https://www.ghsa.org/sites/default/files/2018-08/Final_AVs2018.pdf. [32]   Id., at 7. [33]   Brookings, The State of Self-Driving Car Laws Across the U.S., May 1, 2018, available at https://www.brookings.edu/blog/techtank/2018/05/01/the-state-of-self-driving-car-laws-across-the-u-s/. [34]   Aarian Marshall, Fully Self-Driving Cars Are Really Truly Coming to California, Wired, Feb. 26, 2018, available at, https://www.wired.com/story/california-self-driving-car-laws/; State of California, Department of Motor Vehicles, Autonomous Vehicles in California, available at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/bkgd. [35]   SB 151, available at http://www.capitol.tn.gov/Bills/110/Bill/SB0151.pdf. [36]   SB 2205, available at https://legiscan.com/TX/text/SB2205/2017. [37]   SB 219, available at http://www.legis.ga.gov/Legislation/en-US/display/20172018/SB/219. [38]   Tony Peng & Michael Sarazen, Global Survey of Autonomous Vehicle Regulations, Medium, Mar. 15, 2018, available at https://medium.com/syncedreview/global-survey-of-autonomous-vehicle-regulations-6b8608f205f9. [39]   KPMG, Autonomous Vehicles Readiness Index: Assessing Countries’ Openness and Preparedness for Autonomous Vehicles, 2018, (“The US has a highly innovative but largely disparate environment with little predictability regarding the uniform adoption of national standards for AVs. Therefore the prospect of  widespread driverless vehicles is unlikely in the near future. However, federal policy and regulatory guidance could certainly accelerate early adoption . . .”), p. 17, available at https://assets.kpmg.com/content/dam/kpmg/nl/pdf/2018/sector/automotive/autonomous-vehicles-readiness-index.pdf. [40]   Stanley White, Japan Looks to Launch Autonomous Car System in Tokyo by 2020, Automotive News, Jun. 4, 2018, available at http://www.autonews.com/article/20180604/MOBILITY/180609906/japan-self-driving-car; National Transport Commission Australia, Automated vehicles in Australia, available at https://www.ntc.gov.au/roads/technology/automated-vehicles-in-australia/. [41]   The Automated and Electric Vehicles Act 2018, available at http://www.legislation.gov.uk/ukpga/2018/18/contents/enacted; Lexology, Muddy Road Ahead Part II: Liability Legislation for Autonomous Vehicles in the United Kingdom, Sept. 21, 2018,  https://www.lexology.com/library/detail.aspx?g=89029292-ad7b-4c89-8ac9-eedec3d9113a; see further Anne Perkins, Government to Review Law Before Self-Driving Cars Arrive on UK Roads, The Guardian, Mar. 6, 2018, available at https://www.theguardian.com/technology/2018/mar/06/self-driving-cars-in-uk-riding-on-legal-review. [42]   Michaela Ross, Code & Conduit Podcast: Rep. Bob Latta Eyes Self-Driving Car Compromise This Year, Bloomberg Law, Jul. 26, 2018, available at https://www.bna.com/code-conduit-podcast-b73014481132/. [43]   Freight Waves, House Committee Urges Senate to Advance Self-Driving Vehicle Legislation, Sept. 10, 2018, available at https://www.freightwaves.com/news/house-committee-urges-senate-to-advance-self-driving-vehicle-legislation; House Energy and Commerce Committee, Press Release, Sept. 5, 2018, available at https://energycommerce.house.gov/news/press-release/media-advisory-walden-ec-leaders-to-call-on-senate-to-pass-self-driving-car-legislation/. [44]   See supra n. 24, U.S. DEPT. OF TRANSP., Preparing for the Future of Transportation: Automated Vehicles 3.0, Oct. 4, 2018, iv. [45]   David Shephardson, Self-driving cars may hit U.S. 
roads in pilot program, NHTSA says, Automotive News, Oct. 9, 2018, available at http://www.autonews.com/article/20181009/MOBILITY/181009630/self-driving-cars-may-hit-u.s.-roads-in-pilot-program-nhtsa-says. Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, or the authors: H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com) Claudia M. Barrett – Washington, D.C. (+1 202-887-3642, cbarrett@gibsondunn.com) Frances Annika Smithson – Los Angeles (+1 213-229-7914, fsmithson@gibsondunn.com) Ryan K. Iwahashi – Palo Alto (+1 650-849-5367, riwahashi@gibsondunn.com) Please also feel free to contact any of the following: Automotive/Transportation: Theodore J. Boutrous, Jr. – Los Angeles (+1 213-229-7000, tboutrous@gibsondunn.com) Christopher Chorba – Los Angeles (+1 213-229-7396, cchorba@gibsondunn.com) Theane Evangelis – Los Angeles (+1 213-229-7726, tevangelis@gibsondunn.com) Privacy, Cybersecurity and Consumer Protection: Alexander H. Southwell – New York (+1 212-351-3981, asouthwell@gibsondunn.com) Public Policy: Michael D. Bopp – Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com) Mylan L. Denerstein – New York (+1 212-351-3850, mdenerstein@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

October 5, 2018 |
New California Security of Connected Devices Law and CCPA Amendments

Click for PDF California continues to lead the United States in focusing attention on privacy and security of user data and devices.  Last week, Governor Jerry Brown signed into law two identical bills requiring manufacturers to include “reasonable security feature[s]” on all devices that are “capable of connecting to the Internet” (commonly known as the Internet of Things).[1]  The law is described as the first of its kind in the United States, and comes just three months after passage of the California Consumer Privacy Act of 2018 (“CCPA”);[2] both laws are set to take effect January 1, 2020.[3]  Collectively, these laws represent a dramatic expansion of data privacy law that will impact the products and processes of many companies. Also last week, Governor Brown signed into law Senate Bill 1121, which implemented amendments to the CCPA relating primarily to enforcement of the provisions, and clarification of exemptions relating to medical information. Security of Connected Devices The new law is aimed at protecting “connected devices” from unauthorized access, and requires “reasonable security feature[s]” proportional to the device’s “nature and function” and the “information it may collect, contain, or transmit.”[4]  There are various notable exclusions, particularly where the devices are covered by certain other laws, or when a company merely purchases devices for resale (or for branding and resale) in California.[5]  Nonetheless, the law is unique in that it may require security for Internet-connected products regardless of the type of information or data at issue—a contrast to the CCPA and other data privacy and security laws. Who Must Comply with the Law? Anyone “who manufactures, or contracts with another person to manufacture on the person’s behalf, connected devices that are sold or offered for sale in California” is subject to the statute.[6]  However, the law includes an explicit carve-out that “contract[ing] with another person to manufacture on the person’s behalf” does not include a “contract only to purchase a connected device, or only to purchase and brand a connected device.”[7]  Thus, if a company is merely purchasing whole units and reselling, or even branding and reselling—effectively without the ability to indicate specifications for the device—it will likely not be subject to the new law. What’s Required? The law applies to manufacturers of “connected devices.”  A “connected device” is defined as “capable of connecting to the Internet . . . and . . . assigned an Internet Protocol address or Bluetooth address.”[8]  The number of products falling into this category is increasing at a remarkable rate, and the products span a multitude of applications, from consumer products (such as smart home features, including automatic lights or thermostats controlled remotely) to commercial use cases (such as electronic toll systems and “smart agriculture”). The law requires that such manufacturers “equip the device with a reasonable security feature or features” that is:
- Appropriate to the nature and function of the device;
- Appropriate to the information it may collect, contain, or transmit; and
- Designed to protect the device and its information from unauthorized access, destruction, use, modification, or disclosure.[9]
The law does not specify what is “reasonable,” and relies upon the manufacturer to determine what is appropriate to the device.  As a result, “reasonable” will likely be further refined through enforcement actions (described below).  However, the law does provide that a device will satisfy the provisions if it is “equipped with a means for authentication outside a local area network,” and (1) each device is preprogrammed with a unique password, or (2) the user must create a “new means of authentication” (such as a password) before the device may be used.[10][11]
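For product teams, this statutory safe harbor translates naturally into first-boot provisioning logic.  The following is a minimal, illustrative sketch (in Python, with hypothetical names of our own; not a compliance determination) of the two options the statute describes: a unique preprogrammed password per unit, or blocking use of the device until the user creates a new means of authentication.

```python
import secrets
from typing import Callable, Optional

def provision_unique_password() -> str:
    # Option (1): generate a per-unit credential at manufacture, so no two
    # devices ship with the same default password.
    return secrets.token_urlsafe(16)

class Device:
    """Hypothetical connected-device model, for illustration only."""

    def __init__(self, preprogrammed_password: Optional[str] = None):
        self.password = preprogrammed_password
        self.setup_complete = preprogrammed_password is not None

    def first_boot(self, prompt_for_password: Callable[[], Optional[str]]) -> None:
        # Option (2): if no unique preprogrammed password exists, refuse to
        # operate until the user creates a new means of authentication.
        if self.setup_complete:
            return
        new_password = prompt_for_password()
        if not new_password:
            raise RuntimeError("device unusable until credentials are set")
        self.password = new_password
        self.setup_complete = True

# A unit with a unique factory password boots immediately (option 1);
# a unit without one must collect credentials before it can be used (option 2).
unit_a = Device(provision_unique_password())
unit_a.first_boot(lambda: None)

unit_b = Device()
unit_b.first_boot(lambda: "correct-horse-battery-staple")
```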
What’s Not Covered? Notably, the law excludes certain devices or manufacturers, particularly where they are covered by other existing laws, and makes clear statements of what this law does not do.  For example, the law does not apply to[12]:
- Any unaffiliated third-party software or applications the user adds to the device;
- Any provider of an electronic store, gateway, marketplace, or other means of purchasing or downloading software or applications;
- Devices subject to security requirements under federal law (e.g., FDA); and
- “Manufacturers” subject to HIPAA or the Confidentiality of Medical Information Act—at least “with respect to any activity regulated by those acts.”[13]
How Will It Be Enforced? The law expressly does not provide for a private right of action, and it may only be enforced by the “Attorney General, a city attorney, a county counsel, or a district attorney.”[14]  It further does not set forth any criminal penalty, include a maximum civil fine, or specify any other authorized relief.  Nonetheless, the authorization of the enumerated entities to enforce it presumably includes the authority for those entities to seek civil fines, as they can under other consumer protection statutes (for example, Section 17206 of the California Business & Professions Code).[15] What Can You Do? If your company sells, or intends to sell, a product in California that connects to the Internet, consider:
- Whether the company is a “manufacturer”;
- The security features of the device, if any;
- What security features might be reasonable given the nature and function of the device and the nature of the data collected or used;
- Possibilities for alternative or additional security measures for the specific device; and
- Engineering resources and the timeline required to implement additional features.
Many connected devices on the market today already have authentication and security features, but even those that do may benefit from an evaluation of their sufficiency in preparation for this new law.  Because the law may require actual product changes, rather than merely policy changes, addressing these issues early is important.  Consultation with legal and information security professionals may be helpful. Amendments to CCPA Signed by Governor Brown on September 23, 2018 As anticipated, the California Legislature has begun to pass amendments to the CCPA, though the current changes are relatively modest.
Governor Brown signed the latest amendments to the CCPA on September 23, 2018, which included[16]:

- Extending the deadline for the California Attorney General ("AG") to develop and publish rules implementing the CCPA until July 1, 2020;
- Prohibiting the AG from enforcing the Act until either July 1, 2020, or six months after the publication of the regulations, whichever comes first;
- Limiting the civil penalties that the AG can impose to $2,500 for each violation of the CCPA, or up to $7,500 for each intentional violation;
- Removing the requirement that a consumer notify the AG within 30 days of filing a civil action in the event of a data breach and then wait six months to see if the AG elects to pursue the case;
- Clarifying that consumers have a right of action only for a business's alleged failure to "implement and maintain reasonable security procedures and practices" that results in a breach, and not for any other violations of the Act;
- Updating the definition of "personal information" to stress that certain identifiers (e.g., IP address, geolocation information, and web browsing history) constitute personal information only if the data can be "reasonably linked, directly or indirectly, with a particular consumer or household"; and
- Explicitly exempting entities covered by HIPAA, GLBA, and the DPPA, as well as California's Confidentiality of Medical Information Act and its Financial Information Privacy Act.

The foregoing amendments are relatively modest in significance—they were passed on the last day of the most recent legislative session. The California Legislature is expected to consider more substantive changes to the law when it reconvenes for the 2019-2020 session in January 2019, including addressing additional concerns regarding enforcement mechanisms, the law's broad scope, and the sweeping disclosure obligations. Companies that may be impacted by the CCPA should continue to monitor legislative and regulatory developments relating to the CCPA, and should begin planning for the implementation of this broad statute.

[1] Assembly Bill 1906 and Senate Bill 327 contain identical language.
[2] The California Consumer Privacy Act was the subject of a detailed analysis in a client alert issued by Gibson Dunn on July 12, 2018. That publication is available here.
[3] The law will be enacted as California Civil Code Sections 1798.91.04 to 1798.91.06.
[4] Cal. Civil Code § 1798.91.04(a)(1) and (a)(2).
[5] Cal. Civil Code § 1798.91.05(c) and § 1798.91.06.
[6] Cal. Civil Code § 1798.91.05(c).
[7] Cal. Civil Code § 1798.91.05(c).
[8] Cal. Civil Code § 1798.91.05(b).
[9] Cal. Civil Code § 1798.91.04(a)(1), (a)(2), and (a)(3).
[10] Cal. Civil Code § 1798.91.04(b) (emphasis added).
[11] Authentication is simply defined as a "method of verifying the authority" of a user accessing the information or device. Cal. Civil Code § 1798.91.05(a).
[12] Cal. Civil Code § 1798.91.06.
[13] That said, those laws generally require stricter provisions for security measures.
[14] Cal. Civil Code § 1798.91.06(e).
[15] See Cal. Bus. & Prof. Code § 17204.
[16] S.B. 1121, 2017-2018 Reg. Sess. (Cal. 2018).

The following Gibson Dunn lawyers assisted in the preparation of this client alert: Joshua A. Jessen, Benjamin B. Wagner, and Cassandra L. Gaedt-Sheckter.

March 16, 2018 |
Aerospace and Related Technologies – Key Developments in 2017 and Early 2018

This March 2018 edition of Gibson Dunn's Aerospace and Related Technologies Update discusses newsworthy developments, trends, and key decisions from 2017 and early 2018 that are of interest to aerospace and defense, satellite, and drone companies, as well as new market entrants in the commercial space and related technology sectors, including the private equity and other financial institutions that support and enable their growth. Specifically, this update covers the following areas: (1) commercial unmanned aircraft systems ("UAS"), or drones; (2) government contracts litigation involving companies in the aerospace and defense industry; (3) the commercial space sector; and (4) cybersecurity and privacy issues related to the national airspace. We discuss each of these areas in turn below.

I.    COMMERCIAL UNMANNED AIRCRAFT SYSTEMS

The commercial drone industry has continued to mature through advancements in technology, government relations, and public perception. Commercial drones are being used for various kinds of sensory data collection, building inspections, utility inspections, agriculture monitoring and treatment, railway inspections, pipeline inspections, mapping of mines, and photography. New drone applications are being created on a regular basis. For example, the concept of flying drone taxis was validated in Dubai in September 2017, when an uncrewed two-seater drone successfully conducted its first test flight.

Around a year and a half ago, United States regulations governing non-recreational drone operations were finalized. Since then, the Federal Aviation Administration ("FAA") has issued over 60,000 remote pilot certificates. The FAA has made, and continues to make, efforts to advance its technology, and it recently released a prototype application to provide operators with automatic approval of specific airspace authorizations. The national beta test of this system will launch in 2018, and we will be sure to report back with the results.

One of the biggest boons for the industry over the past 15 months was the positive public perception stemming from Hurricane Harvey relief efforts. In the days following the disaster, drones worked in concert with government agencies to support search and rescue missions, inspect roads and railroads, and assess water plants, oil refineries, cell towers, and power lines. Further, major insurance companies used drones to assess claims in a safer, faster, and more efficient manner. The aftermath of this disaster demonstrated the value of drone technology and has driven an increasingly positive public perception of the industry. Indeed, even aside from the disaster relief efforts, media sources continue to carry positive drone stories. For example, in January 2018, Australian lifeguards were testing a drone with the ability to release an inflatable rescue pod; during its testing, the drone was called into action and rescued two teenagers from drowning.

The future is bright, but there are still many obstacles for the industry to overcome before it fully matures, such as clarity around low-altitude airspace, privacy concerns, and the risk to people, property, and other aircraft.
To get you caught up on 2017 and early 2018 drone developments, we have briefly summarized below: (A) highlights of drone litigation impacting airspace, including highlights from previous years for context; (B) drone registration; (C) privacy issues related to drones; (D) the United States government's expanded use of drones; (E) drone countermeasures; (F) drone safety studies; and (G) the UAS airspace integration pilot program.

A.    Litigation Highlights Regarding Airspace

Huerta v. Haughwout, No. 3:16-cv-358, Dkt. No. 30 (D. Conn. Jul. 18, 2016)

The latter half of 2016 featured an important decision regarding the FAA's authority over low-level airspace. The 2016 decision, Huerta v. Haughwout (also known as "the flamethrower drone case"), involved two YouTube videos posted by the Haughwouts. One video featured a drone firing an attached handgun, while a second video showed a drone using an attached flamethrower to scorch a turkey. After the videos were publicly uploaded, the FAA served the Haughwouts with an administrative subpoena to acquire further information about the activities featured in the videos. The Haughwouts refused to comply with the FAA's subpoenas, asserting that their activities were not subject to investigation by the FAA. In response, the FAA sought enforcement of the subpoenas in the District of Connecticut.[1]

Judge Jeffrey Meyer found the administrative subpoenas to be valid. Most importantly, however, his order included dicta casting doubt on the FAA's claim to control all airspace from the ground up: "The FAA believes it has regulatory sovereignty over every inch of outdoor air in the United States…. [T]hat ambition may be difficult to reconcile with the terms of the FAA's statute that refer to 'navigable airspace.'" While this dicta addressed the question of where the FAA's authority begins, Judge Meyer also noted that "the case does not yet require an answer to that question."[2] Judge Meyer further stated:

Congress surely understands that state and local authorities are (usually) well positioned to regulate what people do in their own backyards. The Constitution creates a limited national government in recognition of the traditional police power of state and local government. No clause in the Constitution vests the federal government with a general police power over all of the air or all objects that leave the ground. Although the Commerce Clause allows for broad federal authority over interstate and foreign commerce, it is far from clear that Congress intends–or could constitutionally intend–to regulate all that is airborne on one's own property and that poses no plausible threat to or substantial effect on air transport or interstate commerce in general.[3]

2017 featured the resolution of another lawsuit in which the plaintiff attempted to extend the significance of Haughwout in an effort to get the courts to address the question of what "navigable airspace" means in the context of drones (see discussion of Singer v. City of Newton, infra).

Boggs v. Merideth, No. 3:16-cv-00006 (W.D. Ky. Jan. 4, 2016)

In Boggs v. Merideth—better known as "the Drone Slayer case"—a Kentucky landowner shot down an operator's drone with a shotgun.[4] The plaintiff flew his drone roughly 200 feet above the defendant's property; the defendant—the self-anointed "Drone Slayer"—claimed the drone was trespassing and invading his privacy, and shot it down.
The plaintiff believed the airspace 200 feet above the ground was federal airspace, and therefore the defendant could not claim the drone was trespassing. Following a state judge's finding that the defendant acted "within his rights," the drone operator filed a complaint in federal court for declaratory judgment to "define clearly the rights of aircraft operators and property owners."[5] The case had the potential to be a key decision on the scope of federal authority over the use of airspace. Rather than claiming defense of property, however, the defendant moved to dismiss the complaint on jurisdictional grounds. The plaintiff unsuccessfully attempted to rely on the decision in Huerta v. Haughwout for the proposition that all cases involving the regulation of drone flight should be resolved by federal courts. The court rejected the plaintiff's argument, noting that Haughwout concerned only the FAA's ability to exercise subpoena power and enforce subpoenas in federal court. In fact, the district court noted, the court in Haughwout "expressed serious skepticism as to whether all unmanned aircrafts are subject to FAA regulation."[6] In his March 2017 order, Senior District Court Judge Thomas B. Russell granted the defendant's motion to dismiss for lack of federal jurisdiction, stating that the issue of whether the drone was in protected airspace would arise only on the presumption that the defendant would raise the defense that he was defending his property.[7] Consequently, there was no federal question jurisdiction, and the case was dismissed without ever reaching the merits.

While the question of what exactly constitutes "navigable airspace" in the drone context remained unanswered in 2017, the year did mark the beginning of federal courts addressing the overlap between conflicting state, local, and federal drone laws.

Singer v. City of Newton, No. 1:17-cv-10071 (D. Mass. Jan. 17, 2017)

On September 21, 2017, a federal judge in the District of Massachusetts held that portions of an ordinance of the City of Newton, Massachusetts ("Newton") attempting to regulate unmanned aircraft operations within the city were invalid.[8] The case, Singer v. City of Newton, marks the first time a federal court has struck down a local ordinance attempting to regulate drones. The court held the following four ordinance provisions to be unenforceable: (1) a requirement that all owners register their drones with the city; (2) a ban on all drone operations under 400 feet over private property unless conducted with the express permission of the property owner; (3) a ban on all drone operations over public property, regardless of altitude, unless conducted with the express permission of the city; and (4) a requirement that no drone be operated beyond the visual line of sight of its operator.[9] All four provisions were found to be preempted by federal regulations promulgated by the FAA.

In holding that the four sections of Newton's ordinance were each preempted, the court identified the congressional objectives each section inhibited. One relevant congressional objective is to make the FAA the exclusive regulatory authority for registration of drones.
The Newton ordinance required the registration of drones with the City of Newton, which impeded Congress's objective; thus, the court found that section to be preempted.[10] The court also identified a congressional objective for the FAA to develop a comprehensive plan to safely accelerate the integration of drones into the national airspace system. The two sections of the Newton ordinance requiring prior permission to fly above both public and private property within the city effectively eliminated any drone activity without prior permission; thus, those sections were held to interfere with the federal objective and were invalidated.[11] Lastly, the court found that the ordinance's provision barring drone usage beyond the visual line of sight of the operator conflicted with a less restrictive FAA rule allowing such usage if a waiver is obtained or if a separate visual observer can see the drone throughout its flight and assist the operator.[12]

The Singer ruling marked the long-anticipated beginning of federal courts addressing overlapping state, local, and federal drone laws. While the ruling is significant for invalidating sections of a local ordinance, and thus for establishing a framework that federal courts may follow to invalidate state and local drone laws elsewhere, it is important not to overstate the case's current significance. The court in Singer declined to hold that local law relating to airspace was expressly preempted or field preempted, resting its decision instead on conflict preemption. Consequently, the case does not support the assertion that all state and local drone laws related to airspace will be preempted by FAA regulations. Further, the court did not opine on the lower limits of the National Airspace and whether it extends to the ground, an issue likely to come up in future litigation.

The unchallenged portions of the Newton ordinance still stand, and the closing lines of the opinion recognize that Newton is free to redraft the invalidated portions to avoid direct conflict with FAA regulations. Thus, it remains possible, even in the District of Massachusetts, for federal law to coexist with state and local laws in this field. To avoid invalidation in the courts, however, state and local lawmakers must draft legislation that allows for compliance with federal regulations and does not interfere with federal objectives.

The year 2017 left much still to be determined by the courts. While Singer demonstrated that preemption concerns do and will continue to exist, the case did not address the boundary of the National Airspace. Haughwout did address the boundary—though only through dicta—and suggested that, when the issue is decided, the boundary will likely not extend to the ground. Thus, as was the case at the start of 2017, where the boundary will be drawn remains to be seen.

B.    Drone Registration: From Mandatory to Optional and Back to Mandatory

In December 2015, days before tens of thousands of drones were gifted for the holidays, the FAA adopted rules requiring the registration of drones weighing more than 0.55 pounds prior to operation. This registration requirement impacted only recreational users, as commercial users are already required to register under Part 107. The rule was challenged in Taylor v. Huerta, and on May 19, 2017, the U.S. Court of Appeals for the D.C.
Circuit vacated the rule.[13] The FAA instituted a program to issue refunds, and recreational pilots enjoyed the freedom of flying unregistered drones for the next seven months.

The D.C. Circuit struck down the rule because the FAA lacked statutory authority to issue such a rule for recreational pilots. Section 336 of the FAA Modernization and Reform Act of 2012 states that the "Administrator of the Federal Aviation Administration may not promulgate any rule or regulation regarding a model aircraft."[14] The Court held that the FAA's registration rule "directly violates that clear statutory prohibition" and vacated the rule to the extent it applied to model aircraft.[15] The FAA responded by offering $5 registration fee refunds and the option to have one's information removed from the federal database, while encouraging recreational operators to voluntarily register their drones.

However, in a turn of events, on December 12, 2017, the President signed the National Defense Authorization Act of 2018, which included a provision reinstating the rule:

Restoration Of Rules For Registration And Marking Of Unmanned Aircraft.—The rules adopted by the Administrator of the Federal Aviation Administration in the matter of registration and marking requirements for small unmanned aircraft (FAA-2015-7396; published on December 16, 2015) that were vacated by the United States Court of Appeals for the District of Columbia Circuit in Taylor v. Huerta (No. 15-1495; decided on May 19, 2017) shall be restored to effect on the date of enactment of this Act.[16]

As a result of the Act, both recreational and commercial pilots are now required to register their drones, and can do so on the FAA's website.

C.    UAS and Privacy

1.    Voluntary Best Practices Remain Intact

A 2015 Presidential Memorandum issued by then-President Obama ordered the National Telecommunications and Information Administration ("NTIA") of the U.S. Department of Commerce to create a private-sector engagement process to help develop voluntary best practices for privacy and transparency issues regarding commercial and private drone use.[17] Because Part 107 of Title 14 of the Code of Federal Regulations ("Part 107")[18] does not address privacy, privacy advocates hoped that the NTIA would force the FAA to promulgate privacy regulations.[19] Prior attempts to petition the FAA to consider privacy concerns in its Notice of Proposed Rulemaking ("NPRM") for Part 107 were unsuccessful.[20]

The NTIA issued its voluntary best privacy practices for drones on May 19, 2016.[21] While the final best practices found support from some privacy organizations and most of the commercial drone industry, other privacy groups raised concerns that the best practices neither established nor encouraged binding legal standards.[22] Nonetheless, the best practices offer useful guidelines for companies testing and/or actively conducting drone operations.
2.    Litigation Regarding the FAA's Role in Addressing Privacy

As we discussed in an earlier update, the Electronic Privacy Information Center ("EPIC") challenged the FAA's decision to exclude privacy regulations from Part 107 in an August 2016 petition for review.[23] In 2012, EPIC petitioned the FAA to promulgate privacy regulations applicable to drone use, which the FAA denied in February 2014.[24] EPIC argued that the FAA Modernization and Reform Act of 2012 required the FAA to consider privacy issues in its NPRM.[25] The FAA argued that while the Act directed the FAA to develop a comprehensive plan to safely integrate drones into the national airspace system, privacy considerations went "beyond the scope" of that plan.[26] The D.C. Circuit dismissed EPIC's petition for review on two grounds.[27] First, the Court deemed EPIC's petition "time-barred" because EPIC filed 65 days past the time allotted under 49 U.S.C. § 46110(a).[28] Second, the Court held that the FAA's "conclusion that privacy is beyond the scope of the NPRM" was not a final agency determination subject to judicial review.[29]

After the rule became final, EPIC filed a new petition for review asking the court to vacate Part 107 and remand it to the FAA for further proceedings.[30] In the new petition, which was consolidated with a related case, Taylor v. FAA, No. 16-1302 (D.C. Cir. filed August 29, 2016), EPIC argues that the FAA violated the Act by: (1) refusing to consider "privacy hazards," and (2) refusing to "conduct comprehensive drone rulemaking," which necessarily includes issues related to privacy.[31] The FAA argues that: (1) EPIC lacks standing, (2) the FAA reasonably decided not to address privacy concerns, and (3) even if EPIC has standing, Section 333 of the Act does not require the FAA to promulgate privacy regulations.[32] Judge Merrick Garland, Judge David Sentelle, and Judge A. Raymond Randolph heard oral arguments in the consolidated cases on January 25, 2018.[33] All eyes thus remain on the D.C. Circuit to determine whether the FAA must issue regulations covering privacy concerns raised by increased drone use.

D.    The United States Government Expands Its Use of Drones

Four years after the U.S. Department of Defense ("DoD") issued its 25-year "vision and strategy for the continued development, production, test, training, operation, and sustainment of unmanned [aircraft] systems technology,"[34] the drone defense industry continues to experience rapid growth. A recent market report estimated that commercial and government drone sales will surpass $12 billion by 2021.[35] However, that estimate is likely conservative considering that the DoD allocated almost $5.7 billion to drone acquisition and research in 2017 alone.[36] Likewise, the DoD allocates almost $7 billion to drone technology in its fiscal year 2018 Defense Budget.[37] Additionally, Goldman Sachs forecasted a $70 billion market opportunity for military drones by 2020.[38] According to Goldman Sachs: "Current drone technology has already surpassed manned aircraft in endurance, range, safety and cost efficiency — but research and development is far from over. The next generation of drones will widen the gap between manned and unmanned flight even further, adding greater stealth, sensory, payload, range, autonomous, and communications capabilities."[39] It should thus come as no surprise that organizations developing defense-specific drones can expect increased demand for complete systems and parts in the coming years.
United States Government’s Domestic Use Drones The U.S. government mostly acquires drones for overseas military operations, a trend dating back to the deployment of the Predator drone in post-9/11 conflict territories.[40]  Domestic use of DoD-owned drones remains subject to strict governmental approval, and armed drones are prohibited on U.S. soil.[41]  In February 2015, the Deputy Secretary of Defense issued Policy Memorandum 15-002 entitled “Guidance for the Domestic Use of Unmanned Aircraft Systems.”[42]  Under the policy, the Secretary of Defense must approve all domestic use of DoD-owned UAVs, with one exception—domestic search and rescue missions overseen by the Air Force Rescue Coordination Center.[43]  However, DoD personnel may use drones to surveil U.S. persons where permitted by law and where approved by the Secretary.[44]  The policy expired on February 17, 2018,[45] and it remains to be seen how the Trump administration will handle domestic use of DoD-owned drones and the integration of UAVs into day-to-day civilian operations. E.    Drone Countermeasures In response to the rapid growth of militarized consumer drones, particularly in ISIS-controlled territories,[48] 2017 saw an increased offering of anti-drone technologies in the U.S.[49]  In April 2017, the U.S. Army’s Rapid Equipment Force purchased 50 of Radio Hill Technologies’ “Dronebuster” radar guns.[50]  The Dronebuster uses radio frequency technology to interrupt the control of drones by effectively jamming the control frequency or the GPS signal.[51]  The end-user can overwhelm the drone and deprive its operator of control or cause the drone to “fall out of the sky.”[52]  Handheld radar-type guns like the Dronebuster weigh about five pounds and cost an average of $30,000.[53]  The U.S. military also experimented with the Mobile High-Energy Laser-equipped Stryker vehicle.[54]  Similar to the Dronebuster, the 5 to 10kW laser overwhelms target drones’ control systems with high bursts of energy.[55]  It can shoot down drones 600 meters away, all without making a sound.[56] F.    Drone Safety Studies Making UAS operations commonplace in urban airspace will be a big step in the technological and economic advancement of the U.S.; however, there are obstacles to overcome in ensuring the safe operation of drones in urban areas.  
On April 28, 2017, the Alliance for System Safety of UAS through Research Excellence ("ASSURE") released the results of a study that explored the severity of a UAS collision with people and property on the ground.[57] First, ASSURE determined the most likely impact scenarios by reviewing various operating environments for UAS and determining their likely exposure to people and other manned aircraft.[58] Then the team conducted crash tests and analyzed crash dynamics by measuring kinetic energy transfer.[59] The results revealed that earlier measurements of the danger of collision grossly overestimate the risk of injury from a drone.[60] ASSURE concluded that the DJI Phantom 3 drone has a 0.03% chance of causing a head injury if it falls on a person's head.[61] This is a very low probability considering that blocks of steel or wood of the same weight have a 99% risk of causing a head injury in the same scenario.[62] The disparity is largely due to the fact that the DJI Phantom 3 drone absorbs most of the energy resulting from a collision, so less energy is transferred on impact from the drone than from a block of steel or wood in the same collision.[63]

In fact, there are numerous steps that drone designers and manufacturers can take to reduce the likelihood of injury in the event of a collision.[64] Projectile mass and velocity, as well as the stiffness of the UAS, are the primary drivers of impact damage (see the illustrative calculation below).[65] As such, multi-rotor drones tend to be safer because they fall more slowly due to the drag of the rotors as they fall through the air.[66] The study made clear that blade guards should be a design requirement for drones used in close proximity to people, in order to minimize the lacerations that can result from a collision.[67] Moreover, ASSURE found that the more flexible the structure of the drone, the more energy the drone retains during impact, causing less harm to the person or object struck.[68]

Regarding crashes with manned aircraft, however, the study revealed that the impact of a drone can be much more severe than the impact of a bird of equivalent size and speed.[69] As such, the structural components of a commercial aircraft that allow it to withstand strikes from birds of up to eight pounds are not an appropriate guideline for preventing damage from a UAS strike.[70] The study also examined the dangers associated with the lithium batteries used to power most drones.[71] The major concern is the risk of a battery fire.[72] The study found that typical high-speed impacts cause complete destruction of the battery, eliminating any concerns about battery fires.[73] However, lower-impact crashes, which are mainly associated with take-off and landing, left parts of the battery intact, posing a risk of battery fire.[74]

While the ASSURE study is the first of its kind, it certainly marks the need for more studies that analyze the practical aspects of collisions and how to reduce risk and minimize harm. The hazards associated with commonplace drone operation are many.[75] Analysis of the physical impact of a collision is one aspect of minimizing UAS risks.
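To illustrate why mass and velocity dominate impact severity, the short sketch below computes kinetic energy at impact (KE = 1/2 mv^2) for a drone and for a dense block of equal mass. The mass and velocity figures are illustrative assumptions for this example only, not values taken from the ASSURE study.

```python
# Back-of-the-envelope sketch of kinetic energy at impact (KE = 1/2 * m * v^2).
# The mass and terminal-velocity figures below are illustrative assumptions,
# not values reported by the ASSURE study.

def kinetic_energy_joules(mass_kg: float, velocity_m_s: float) -> float:
    """Kinetic energy of a falling object at the moment of impact."""
    return 0.5 * mass_kg * velocity_m_s ** 2

# A multi-rotor drone falls more slowly than a dense block of the same mass
# because rotor and airframe drag limit its terminal velocity.
drone_ke = kinetic_energy_joules(1.2, 15.0)  # assumed ~1.2 kg at ~15 m/s
block_ke = kinetic_energy_joules(1.2, 40.0)  # same mass, assumed ~40 m/s

print(f"Drone: {drone_ke:.0f} J; dense block: {block_ke:.0f} J")
# The drone's flexible structure also absorbs much of its own energy on
# impact, so the energy actually transferred to a person is lower still.
```

Because energy scales with the square of velocity, the slower-falling drone arrives with only a fraction of the block's energy, which is consistent with the study's finding that drag and structural flexibility drive the disparity in injury risk.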
There is still much work to be done to minimize other collateral risks, such as the risk of technology failures, which range from UAS platform failures to failures of the hardware or communication links controlling the UAS.[76] Environmental hazards, such as the effects of rain, lightning, and other types of weather, remain to be studied.[77] Safeguarding against human error or intentional interference is another aspect of UAS safety that has yet to be studied in detail.[78] Data link spoofing, jamming, or hijacking poses significant safety hazards, particularly as incidents of data breaches become increasingly common.[79] Before the integration of UAS into the national airspace can be fully implemented, industry stakeholders must collaborate to conduct studies that will help inform legislators about what kinds of technological requirements and operational regulations are necessary.

G.    UAS Airspace Integration Pilot Program

In October 2017, the U.S. Department of Transportation ("DOT") announced that it was launching the Unmanned Aircraft Systems Integration Pilot Program.[80] The program, which was established in response to a presidential directive, is meant to accelerate the integration of UAS into the national airspace through the creation of public-private partnerships between UAS operators, governmental entities, and other private stakeholders.[81] The program is designed to establish greater regulatory certainty and stability regarding drone use.[82] After reviewing the applications, DOT will select a minimum of five partnerships, with the goal of collaborating with the selected industry stakeholders to evaluate certain advanced UAS operational concepts, such as night operations, flights beyond the pilot's line of sight, detect-and-avoid technologies, flights over people, counter-UAS security operations, package delivery, the integrity and dependability of data links between pilot and aircraft, and cooperation between local authorities and the FAA in overseeing UAS operations.[83]

One such application was made by the City of Palo Alto, in partnership with the Stanford Blood Center, Stanford Hospital, and Matternet, a private drone company.[84] The City of Palo Alto has proposed the use of drones to deliver units of blood from the Stanford Blood Center to Stanford Hospital, which would involve establishing an approved flight path for drones to transfer the units of blood in urgent situations.[85] Matternet has already tested its drones' capacity for transporting blood and other medical samples in Switzerland.[86] A second project proposed by the City of Palo Alto involves the use of drones to monitor the perimeter of the Palo Alto Airport.[87] This project involves a partnership between the city and Multirotor, a German drone company that has experience working with the German army and the Berlin Police Department to integrate UAS as tools for law enforcement activities.[88]

The creation of the pilot program has given stakeholders the sense that the current administration is supportive of integrating drones into the national airspace. The support of the government has created the potential for unprecedented growth in an industry that could bring lucrative returns to its stakeholders.
The DOT has already received over 2,800 interested party applications.[89] The majority of these applications have come from commercial drone companies, as well as various other stakeholders including energy companies, law enforcement agencies, and insurance providers.[90] The UAS Pilot Program is to last for three years.[91] The projected economic benefit of integrated UAS is estimated at $82 billion, with the potential to create up to 100,000 jobs.[92] Industries that could see immediate returns from the program include precision agriculture, infrastructure inspection and monitoring, photography, commerce, and crisis management.[93] The advent of established, government-sanctioned rules for the operation of UAS will motivate industry stakeholders in both the public and private sectors to push forward with new and innovative ways to use drones.

II.    GOVERNMENT CONTRACTS LITIGATION IN THE AEROSPACE AND DEFENSE INDUSTRY

Gibson Dunn's 2017 Year-End Government Contracts Litigation Update and 2017 Mid-Year Government Contracts Litigation Update cover the waterfront of the most important opinions issued by the U.S. Court of Appeals for the Federal Circuit, U.S. Court of Federal Claims, Armed Services Board of Contract Appeals ("ASBCA"), and Civilian Board of Contract Appeals, among other tribunals. We invite you to review those publications for a full report on case law developments in the government contracts arena. In this update, we summarize key court decisions related to government contracting from 2017 that involve players in the aerospace and defense industry. The cases discussed herein, and in the Government Contracts Litigation Updates referenced above, address a wide range of issues with which government contractors in the aerospace and defense industry are likely familiar.

A.    Select Decisions Related to Government Contractors in the Aerospace and Defense Industry

Technology Systems, Inc., ASBCA No. 59577 (Jan. 12, 2017)

Technology Systems, Inc. ("TSI") held four cost-plus-fixed-fee contracts with the Navy for research and development. Several years into the contracts, the government disallowed expenses that had not been questioned in prior years. TSI appealed to the ASBCA, arguing that it had relied to its detriment on the government's failure to challenge those same expenses in prior years.

The Board (Prouty, A.J.) held that the challenged costs were "largely not allowable" and that "the principle of retroactive disallowance," which it deemed "a theory for challenging audits whose heyday has come and gone," did not apply because the same costs simply had not come up in the prior audits. The theory of retroactive disallowance, first articulated in a Court of Claims case in 1971, prevents the government from challenging costs already incurred when the cost previously had been accepted following final audit of historical costs; the contractor reasonably believed that it would continue to be approved; and the contractor detrimentally relied on the prior acceptance. Tracing the precedent discussing the principle, the Board cited the Federal Circuit's decision in Rumsfeld v. United Technologies Corp., 315 F.3d 1361 (Fed. Cir. 2003), which stated that "affirmative misconduct" on the part of the government would be required for the principle of retroactive disallowance to apply because it is a form of estoppel against the government.
The Board "sum[med] up: there is no way to read our recent precedent or the Federal Circuit's except to include an affirmative misconduct requirement amongst the elements of retroactive disallowance.  Period."  Further, the Board held that the government's failure to challenge the same costs in prior years did not constitute a "course of conduct precluding the government from disallowing the costs in subsequent audits."

Delfasco LLC, ASBCA No. 59153 (Feb. 14, 2017)

Delfasco had a contract with the Army for the manufacture and delivery of a specified number of munition suspension lugs. The Army thereafter exercised an option to double the number of lugs required. When Delfasco stopped making deliveries due to an inability to pay its subcontractor, the Army terminated the contract for default. Delfasco appealed to the ASBCA, asserting that the government had waived its right to terminate for untimely performance by allegedly stringing Delfasco along even after the notice of termination.

The Board (Prouty, A.J.) set out the test for waiver in a case involving termination for default due to late delivery as follows: "(1) failure to terminate within a reasonable time after the default under circumstances indicating forbearance, and (2) reliance by the contractor on the failure to terminate and continued performance by him under the contract with the Government's knowledge and implied or express consent." The Board held that Delfasco failed to satisfy the first prong because the government's show cause letter placed Delfasco on notice that any continued performance would be only for the purpose of mitigating damages. Moreover, Delfasco failed to satisfy the second prong because Delfasco's payment to its subcontractor after the show cause letter would have been owed regardless, and was not made in reliance upon the government's failure to terminate. Therefore, the Board found that the government had not waived its right to terminate, and denied the appeal.

Raytheon Co., ASBCA Nos. 57743 et al. (Apr. 17, 2017)

Raytheon appealed from three final decisions determining that an assortment of costs—including those associated with consultants, lobbyists, a corporate development database, and executive aircraft—were expressly unallowable and thus subject to penalties. After a two-week trial, the Board (Scott, A.J.) sided largely with Raytheon in a wide-ranging decision that covers a number of important cost-principles issues.

First, the Board rejected the government's argument that the consultant costs were expressly unallowable simply because the government was dissatisfied with the level of written detail in the work product submitted to support the costs. Judge Scott noted that written work product is not required to support a consultant's services under FAR 31.205-33(f), particularly where, as here, much of the consultants' work was delivered orally due to the classified nature of the work performed. The Board found that the consultant costs were not only not expressly unallowable, but were indeed allowable. This is a significant ruling because the documentation of consultant costs is a recurring issue, with government auditors frequently making demands concerning the amount of documentation required to support these costs during audits.

Second, the government sought to impose penalties for costs that inadvertently were not withdrawn in accordance with an advance agreement between Raytheon and the government concerning two executive aircraft.
Raytheon agreed that the costs should have been withdrawn and agreed to withdraw them when the error was brought to its attention, but asserted that the costs were not expressly unallowable and subject to penalty. The Board agreed, holding that the advance agreements did not themselves clearly name and state the costs to be unallowable, and further that advance agreements cannot create penalties, because a cost must be named and stated as unallowable in a cost principle (not an advance agreement) to be subject to penalties. This ruling could have significance for future disputes arising out of advance agreements.

Third, the government alleged that costs associated with the design and development of a database to support the operations of Raytheon's Corporate Development office were expressly unallowable organizational costs under FAR 31.205-27. The Board disagreed, validating Raytheon's argument that a significant purpose of the Corporate Development office was allowable generalized long-range management planning under FAR 31.205-12, thus rendering the costs allowable (not expressly unallowable).

The only costs for which the Board denied Raytheon's appeals were the salary costs of government relations personnel engaged in lobbying activities. Raytheon presented evidence that it had a robust process for withdrawing these costs as unallowable under FAR 31.205-22, but inadvertently missed certain costs in this instance due to, among other things, "spreadsheet errors." Raytheon agreed that the costs were unallowable and should be withdrawn, but disputed that the costs of employee compensation (a generally allowable cost) were expressly unallowable, and further argued that the contracting officer should have waived penalties under FAR 42.709-5(c) based on expert evidence that Raytheon's control systems for excluding unallowable costs were "best in class." The Board found that salary costs associated with unallowable lobbying activities are expressly unallowable and that the contracting officer did not abuse his discretion in denying the penalty waiver.

L-3 Comms. Integrated Sys. L.P. v. United States, No. 16-1265C (Fed. Cl. May 31, 2017)

L-3 entered an "undefinitized contractual action" ("UCA") with the Air Force, in which it agreed to provide certain training services while still negotiating the terms of the contract. After the parties failed to reach agreement on the prices for two line items in the UCA, the Air Force issued a unilateral contract modification, setting prices for those line items and definitizing the contract. L-3 argued that the Air Force's price determination was unreasonable, arbitrary and capricious, and in violation of the FAR, and filed suit seeking damages. The government moved to dismiss for lack of subject matter jurisdiction.

The Court of Federal Claims (Kaplan, J.) dismissed L-3's complaint, concurring with the government that L-3 had never presented a certified claim to the contracting officer for payment "of a sum certain to cover the losses it allegedly suffered." The court found that the proposals L-3 had presented to the Air Force were not "claims," but rather proposals made during contract negotiations that did not contain the requisite claim certification language.

Innoventor, Inc., ASBCA No. 59903 (July 11, 2017)

In 2011, the government entered into a fixed-price contract with Innoventor for the design and manufacture of a dynamic brake test stand.
As part of the contract’s purchase specifications, the new design had to undergo and pass certain testing.  After problems arose in the testing process, Innoventor submitted a proposal to modify certain design components and applied for an equitable adjustment due to “instability of expectations.”  The contracting officer denied Innoventor’s request for an equitable adjustment, stating that the government had not issued a modification directing a change that would give rise to such an adjustment.  Innoventor submitted a claim, which the contracting officer denied, and Innoventor appealed. The Board (Sweet, A.J.) held that the government was entitled to judgment as a matter of law because there was no evidence that the government changed Innoventor’s performance requirements, let alone that anyone with authority directed any constructive changes.  Here, the contract was clear that Innoventor’s design had to pass certain tests, and because it failed some of them, and did not perform pursuant to the contract terms, there was no change in the original contract terms that would give rise to a constructive change.  The Board also found that there was no evidence that any person beyond the contracting officer had authority to direct a change because the contract expressly provided that only the contracting officer has authority to change a contract.  Accordingly, the Board denied Innoventor’s appeal. L-3 Commc’ns Integrated Sys., L.P., ASBCA Nos. 60713 et al. (Sept. 27, 2017) L-3 appealed from multiple final decisions asserting government claims for the recovery of purportedly unallowable airfare costs.  Rather than audit and challenge specific airfare costs, the Defense Contract Audit Agency simply applied a 79% “decrement factor” to all of L-3’s international airfare costs over a specified dollar amount, claiming that this was justified based on prior-year audits.  After filing the appeals, L-3 moved to dismiss for lack of jurisdiction on the grounds that the government had failed to provide adequate notice of its claims by failing to identify which specific airfare costs were alleged to be unallowable, as well as the basis for those allegations. The Board (D’Alessandris, A.J.) denied the motion to dismiss, holding that the contracting officer’s final decisions sufficiently stated a claim in that they set forth a sum certain and a basis for such a claim.  The Board held that L-3 had enough information to understand how the government reached its claim, and its contention that this was not a valid basis for the disallowance of costs for the year in dispute went to the merits and not the sufficiency of the final decisions. Scott v. United States, No. 17-471 (Fed. Cl. Oct. 24, 2017) Brian X. Scott brought a pro se claim in the Court of Federal Claims seeking monetary and injunctive relief for alleged harms arising from the Air Force’s handling of his unsolicited proposal for contractual work.  Scott was an Air Force employee who submitted a proposal for countering the threat of a drone strike at the base where he was stationed.  The proposal was rejected, but Scott alleged that portions of the proposal were later partially implemented.  Scott sued, claiming that the Air Force failed properly to review his proposal and that his intellectual property was being misappropriated.  
Scott argued that jurisdiction was proper under the Tucker Act because an implied-in-fact contract arose that prohibited the Air Force from using any data, concept, or idea from his proposal, which was submitted to a contracting officer with a restrictive legend consistent with FAR § 15.608.

The Court of Federal Claims (Lettow, J.) found that it had jurisdiction under the Tucker Act because an implied-in-fact contract was formed when the Air Force became obligated to follow the FAR's regulatory constraints with regard to Scott's proposal. Nevertheless, the Court granted the government's motion to dismiss because Scott's factual allegations, even taken in the light most favorable to him, did not plausibly establish that the government acted unreasonably or failed to properly evaluate his unsolicited proposal, given that the proposal addressed a previously published agency requirement.

III.    COMMERCIAL SPACE SECTOR

A.    Overview of Private Space Launches and Significant Milestones

Space exploration is always fascinating, and 2017 and early 2018 were no exception. Starting off in February 2017, India's Polar Satellite Launch Vehicle launched 104 satellites, setting a record for the number of satellites launched from a single rocket.[101] In June, NASA finally unveiled the 12 chosen candidates for its astronaut program, selected from a record-breaking pool of over 18,000 applicants.[102] A few months later, NASA's Cassini spacecraft was intentionally plunged into Saturn, ending over a decade's worth of service.[103] President Donald Trump also signed Space Policy Directive 1, which instructs NASA to send astronauts back to the moon, a step President Trump noted would help establish a foundation for an eventual mission to Mars.[104]

In what was widely expected to be a record year for private space launches, SpaceX and other private space companies clearly delivered. In 2017, SpaceX, the company founded and run by Elon Musk, flew a record 18 missions utilizing the Falcon 9 rocket.[105] Blue Origin, the company founded by Jeff Bezos, also made significant progress. It launched a new version of its New Shepard vehicle on its first flight, which Bezos hopes will lay the foundation for potential crewed missions.[106] Then, in late December, California startup Made in Space sent a machine designed to make exotic ZBLAN optical fiber to the International Space Station.[107] Without a doubt, 2017 witnessed many significant milestones in space exploration.

Additional milestones have already been reached in early 2018. February 6, 2018 was a historic date for space technology and exploration—SpaceX's Falcon Heavy had its maiden launch. The Falcon Heavy can carry payloads larger than any available commercial rocket, and it has the potential to launch payloads beyond Earth's orbit. In fact, the Falcon Heavy did just that by launching a Tesla Roadster, driven by "Starman," into interplanetary space. Starman will likely continue on its orbit for millions of years. It is only a matter of time until Starman is replaced with astronauts and the destination becomes Mars—SpaceX plans to launch such a mission in 2024.

B.    Update on the Outer Space Treaty and Surrounding Debate

The Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, otherwise known as the Outer Space Treaty, recently celebrated its 50th anniversary.
Signed in 1967 and designed to prevent a new form of colonial competition, the Treaty was lauded for its principal framework on international space law. Indeed, shortly after the Treaty entered into force, the United States and the Soviet Union successfully collaborated on many space missions and exercises.[108]

The Treaty is not complex. Consisting of 17 short articles, the Treaty obligates its signatories to perform space exploration "for the benefit and interest of all countries" and not to "place in orbit around the Earth any objects carrying nuclear weapons or any other kinds of weapons of mass destruction."[109] With the Treaty now in force for over 50 years, there have recently been discussions regarding whether it is ripe for an update. Only about half a decade ago, experts met in Australia to discuss moon-mining of everything from water and fuel to rare minerals in what was then the world's first "Off-Earth Mining Forum."[110] Discussion centered on the legality of such mining under the Treaty. Then, in 2014, NASA accepted applications from companies that desired to mine rare moon minerals in a program called "Lunar Cargo Transportation and Landing by Soft Touchdown."[111] This once again sparked a debate on the legality of such actions, specifically lunar property rights.

In 2017, the focus turned toward private and commercial space flight, spurring conversation as to whether the 50-year-old Treaty needs an update. For one, the Treaty was designed around, and has been entirely focused on, individual countries. Thus, there is an argument that the Treaty does not apply to private appropriation of celestial territory. Second, the dated nature of the Treaty has spawned efforts at tackling the private appropriation issues. For instance, the United States passed the Space Act of 2015, which provides for private commercial "exploration and exploitation of space resources."[112] The Act has incited further debate on the various legal loopholes that inherently afflict the Treaty and its ban on countries owning celestial territory.

Meanwhile, the U.S. government has continued to find methods of regulation, specifically those involving the FAA and the Federal Communications Commission ("FCC"), among others.[113] Now, lawmakers are purportedly discussing legislation that would provide a regulatory framework for private commercial space travel to adhere to the Treaty, as no framework currently exists for the U.S. government to oversee the launch of private space stations.[114]

Moreover, Senator Ted Cruz (R-TX) has been leading the charge on updating the Treaty to address issues related to modern spaceflight, in which private commercial entities are playing an ever-increasing role.[115] In May, Senator Cruz, the chairman of the Subcommittee on Space, Science, and Competitiveness, convened a hearing to "examine U.S. government obligations under the [Treaty]" and to "explore the Treaty's potential impacts on expansion of our nation's commerce and settlement in space."[116] Featuring a panel of legal experts and a panel of commercial space business leaders, the hearing raised a number of different viewpoints with one apparently unifying message: the Treaty should not be amended.
One of the panel members, Peter Marquez, while acknowledging that the Treaty is not perfect, expressed concern that opening up the Treaty to modifications would leave the space industry worse off and would be a detriment to national and international security.[117]

One area of particular interest was Article VI of the Treaty, which provides that nations must authorize and supervise space activities performed by non-governmental entities, such as private commercial space companies. The CEO of Moon Express, Bob Richards, noted that while the Treaty should remain unchanged, the U.S. should adopt a streamlined regulatory procedure and process to make approvals for space activities more efficient and clear.[118] One of the legal experts sitting on the panel, Laura Montgomery, expressed her belief that the U.S. need not further regulate new commercial space activities because a close reading of the Treaty indicates that mining and other similar activities do not require such governmental approvals.[119]

While the ultimate general consensus appeared to be that no change to the Treaty is necessary to accomplish the goals of private commercial space enterprises, the hearing did bring to light the issues that currently confront modern space protocols.

C.    The American Space Commerce Free Enterprise Act of 2017, Which Seeks to Overhaul the U.S. Commercial Space Licensing Regime, Passes Committee but Stalls in House

On June 7, 2017, House members led by Rep. Lamar Smith (R-TX), Chairman of the U.S. House Science, Space, and Technology Committee, introduced H.R. 2809—the American Space Commerce Free Enterprise Act of 2017 ("ASCFEA").[120] The bill, if adopted, would amend Title 51 of the United States Code to liberalize licensing requirements for a variety of commercial space activities, while consolidating the licensing approval process for such activities under the authority of the U.S. Department of Commerce ("DOC").[121]

The regulation of commercial space activities historically has been distributed among a variety of agencies—with the National Oceanic and Atmospheric Administration ("NOAA") governing remote sensing, the FCC governing communications satellites,[122] and the FAA/AST regulating launch, reentry, and some other non-traditional activities.[123] But with that patchwork of authority, proponents of the Act believe, there exists a regulatory gap for overseeing and authorizing new and innovative space activities.[124] A primary goal of the Act is to address this perceived uncertainty and, in so doing, resolve long-standing questions associated with the United States' responsibility to regulate commercial space activities under the Outer Space Treaty,[125] which the bill's text references extensively.

In its current form, the bill would grant the Office of Space Commerce (within the DOC) "the authority to issue certifications to U.S. nationals and nongovernmental entities for the operation of: (1) specified human-made objects manufactured or assembled in outer space . . .
and (2) all items carried on such objects that are intended for use in outer space."[126]  The bill further eliminates NOAA's Commercial Remote Sensing Regulatory Affairs Office and vests authority to issue permits for remote sensing systems, again, in the DOC.[127]  The bill also creates a certification process for other "commercial payloads not otherwise licensed by the government," thereby providing fallback legislation for "non-traditional applications like satellite servicing, commercial space stations and lunar landers."[128]  The DOC would thus hold all regulatory authority for commercial space activities, except for the FCC's and FAA/AST's current authority, which those agencies would retain.[129]

The commercial space industry supports the bill, and in particular the bill's apparent presumption in favor of regulatory approval.[130]  Industry also supports the bill's overhaul of the regulation of remote sensing: for example, the bill requires the DOC to issue a certification decision within just 60 days (or else the application is granted),[131] to explain any rejection, and to grant every application seeking authorization for activities involving the "same or substantially similar capabilities, derived data, products, or services as are already commercially available or reasonably expected to be made available in the next 3 years in the international or domestic marketplace."[132]

Some opponents of the bill contend that consolidating regulatory approval will limit interagency review, which matters because the DoD, the State Department, and the intelligence community currently play a role in reviewing aspects of new commercial space activities perceived to pose potential threats to national security.[133]  Others contend that the Office of Space Commerce has inadequate resources and experience to handle the regulatory approvals; the bill seeks to ameliorate these concerns by authorizing $5 million in funding for the Office in 2018.[134]  The Department of Justice also has voiced some constitutional concerns.[135]

The House referred the bill to the House Committee on Science, Space, and Technology,[136] which on June 8, 2017 passed three amendments by voice vote.[137]  Since being marked up in committee, the bill has seen no further action by the House.[138]  The DOC currently is seeking public input on possible changes to commercial space operations licensing more broadly.[139]
D.    Industry and Government Regulators Call for Changes to NOAA's Licensing of Remote Sensing Technology

ASCFEA's effort to strip NOAA of its authority to regulate remote sensing technology coincides with a growing number of complaints from the remote sensing industry and government regulators concerning NOAA's ability to handle an increased number of licensing applications.[140]

The Land Remote Sensing Policy Act of 1992 authorized the Secretary of Commerce to "license private sector parties to operate private remote sensing space systems."[141]  But despite a sea change in remote sensing technology and activities since 1992, that law remains the main source of authority for remote sensing licensing, and Congress has made few modifications to it since its inception.[142]  Given the speed of technological change and increased industry competition, remote sensing companies are advocating for NOAA to adopt a "permissive" approach to licensing, akin to the language proposed in the ASCFEA.[143]

NOAA's difficulties have been exacerbated by the fact that license applications are now more varied and complex than they were previously.[144]  Representatives from NOAA describe how, prior to 2011, it took an average of 51 days to review license applications, since many applications sought permission for similar satellite-system concepts.[145]  Even though the Land Remote Sensing Policy Act of 1992 calls for a 120-day approval window, in practice reviews now extend far longer than that, and NOAA sometimes provides little to no explanation of why it rejects particular applications.[146]  Under the ASCFEA, by contrast, the DOC would be required to approve applications involving the "same or substantially similar capabilities, derived data, products, or services as are already commercially available or reasonably expected to be made available in the next 3 years in the international or domestic marketplace."[147]

Another complexity is that many companies develop technologies that do not solely or traditionally perform remote sensing functions but nonetheless have remote sensing capabilities.[148]  The ASCFEA addresses this problem by offering exceptions for "De Minimis" uses of remote sensing technology.[150]
E.    Commercial Space Policy in the Trump Era

On December 11, 2017, President Trump signed White House Space Policy Directive 1, entitled "Reinvigorating America's Human Space Exploration Program."[151]  As the title suggests, the Directive's goal is to bring a renewed focus on human space flight at a time when the United States lacks a domestic capability to send American astronauts into low-Earth orbit, let alone beyond.[152]  Fittingly, President Trump signed the Directive on the forty-fifth anniversary of the lunar landing of Apollo 17, with Apollo 17 astronaut and former Senator Harrison Schmitt present at the ceremony.[153]

According to the Directive, the United States will "[l]ead an innovative and sustainable program of exploration with commercial and international partners to enable human expansion across the solar system…."[154]  The Directive calls for missions beyond low-Earth orbit, with the United States "lead[ing] the return of humans to the Moon for long-term exploration and utilization, followed by human missions to Mars and other destinations."[155]

NASA is already working with several commercial entities to develop transportation to and from low-Earth orbit, as well as to the International Space Station.[156]  And a call to return to the moon for use as a stepping-stone to other destinations is not new with President Trump; previous administrations have expressed a similar desire.[157]  What remains to be seen is how this "long-term exploration" will be funded, with a good indicator being what "will be reflected in NASA's Fiscal Year 2019 budget request."[158]  Until then, "No bucks, no Buck Rogers."[159]

F.    Updates on Space Law in Luxembourg, India, and Australia

Luxembourg Continues its Push for Commercial Space Prominence

The small country of Luxembourg, a signatory to the Outer Space Treaty,[160] has major commercial space ambitions.
In 2016, Luxembourg passed a law setting aside €200 million to fund commercial space mining activities, and also offered to help interested companies obtain private financing.[161]  On July 13, 2017, following the United States' lead,[162] Luxembourg passed a law that gives qualifying companies the right to own any space resources they extract from celestial bodies, including asteroids.[163]  The law further outlines a regulatory framework for "the government to authorize and supervise resource extraction and other space activities," except for communications satellites, which a different Luxembourg agency regulates.[164]  To qualify for a space mining license, companies must be centrally administered in Luxembourg, maintain a registered office there, and obtain regulatory approval.[165]  It remains unclear whether the Luxembourg law (like the analogous U.S. law) violates the Outer Space Treaty, which prohibits companies from claiming territory on celestial bodies but does not clarify whether that prohibition extends to materials extracted from those bodies.[166]

India Unveils Draft of New Commercial Space Law; Sets Satellite Launch Record

In November 2017, India's Department of Space released, and sought comments on, the draft "Space Activities Bill, 2017."[167]  The stated goal of the bill is to "encourage enhanced participation of non-governmental/private sector agencies in space activities in India."[168]  The bill as currently drafted vests authority in the Indian Government to formulate a licensing scheme for any and all "Commercial Space Activity," and provides that licenses may be granted if the proposed activity does not jeopardize public health or safety and does not violate India's international treaty obligations, such as those under the Outer Space Treaty, to which India is a signatory.[169]

India's space agency also made headlines this year when it sent 104 satellites into space in 18 minutes, purportedly tripling the prior record for satellites launched in a single day.[170]  The New York Times reports that satellite and other orbital companies closely scrutinized the launch, since India's space agency charges less for satellite launches than its European and North American counterparts.[171]

Australia Announced that It Will Create a Space Agency; Details Pending

In September 2017, Australia's Acting Minister for Industry, Innovation and Science announced that Australia will create a national space agency.[172]  While details are still pending, Australia's goal purportedly is to capture a share of the $300-$400 billion space economy while creating Australian jobs in the process.[173]

IV.    CYBERSECURITY AND PRIVACY ISSUES IN THE NATIONAL AIRSPACE

A.    Cybersecurity Issues

The Federal Aviation Administration (FAA) has lagged behind other sectors in establishing robust cybersecurity and privacy safeguards in the national airspace, even though federal policy identifies the transportation sector (which includes the aviation industry) as one of the 16 "critical infrastructure" sectors with the ability to significantly impact the nation's security, economy, and public health and safety.[174]  The need for the FAA to establish robust safeguards is obvious: the catastrophic impact of a cyber attack on the national airspace is not hard to imagine post-9/11.  Recently, one hacker claimed to have compromised a cabin-based in-flight entertainment system to control a commercial airliner's engine in flight.
One development of note is the reintroduction of the Cybersecurity Standards for Aircraft to Improve Resilience Act of 2017 by U.S. Senators Edward Markey and Richard Blumenthal.[175]  Senator Markey first introduced legislation aimed at improving aircraft cybersecurity protection in April 2016, following a 2015 survey of U.S. airline CEOs intended to identify the standard cybersecurity protocols used by the aviation industry.  If signed into law, the bill would require the U.S. Department of Transportation to work with the DoD, the Department of Homeland Security, the Director of National Intelligence, and the FCC to incorporate cybersecurity requirements into aircraft certification requirements.  Additionally, the bill would establish standard protections for all "entry points" to the electronic systems of aircraft operating in the U.S., including the use of isolation measures to separate critical software systems from noncritical software systems.

B.    UAS Privacy Concerns

UAS are equipped with highly sophisticated surveillance technology capable of collecting personal information, including physical location.  Senator Ayotte, then-Chair of the Subcommittee on Aviation Operations, Safety, and Security, summarized the privacy concerns drones pose as follows: "Unlimited surveillance by government or private actors is not something that our society is ready or willing or should accept.  Because [drones] can significantly lower the threshold for observation, the risk of abuse and the risk of abusive surveillance increases."  We describe below several recent federal and state efforts to address this issue.

1.    State Legislation Addressing Privacy Concerns

At least five of the twenty-one states that either passed legislation or adopted resolutions related to UAS in 2017 specifically addressed privacy concerns.[176]

Colorado HB 1070 requires the center of excellence within the department of public safety to perform a study identifying ways to integrate UAS into local and state government functions relating to firefighting, search and rescue, accident reconstruction, crime scene documentation, emergency management, and emergencies involving significant property loss, injury, or death.  The study must consider privacy concerns, in addition to costs and timeliness of deployment, for each of these uses.

New Jersey SB 3370 allows UAS operation that is consistent with federal law, but also creates criminal offenses for certain UAS surveillance and privacy violations.  For example, using a UAS to conduct surveillance of a correctional facility is a third-degree crime.  Additionally, the law extends restraining-order limitations to the operation of UAS and specifies that convictions under the law are separate from other convictions, such as harassment, stalking, and invasion of privacy.

South Dakota SB 22 also prohibits operation of drones over the grounds of correctional and military facilities, making such operation a class 1 misdemeanor.  Further, the law modifies the crime of unlawful surveillance to include intentionally using a drone to observe, photograph, or record someone in a private place where that person has a reasonable expectation of privacy, as well as landing a drone on an individual's property without that person's consent.  Such unlawful surveillance is a class 1 misdemeanor unless the individual is operating the drone for commercial or agricultural purposes, or is acting within his or her capacity as an emergency management worker.
Utah HB 217 modifies criminal trespass to include drones entering and remaining unlawfully over property with specified intent.  Depending on the intent, a violation is either a class B misdemeanor, a class A misdemeanor, or an infraction, unless the person is operating a UAS for legitimate commercial or educational purposes consistent with FAA regulations.  Utah HB 217 also modifies the offense of voyeurism, a class B misdemeanor, to include the use of any type of technology, including UAS, to secretly record video of a person in certain instances.

Virginia HB 2350 makes it a Class 1 misdemeanor to use UAS to trespass upon the property of another for the purpose of secretly or furtively peeping, spying, or attempting to peep or spy into a dwelling or occupied building located on such property.

2.    UAS Identification and Tracking Report

The FAA chartered an Aviation Rulemaking Committee ("ARC") in June 2017 to provide recommendations on the technologies available for remote identification and tracking of UAS, and on how remote identification might be implemented.[177]  However, the ARC's 213-page final report, dated September 30, 2017, notes that the ARC lacked sufficient time to fully address privacy and data protection concerns, and that those topics therefore were not addressed:

[T]he ARC also lacks sufficient time to perform an exhaustive analysis of all the privacy implications of remote ID, tracking, or UTM, and did not specifically engage with privacy experts, from industry or otherwise, during this ARC.  These members agree, however, that it is fundamentally important that privacy be fully considered and that appropriate privacy protections are in place before data collection and sharing by any party (either through remote ID and/or UTM) is required for operations.  A non-exhaustive list of important privacy considerations include, amongst other issues, any data collection, retention, sharing, use and access.  Privacy must be considered with regard to both PII and historical tracking information.  The privacy of all individuals (including operators and customers) should be addressed, and privacy should be a consideration during the rulemaking for remote ID and tracking.

Accordingly, the ARC recognizes the fundamental importance of fully addressing privacy and data protection concerns, and we anticipate that future rulemaking will address these issues.

V.    CONCLUSION

We will continue to keep you informed on these and other related issues as they develop.

[1] See Huerta, No. 3:16-cv-358, Dkt. No. 30. [2] Id. [3] Id. [4] See Boggs, No. 3:16-cv-00006, Dkt. No. 1 (W.D. Ky. Jan. 4, 2016). [5] See id. [6] See Boggs, No. 3:16-cv-00006, Dkt. No. 20 (W.D. Ky. Jan. 4, 2016). [7] See id. [8] See Singer, No. 1:17-cv-10071, Dkt. No. 63 (D. Mass. Jan. 17, 2017). [9] See id. [10] See id. [11] See id. [12] See id. [13] See Taylor v. Huerta, 856 F.3d 1089 (D.C. Cir. 2017). [14] See Pub. L. No. 112–95, § 336(a), 126 Stat. 11, 77 (2012) (codified at 49 U.S.C. § 40101 note). [15] See Taylor, 856 F.3d at 1090. [16] See Pub. L. No. 115–91, § 1092(d) (2017). [17] The White House, Office of the Press Secretary, Presidential Memorandum: Promoting Economic Competitiveness While Safeguarding Privacy, Civil Rights, and Civil Liberties in Domestic Use of Unmanned Aircraft Systems, Feb. 15, 2015, available at https://obamawhitehouse.archives.gov/the-press-office/2015/02/15/presidential-memorandum-promoting-economic-competitiveness-while-safegua.
[18] Operation and Certification of Small Unmanned Aircraft Systems, 81 Fed. Reg. 42064 (June 28, 2016). [19] Electronic Privacy Information Center ("EPIC"), EPIC v. FAA: Challenging the FAA's Failure to Establish Drone Privacy Rules, https://epic.org/privacy/litigation/apa/faa/drones/ (last visited Jan. 18, 2018). [20] See generally Electronic Privacy Information Center v. FAA (EPIC I), 821 F.3d 39, 41-42 (D.C. Cir. 2016) (noting that FAA denied EPIC's petition for rulemaking requesting that the FAA consider privacy concerns). [21] Voluntary Best Practices for UAS Privacy, Transparency, and Accountability, NTIA-Convened Multistakeholder Process (May 18, 2016), https://www.ntia.doc.gov/files/ntia/publications/uas_privacy_best_practices_6-21-16.pdf. [22] EPIC, supra note 19. [23] EPIC I, supra note 20, at 41. [24] Id. at 41-42. [25] Id. [26] Id. [27] Id. at 42-43. [28] Id. at 42. [29] Id. at 43. [30] Pet. for Review, Electronic Privacy Information Center v. FAA (EPIC II), Nos. 16-1297, 16-1302 (Filed Aug. 22, 2016), https://epic.org/privacy/litigation/apa/faa/drones/EPIC-Petition-08222016.pdf. [31] Appellant Opening Br., EPIC II, Nos. 16-1297, 16-1302 (Filed Feb. 28, 2017), https://epic.org/privacy/litigation/apa/faa/drones/1663292-EPIC-Brief.pdf. [32] Appellee Reply Br., EPIC II, Nos. 16-1297, 16-1302 (Filed Apr. 27, 2017), https://epic.org/privacy/litigation/apa/faa/drones/1673002-FAA-Reply-Brief.pdf. [33] United States Court of Appeals for the District of Columbia Circuit, Oral Argument Calendar, https://www.cadc.uscourts.gov/internet/sixtyday.nsf/fullcalendar?OpenView&count=1000 (last visited Jan. 18, 2018). [34] United States Department of Defense, Unmanned Systems Integrated Roadmap (2013), https://dod.defense.gov/Portals/1/Documents/pubs/DOD-USRM-2013.pdf. [35] Andrew Meola, Drone Market Shows Positive Outlook with Strong Industry Growth and Trends, Business Insider, July 13, 2017, available at http://www.businessinsider.com/drone-industry-analysis-market-trends-growth-forecasts-2017-7. [36] Office of the Under Secretary of Defense, U.S. Department of Defense Fiscal Year 2017 Budget Request (Feb. 2016). [37] Office of the Under Secretary of Defense, U.S. Department of Defense Fiscal Year 2018 Budget Request (May 2017). [38] Goldman Sachs, Drones: Reporting for Work, http://www.goldmansachs.com/our-thinking/technology-driving-innovation/drones/ (last visited Jan. 18, 2018). [39] Id. [40] Chris Woods, The Story of America's Very First Drone Strike, The Atlantic, May 30, 2015, available at https://www.theatlantic.com/international/archive/2015/05/america-first-drone-strike-afghanistan/394463/. [41] Deputy Secretary of Defense, Policy Memorandum 15-002, "Guidance for the Domestic Use of Unmanned Aircraft Systems" (Feb. 17, 2015), https://dod.defense.gov/Portals/1/Documents/Policy%20Memorandum%2015-002%20_Guidance%20for%20the%20Domestic%20Use%20of%20Unmanned%20Aircraft%20Systems_.pdf. [42] Id. [43] Id. [44] Id. [45] Id. [47] Id. [48] Eric Schmitt, Pentagon Tests Lasers and Nets to Combat Vexing Foe: ISIS Drones, N.Y. Times, Sept. 23, 2017, available at https://www.nytimes.com/2017/09/23/world/middleeast/isis-drones-pentagon-experiments.html. [49] Id. [50] Christopher Woody, The Pentagon is Getting Better at Stopping Enemy Drones—and Testing Its Own for Delivering Gear to the Battlefield, Business Insider, Apr. 24, 2017, available at https://www.businessinsider.com/military-adding-drones-and-drone-defense-to-its-arsenal-2017-4. [51] Id.
[52] Radio Hill Technology, Birth of the Dronebuster, http://www.radiohill.com/product/ (last visited Jan. 18, 2018). [53] Id. [54] Kyle Mizokami, The Army's Drone-Killing Lasers are Getting a Tenfold Power Boost, Popular Mechanics, July 18, 2017, available at http://www.popularmechanics.com/military/research/news/a27381/us-army-drone-killing-laser-power/. [55] Sydney J. Freedberg Jr., Drone Killing Laser Stars in Army Field Test, Breaking Defense, May 11, 2017, available at https://breakingdefense.com/2017/05/drone-killing-laser-stars-in-army-field-test/. [56] Mizokami, supra note 54. [57] ASSURE, UAS Ground Collision Severity Evaluation Final Report, United States (2017), available at http://www.assureuas.org/projects/deliverables/sUASGroundCollisionReport.php?Code=230 (ASSURE Study). [58] Id. [59] Id. [60] Id. [61] DJI, DJI Welcomes FAA-Commissioned Report Analyzing Drone Safety Near People, Newsroom News, Apr. 28, 2017, available at https://www.dji.com/newsroom/news/dji-welcomes-faa-commissioned-report-analyzing-drone-safety-near-people. [62] Id. [63] Id. [64] ASSURE Study, supra note 57. [65] Id. [66] Id. [67] Id. [68] Id. [69] ASSURE, FAA and ASSURE Announce Results of Air-to-Air Collision Study, ASSURE: Alliance for System Safety of UAS through Research Excellence, Nov. 27, 2017, available at https://pr.cirlot.com/faa-and-assure-announce-results-of-air-to-air-collision-study/. [70] Id. [71] ASSURE Study, supra note 57. [72] Id. [73] Id. [74] Id. [75] See Pathiyil et al., Issues of Safety and Risk Management for Unmanned Aircraft Operations in Urban Airspace, 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), Oct. 3, 2017, available at http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8101671. [76] Id. [77] Id. [78] Id. [79] Id. [80] Patrick C. Miller, 2,800 Interested Parties Apply for UAS Integration Pilot Program, UAS Magazine, Jan. 3, 2018, available at http://www.uasmagazine.com/articles/1801/2-800-interested-parties-apply-for-uas-integration-pilot-program. [81] Unmanned Aircraft Systems Integration Pilot Program, 82 Fed. Reg. 50,301 (Oct. 25, 2017) (Presidential directive creating the program); see also Unmanned Aircraft Systems Integration Pilot Program—Announcement of Establishment of Program and Request for Applications, 82 Fed. Reg. 215 (Nov. 8, 2017) (Department of Transportation Notice of the UAS Pilot Program). [82] See id. [83] See id. [84] Elaine Goodman, Blood Deliveries by Drone Proposed—City Submits Unique Ideas to FAA, Daily Post, Jan. 5, 2018, available at http://padailypost.com/2018/01/05/blood-deliveries-by-drone-proposed-city-submits-unique-ideas-to-faa/. [85] Id. [86] Id. [87] Id. [88] Id. [89] Miller, supra note 80. [90] Id. [91] Id. [92] Id. [93] Id. [101] NASA Spaceflight, India's PSLV deploys a record 104 satellites (Feb. 14, 2017), available at https://www.nasaspaceflight.com/2017/02/indias-pslv-record-104-satellites/. [102] NASA, NASA's Newest Astronaut Recruits to Conduct Research off the Earth, For the Earth and Deep Space Missions (June 7, 2017), available at https://www.nasa.gov/press-release/nasa-s-newest-astronaut-recruits-to-conduct-research-off-the-earth-for-the-earth-and. [103] NASA, Cassini Spacecraft Ends Its Historic Exploration of Saturn (Sept. 15, 2017), available at https://www.nasa.gov/press-release/nasa-s-cassini-spacecraft-ends-its-historic-exploration-of-saturn. [104] NASA, New Space Policy Directive Calls for Human Expansion Across Solar System (Dec.
11, 2017), available at https://www.nasa.gov/press-release/new-space-policy-directive-calls-for-human-expansion-across-solar-system. [105] TechCrunch, SpaceX caps a record year with 18th successful launch of 2017 (Dec. 22, 2017), available at https://techcrunch.com/2017/12/22/spacex-caps-a-record-year-with-18th-successful-launch-of-2017/. [106] The Verge, After a year away from test flights, Blue Origin launches and lands its rocket again (Dec. 12, 2017), available at https://www.theverge.com/2017/12/12/16759934/blue-origin-new-shepard-test-flight-launch-landing. [107] Space.com, SpaceX Launches (and Lands) Used Rocket on Historic NASA Cargo Mission (Dec. 15, 2017), available at https://www.space.com/39063-spacex-launches-used-rocket-dragon-spacecraft-for-nasa.html. [108] U.S. Department of State, Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, available at https://www.state.gov/t/isn/5181.htm#treaty. [109] NTI, Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies (Outer Space Treaty) (Feb. 1, 2017), available at http://www.nti.org/learn/treaties-and-regimes/treaty-principles-governing-activities-states-exploration-and-use-outer-space-including-moon-and-other-celestial-bodies-outer-space-treaty/. [110] PHYS.ORG, Space likely for rare earth search, scientists say (Feb. 20, 2013), available at https://phys.org/news/2013-02-space-rare-earths-scientists.html. [111] NASA, Lunar CATALYST (Jan. 16, 2014), available at https://www.nasa.gov/content/lunar-catalyst/#.WmLx1qinGHs. [112] The Conversation, The Outer Space Treaty has been remarkably successful – but is it fit for the modern age? (Jan. 27, 2017), available at http://theconversation.com/the-outer-space-treaty-has-been-remarkably-successful-but-is-it-fit-for-the-modern-age-71381. [113] The Verge, How an international treaty signed 50 years ago became the backbone for space law (Jan. 27, 2017), available at https://www.theverge.com/2017/1/27/14398492/outer-space-treaty-50-anniversary-exploration-guidelines. [114] Id. [115] The Space Review, Is it time to update the Outer Space Treaty? (June 5, 2017), available at http://www.thespacereview.com/article/3256/1. [116] U.S. Senate, Reopening the American Frontier: Exploring How the Outer Space Treaty Will Impact American Commerce and Settlement in Space (May 23, 2017), available at https://www.commerce.senate.gov/public/index.cfm/hearings?ID=5A91CD95-CDA5-46F2-8E18-2D2DFCAE4355. [117] The Space Review, supra note 115. [118] Id. [119] Id. [120] H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809. The other primary sponsors of the bill are Rep. Brian Babin (R-TX), chairman of the space subcommittee, and Rep. Jim Bridenstine (R-OK). [121] Sandy Mazza, Space exploration regulations need overhaul, new report says, Daily Breeze (Dec. 2, 2017), https://www.dailybreeze.com/2017/12/02/space-exploration-regulations-need-overhaul-new-report-says/. The Act's stated purpose is to "provide greater transparency, greater efficiency, and less administrative burden for nongovernmental entities of the United States seeking to conduct space activities." H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809 (Section 2(c)).
[122] Jeff Foust, House bill seeks to streamline oversight of commercial space activities, Space News (June 8, 2017), http://spacenews.com/house-bill-seeks-to-streamline-oversight-of-commercial-space-activities/. [123] Marcia Smith, New Commercial Space Bill Clears House Committee, Space Policy Online (June 8, 2017), https://spacepolicyonline.com/news/new-commercial-space-bill-clears-house-committee/. [124] Under the Obama administration, many in government and industry presumed that the regulation of new space activities would fall to the FAA/AST. See Marcia Smith, New Commercial Space Bill Clears House Committee, Space Policy Online (June 8, 2017), https://spacepolicyonline.com/news/new-commercial-space-bill-clears-house-committee/ (in fact, the agency heads of the FAA/AST and the Office of Science and Technology Policy recommended the same). [125] Marcia Smith, Companies Agree FAA Best Agency to Regulate Non-Traditional Space Activities, Space Policy Online (Nov. 15, 2017), https://spacepolicyonline.com/news/companies-agree-faa-best-agency-to-regulate-non-traditional-space-activities/. [126] H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809. [127] Id. [128] Jeff Foust, House bill seeks to streamline oversight of commercial space activities, Space News (June 8, 2017), http://spacenews.com/house-bill-seeks-to-streamline-oversight-of-commercial-space-activities/. [129] Marcia Smith, New Commercial Space Bill Clears House Committee, Space Policy Online (June 8, 2017), https://spacepolicyonline.com/news/new-commercial-space-bill-clears-house-committee/. [130] Marcia Smith, New Commercial Space Bill Clears House Committee, Space Policy Online (June 8, 2017), https://spacepolicyonline.com/news/new-commercial-space-bill-clears-house-committee/; Marcia Smith, Companies Agree FAA Best Agency to Regulate Non-Traditional Space Activities, Space Policy Online (Nov. 15, 2017), https://spacepolicyonline.com/news/companies-agree-faa-best-agency-to-regulate-non-traditional-space-activities/. The bill, for example, requires the Secretary of Commerce to issue certifications or permits for commercial space activities unless, for example, the Secretary finds by "clear and convincing evidence" that the permit would violate the Outer Space Treaty. Bob Zimmerman, What You Need To Know About The Space Law Congress Is Considering, The Federalist (July 11, 2017), http://thefederalist.com/2017/07/11/need-know-space-law-congress-considering/. Indeed, the policy section of the bill finds that "United States citizens and entities are free to explore and use space, including the utilization of outer space and resources contained therein, without conditions or limitations" and that "this freedom is only to be limited when necessary to assure United States national security interests are met" or to fulfill treaty obligations. H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809. [131] Jeff Foust, House bill seeks to streamline oversight of commercial space activities, Space News (June 8, 2017), http://spacenews.com/house-bill-seeks-to-streamline-oversight-of-commercial-space-activities/. [132] Joshua Hampson, The American Space Commerce Free Enterprise Act, Niskanen Center (June 15, 2017), https://niskanencenter.org/blog/american-space-commerce-free-enterprise-act/.
[133] Jeff Foust, House bill seeks to streamline oversight of commercial space activities, Space News (June 8, 2017), http://spacenews.com/house-bill-seeks-to-streamline-oversight-of-commercial-space-activities/. [134] Jeff Foust, House bill seeks to streamline oversight of commercial space activities, Space News (June 8, 2017), http://spacenews.com/house-bill-seeks-to-streamline-oversight-of-commercial-space-activities/; Congressional Budget Office Cost Estimate, Congressional Budget Office (July 7, 2017), https://www.cbo.gov/system/files/115th-congress-2017-2018/costestimate/hr2809.pdf. [135] Samuel R. Ramer, Letter from the Office of the Assistant Attorney General, Justice Department (July 17, 2017), https://www.justice.gov/ola/page/file/995646/download. [136] H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809/all-actions. [137] Marcia Smith, New Commercial Space Bill Clears House Committee, Space Policy Online (June 8, 2017), https://spacepolicyonline.com/news/new-commercial-space-bill-clears-house-committee/. [138] Jeffrey Hill, Congressman Babin Hints that Cybersecurity Could be Included in Larger Commercial Space Legislative Package, Satellite Today (Nov. 7, 2017), http://www.satellitetoday.com/government/2017/11/07/cybersecurity-featured-space-commerce-act/. [139] Commerce Department Now Accepting Public Inputs on Regulatory Streamlining, Space Commerce (Oct. 27, 2017), http://www.space.commerce.gov/commerce-department-now-accepting-public-inputs-on-regulatory-streamlining/; Sandy Mazza, Space exploration regulations need overhaul, new report says, Daily Breeze (Dec. 2, 2017), https://www.dailybreeze.com/2017/12/02/space-exploration-regulations-need-overhaul-new-report-says/. [140] Sean Kelly, The new national security strategy prioritizes space, The Hill (Jan. 3, 2018), http://thehill.com/opinion/national-security/367240-the-new-national-security-strategy-prioritizes-space; Jeff Foust, House panel criticizes commercial remote sensing licensing, Space News (Sept. 8, 2016), http://spacenews.com/house-panel-criticizes-commercial-remote-sensing-licensing/. Critics argue that NOAA's approval pace is harming U.S. companies to the benefit of foreign competitors. Randy Showstack, Remote Sensing Regulations Come Under Congressional Scrutiny, EOS (Sept. 14, 2016), https://eos.org/articles/remote-sensing-regulations-come-under-congressional-scrutiny. [141] H.R. 6133, 102nd Cong. (1992), available at https://www.congress.gov/bill/102nd-congress/house-bill/6133. [142] Randy Showstack, Remote Sensing Regulations Come Under Congressional Scrutiny, EOS (Sept. 14, 2016), https://eos.org/articles/remote-sensing-regulations-come-under-congressional-scrutiny. Indeed, the Commercial Space Launch Competitiveness Act, signed into law in November 2015, requires the Department of Commerce to analyze possible statutory updates to the remote sensing licensing scheme. Jeff Foust, House panel criticizes commercial remote sensing licensing, Space News (Sept. 8, 2016), http://spacenews.com/house-panel-criticizes-commercial-remote-sensing-licensing/. The text of the ASCFEA also recognizes that since "the passage of the Land Remote Sensing Policy Act of 1992, the National Oceanic and Atmospheric Administration's Office of Commercial Remote Sensing has experienced a significant increase in applications for private remote sensing space system licenses . . ." H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809.
[143] Joshua Hampson, The American Space Commerce Free Enterprise Act, Niskanen Center (June 15, 2017), https://niskanencenter.org/blog/american-space-commerce-free-enterprise-act/. The ASCFEA defines a Space-Based Remote Sensing System as a space object in Earth orbit that is "(A) designed to image the Earth; or (B) capable of imaging a space object in Earth orbit operated by the Federal Government." H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809. [144] Jeff Foust, Commercial remote sensing companies seek streamlined regulations, Space News (Mar. 17, 2017), http://spacenews.com/commercial-remote-sensing-companies-seek-streamlined-regulations/. [145] Id. [146] Jeff Foust, House panel criticizes commercial remote sensing licensing, Space News (Sept. 8, 2016), http://spacenews.com/house-panel-criticizes-commercial-remote-sensing-licensing/. [147] H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809 (Chapter 8012 § 80202(e)(1)). [148] Jeff Foust, Commercial remote sensing companies seek streamlined regulations, Space News (Mar. 17, 2017), http://spacenews.com/commercial-remote-sensing-companies-seek-streamlined-regulations/. [150] H.R. 2809, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/2809 (Chapter 802 § 80201(d)). [151] Reinvigorating America's Human Space Exploration Program, 82 Fed. Reg. 59501 (Dec. 11, 2017). [152] Nell Greenfieldboyce, President Trump Is Sending NASA Back to the Moon (Dec. 11, 2017), available at https://www.npr.org/sections/thetwo-way/2017/12/11/569936446/president-trump-is-sending-nasa-back-to-the-moon. [153] See Press Release, NASA, New Space Policy Directive Calls for Human Expansion Across Solar System (Dec. 11, 2017); see also NASA, https://www.nasa.gov/mission_pages/apollo/missions/apollo17.html (last visited Jan. 21, 2018). [154] Reinvigorating America's Human Space Exploration Program, supra note 151. [155] Id. [156] NASA, Commercial Crew Program – The Essentials, available at https://www.nasa.gov/content/commercial-crew-program-the-essentials/#.VjOJ3berRaT. [157] Michael Sheetz, Trump Orders NASA to Send American Astronauts to the Moon, Mars, CNBC (Dec. 11, 2017), available at https://www.cnbc.com/2017/12/11/trump-orders-nasa-to-send-american-astronauts-to-the-moon-mars.html. [158] See New Space Policy Directive Calls for Human Expansion Across Solar System, supra note 104; see also Christian Davenport, Trump Vows Americans Will Return to the Moon. The Question Is How?, The Washington Post (Dec. 11, 2017), available at https://www.washingtonpost.com/news/the-switch/wp/2017/12/11/trump-vows-americans-will-return-to-the-moon-the-question-is-how/?utm_term=.4ceb20131cdf. [159] The Right Stuff (The Ladd Company 1983). [160] Laurent Thailly and Fiona Schneider, Luxembourg set to become Europe's commercial space exploration hub with new Space law, Ogier (Jan. 8, 2017), https://www.ogier.com/news/the-luxembourg-space-law. [161] Reuters Staff, Luxembourg sets aside 200 million euros to fund space mining ventures, Reuters (June 3, 2016), https://www.reuters.com/article/us-luxembourg-space-mining/luxembourg-sets-aside-200-million-euros-to-fund-space-mining-ventures-idUSKCN0YP22H; Laurent Thailly and Fiona Schneider, Luxembourg set to become Europe's commercial space exploration hub with new Space law, Ogier (Jan. 8, 2017), https://www.ogier.com/news/the-luxembourg-space-law. Luxembourg invested €23 million in U.S.
company Planetary Resources, and now owns a 10% share in the company.  Kenneth Chang, If no one owns the moon, can anyone make money up there?, The Independent (Dec. 4, 2017), http://www.independent.co.uk/news/long_reads/if-no-one-owns-the-moon-can-anyone-make-money-up-there-space-astronomy-a8087126.html. [162] In 2015, the U.S. passed the Commercial Space Launch Competitiveness Act, which clarified that companies that extract materials from celestial bodies can own those materials.  Andrew Silver, Luxembourg passes first EU space mining law. One can possess the Spice, The Register (July 14, 2017), https://www.theregister.co.uk/2017/07/14/luxembourg_passes_space_mining_law/. [163] Jeff Foust, Luxembourg adopts space resources law, Space News (July 17, 2017), http://spacenews.com/luxembourg-adopts-space-resources-law/. [164] Jeff Foust, Luxembourg adopts space resources law, Space News (July 17, 2017), http://spacenews.com/luxembourg-adopts-space-resources-law;  Paul Zenners, Press Release, Space Resources (July 13, 2017), http://www.spaceresources.public.lu/content/dam/spaceresources/press-release/2017/2017_07_13%20PressRelease_Law_Space_Resources_EN.pdf. [165] Laurent Thailly and Fiona Schneider, Luxembourg set to become Europe’s commercial space exploration hub with new Space law, Ogier (Jan. 8, 2017), https://www.ogier.com/news/the-luxembourg-space-law.  Reportedly, two American companies already plan to move to Luxembourg:  Deep Space Industries and Planetary Resources. Vasudevan Mukunth, Fiat Luxembourg: How a Tiny European Nation is Leading the Evolution of Space Law, The Wire (July 15, 2017), https://thewire.in/157687/luxembourg-space-asteroid-mining-dsi/. [166] Andrew Silver, Luxembourg passes first EU space mining law. One can possess the Spice, The Register (July 14, 2017), https://www.theregister.co.uk/2017/07/14/luxembourg_passes_space_mining_law/;  Mark Kaufman, Luxembourg’s Asteroid Mining is Legal Says Space Law Expert, inverse.com (Aug. 1, 2017), https://www.inverse.com/article/34935-luxembourg-s-asteroid-mining-is-legal-says-space-law-expert. [167] Antariksh Bhavan, Seeking comments on Draft “Space Activities Bill, 2017” from the stake holders/public-regarding, ISRO (Nov. 21, 2017), https://www.isro.gov.in/update/21-nov-2017/seeking-comments-draft-space-activities-bill-2017-stake-holders-public-regarding;  Special Correspondent, Govt. unveils draft of law to regulate space sector, The Hindu (Nov. 22, 2017), http://www.thehindu.com/sci-tech/science/govt-unveils-draft-of-law-to-regulate-space-sector/article20629386.ece;  Raghu Krishnan & T E Narasimhan, Draft space law gives private firms a grip on rocket, satellite making, Business Standard (Nov. 22, 2017), http://www.business-standard.com/article/economy-policy/draft-space-law-gives-private-firms-a-grip-on-rocket-satellite-making-117112101234_1.html. [168] Antariksh Bhavan, Seeking comments on Draft “Space Activities Bill, 2017” from the stake holders/public-regarding, ISRO (Nov. 21, 2017), https://www.isro.gov.in/update/21-nov-2017/seeking-comments-draft-space-activities-bill-2017-stake-holders-public-regarding. [169] Id. [170] Ellen Barry, India Launches 104 Satellites From a Single Rocket, Ramping Up a Space Race, The New York Times (Feb. 15, 2017), https://www.nytimes.com/2017/02/15/world/asia/india-satellites-rocket.html. [171] Id. [172] Yes, Australia will have a space agency. What does this mean? Experts respond, The Conversation (Sept. 
25, 2017), http://theconversation.com/yes-australia-will-have-a-space-agency-what-does-this-mean-experts-respond-84588;  Jordan Chong, Better late than never, Australia heads (back) to space, Australian Aviation (Dec. 29, 2017), http://australianaviation.com.au/2017/12/better-late-than-never-australia-heads-back-to-space/. [173] Andrew Griffin, Australia launches brand new space agency in attempt to flee the Earth, The Independent (Sept. 25, 2017), http://www.independent.co.uk/news/science/australia-space-agency-nasa-earth-roscosmos-malcolm-turnbull-economy-a7966751.html;  Henry Belot, Australian space agency to employ thousands and tap $420b industry, Government says, ABC (Sept. 25, 2017), http://www.abc.net.au/news/2017-09-25/government-to-establish-national-space-agency/8980268. [174]   White House, Critical Infrastructure Security and Resilience, Presidential Policy Directive/PPD-21 (Feb. 12, 2013). [175]   Woodrow Bellamy III, Senators Reintroduce Aircraft Cyber Security Legislation, Aviation Today (Mar. 24, 2017), http://www.aviationtoday.com/2017/03/24/senators-reintroduce-aircraft-cyber-security-legislation/. [176]   The eighteen states that passed UAS legislation in 2017 were Colorado, Connecticut, Florida, Georgia, Indiana, Kentucky, Louisiana, Minnesota, Montana, Nevada, New Jersey, North Carolina, Oregon, South Dakota, Texas, Utah, Virginia and Wyoming. The three states that passed resolutions related to UAS were Alaska, North Dakota and Utah. [177]   Under Section 2202 of the FAA Extension, Safety, and Security Act of 2016, Pub. L. 114-190, Congress directed the FAA to convene industry stakeholders to facilitate the development of consensus standards for identifying operators and UAS owners.  The final report identifies the following as the ARC’s stated objectives: The stated objectives of the ARC charter were: to identify, categorize and recommend available and emerging technology for the remote identification and tracking of UAS; to identify the requirements for meeting the security and public safety needs of the law enforcement, homeland defense, and national security communities for the remote identification and tracking of UAS; and to evaluate the feasibility and affordability of available technical solutions, and determine how well those technologies address the needs of the law enforcement and air traffic control communities. The final ARC report is available at: https://www.faa.gov/regulations_policies/rulemaking/committees/documents/media/UAS%20ID%20ARC%20Final%20Report%20with%20Appendices.pdf. Gibson Dunn lawyers are available to assist in addressing any questions you may have regarding the issues discussed above. Please contact the Gibson Dunn lawyer with whom you usually work, any member of the Aerospace and Related Technologies industry group, or any of the following: Washington, D.C. Karen L. Manos – Co-Chair (+1 202-955-8536, kmanos@gibsondunn.com) Lindsay M. Paulin (+1 202-887-3701, lpaulin@gibsondunn.com) Erin N. Rankin (+1 202-955-8246, erankin@gibsondunn.com) Christopher T. Timura (+1 202-887-3690, ctimura@gibsondunn.com) Justin P. Accomando (+1 202-887-3796, jaccomando@gibsondunn.com) Brian M. Lipshutz (+1 202-887-3514, blipshutz@gibsondunn.com) Melinda R. Biancuzzo (+1 202-887-3724, mbiancuzzo@gibsondunn.com) New York David M. Wilf – Co-Chair (+1 212-351-4027, dwilf@gibsondunn.com) Alexander H. Southwell (+1 212-351-3981, asouthwell@gibsondunn.com) Nicolas H.R. 
Dumont (+1 212-351-3837, ndumont@gibsondunn.com) Eun Sung Lim (+1 212-351-2483, elim@gibsondunn.com) Los Angeles William J. Peters – Co-Chair (+1 213-229-7515, wpeters@gibsondunn.com) David A. Battaglia (+1 213-229-7380, dbattaglia@gibsondunn.com) Perlette M. Jura (+1 213-229-7121, pjura@gibsondunn.com) Eric D. Vandevelde (+1 213-229-7186, evandevelde@gibsondunn.com) Matthew B. Dubeck (+1 213-229-7622, mdubeck@gibsondunn.com) Lauren M. Fischer (+1 213-229-7983, lfischer@gibsondunn.com) Dhananjay S. Manthripragada (+1 213-229-7366, dmanthripragada@gibsondunn.com) James A. Santiago (+1 213-229-7929, jsantiago@gibsondunn.com) Denver Jared Greenberg (+1 303-298-5707, jgreenberg@gibsondunn.com) London Mitri J. Najjar (+44 (0)20 7071 4262, mnajjar@gibsondunn.com) Orange County Casper J. Yen (+1 949-451-4105, cyen@gibsondunn.com) Rustin K. Mangum (+1 949-451-4069, rmangum@gibsondunn.com) Sydney Sherman (+1 949-451-3804, ssherman@gibsondunn.com) Paris Ahmed Baladi (+33 (0)1 56 43 13 00, abaladi@gibsondunn.com) San Francisco Kristin A. Linsley (+1 415-393-8395, klinsley@gibsondunn.com) Matthew Reagan (+1 415-393-8314, mreagan@gibsondunn.com) © 2018 Gibson, Dunn & Crutcher LLP Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

September 14, 2017 |
Accelerating Progress Toward a Long-Awaited Federal Regulatory Framework for Autonomous Vehicles in the United States

House Passes the SELF DRIVE Act, Consideration in Senate Committee Hearing on Including Large Commercial Autonomous Vehicles, and New Department of Transportation Guidelines

Autonomous vehicles have operated for some time under a patchwork of state and local rules with limited federal oversight, but the last two weeks have seen a number of interesting legal developments toward a national regulatory framework.  The accelerated pace of policy proposals, and the debate surrounding them, is set to continue as automakers request that more autonomous vehicles be put on the road for testing.  While it seems likely that the federal government will step into a leading role by passing initial legislation later this year or early next, autonomous vehicles operating on public roads are likely to remain subject to both federal and state regulation.

SELF DRIVE Act

On September 6, 2017, lawmakers in the House took a major step toward advancing the development of autonomous vehicles, approving legislation that would put vehicles onto public roads more quickly and curb states from slowing their spread.  Amid strong bipartisan support, the House voted to pass H.R. 3388,[1] a bill intended to accelerate the development of self-driving cars.  The "Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution" or "SELF DRIVE" Act empowers the National Highway Traffic Safety Administration (NHTSA) to oversee manufacturers of self-driving cars through future rules and regulations that will set safety standards and govern privacy and cybersecurity relating to such vehicles.  One key aspect of the Act is its broad preemption of state legislation that would conflict with the Act's provisions or with the rules and regulations promulgated by the NHTSA under the Act's authority.  While state authorities will likely retain their ability to oversee areas involving human drivers and the operation of autonomous vehicles, the Act contemplates that the NHTSA would retain its authority to oversee manufacturers of autonomous vehicles, just as it has with non-autonomous vehicles, to ensure overall safety.  In addition, the NHTSA is required to create a Highly Automated Vehicle Advisory Council to study and report on the performance and progress of autonomous vehicles.  This new council is to include members from a wide range of constituencies, including industry members, consumer advocates, researchers, and state and local authorities.  The intention is to have a single body (the NHTSA) develop a consistent set of rules and regulations for manufacturers, rather than continuing to allow the states to adopt a web of potentially widely differing rules and regulations that may ultimately inhibit the development and deployment of autonomous vehicles.

While a uniform, concrete set of standards for the entire country has been welcomed by automakers and business lobbyists alike,[2] whether the NHTSA has the resources to fulfill the supervisory role envisaged by the bill remains a real question, one echoed by the concerns of a number of consumer advocacy groups.[3]  The requirements that manufacturers develop specific privacy and cybersecurity plans before being allowed to sell vehicles, along with specific requirements for disclosures to vehicle purchasers, are likely to please many consumer advocates.  However, while the Act does provide for a phase-in period, it would ultimately increase the possible number of autonomous vehicles on the road dramatically.
Currently, automakers and companies interested in testing self-driving technology must apply to the NHTSA for exemptions from certain safety requirements that non-autonomous vehicles must meet, and the agency grants only 2,500 exemptions per manufacturer per year.  This bill increases that cap to 25,000 per year initially, and expands it to 100,000 annually in three years' time.  Although the number of exemptions represents a slight decrease from the original draft bill, the bill passed by the House could still put potentially millions of such cars on the road each year: a tall order for an agency that does not yet have an administrator.[4]

Senate Committee Hearing on Large Commercial Autonomous Vehicles

The SELF DRIVE Act now moves to the Senate, which has its own self-driving car legislation under review by the Commerce, Science and Transportation Committee.[5]  One key difference between the House and Senate versions of the legislation appears to be in the area of commercial vehicles: the House version is focused on passenger cars and light trucks, not heavy commercial vehicles over 10,000 pounds (4,536 kg).  The Chair and some other members of the Senate Commerce Committee appear inclined to include large commercial vehicles in any autonomous vehicle legislation that makes its way to the President, and, as a result, scheduled a public hearing on September 13, 2017, to consider the impact of large autonomous vehicles on road safety.[6]

Various witnesses at the September 13 hearing, including a trucking trade group, truck maker Navistar International Corp., and a representative from the National Safety Council, urged the Senate panel to develop a clear, uniform federal regulatory framework for autonomous vehicles by including large commercial vehicles in the Senate bill.  The Teamsters labor union opposed such a move,[7] citing concerns that self-driving trucks are subject to a very different set of safety and operational issues, warranting their own regulation and bill, and also raised particular concerns over cybersecurity and the loss of driving jobs.  Senator Gary Peters (D-MI), who has been working with Republicans to draft self-driving legislation, said at the hearing that he did not support including commercial trucks and that safety and job impacts must be addressed further.  Senator John Thune (R-SD), the Committee Chair, supports the inclusion of commercial trucks in the legislation, but said no final decision had been made.  Thune indicated that he hoped to introduce a bill and obtain committee approval by early October 2017.[8]

Revised Department of Transportation Guidelines

In addition to the flurry of activity in Congress, on September 12, 2017, the Department of Transportation (DoT) released new safety guidelines[9] for autonomous vehicles to replace those issued by the Obama administration last year.
Transportation Secretary Elaine Chao announced that these purely voluntary guidelines ("a nonregulatory approach to automated vehicle technology safety")[10] are designed to clarify the federal government's view of best practices for vehicle developers and for state legislators and officials as more test cars reach public roads, and to offer the lighter regulatory touch for which automakers previously advocated.[11]  Through the guidelines, the DoT avoids any compliance requirement or enforcement mechanism, at least for the time being, as the scope of the guidance is expressly to "support the industry as it develops best practices in the design, development, testing, and deployment of automated vehicle technologies."

Under the Obama administration, automakers were asked to follow a 15-point safety assessment before putting test vehicles on the road.[12]  The new, pared-down guidelines reduce this to a 12-point voluntary assessment, asking automakers to consider cybersecurity, crash protection, how the vehicle interacts with occupants, and backup plans in the event that the vehicle encounters a problem.  Significantly, the DoT is no longer specifically asking automakers to address ethical or privacy issues in a safety assessment (although a footnote to the guidelines does suggest the DoT recognizes the continued relevance of such issues).  Nor is it requesting that manufacturers share information beyond crash data, apparently reflecting concerns automakers expressed in response to the previous guidelines, which suggested that automakers collect, store, analyze, and share with the NHTSA, and potentially with competitors, event reconstruction data from "near misses," i.e., positive outcomes in which the system correctly detects a safety-relevant situation and successfully avoids an incident.[13]  The federal guidelines are purposely crafted so that they can be adapted, and they will be updated again next year.  The NHTSA has invited public comment on the voluntary guidance and best practices.[14]

As may be evident from the goals of the SELF DRIVE Act noted above, the delineation of federal and state regulatory authority has emerged as a key issue, chiefly because autonomous vehicles do not fit neatly into the existing regulatory structure.  Historically, the DoT has regulated and enforced how vehicles are built, particularly with regard to overall safety concerns, while states are typically responsible for the operation of vehicles.  The DoT's new best practices for state legislatures explicitly seek to continue this basic schema and allocate overarching regulatory authority for autonomous vehicles to the federal level.
(“[A]llowing NHTSA alone to regulate the safety design and performance aspects of ADS technology will help avoid conflicting federal and state laws and regulations that could impede deployment.”)[15]  States can still regulate autonomous vehicles, but the guidance encourages states not to pass laws that would “place unnecessary burdens on competition and innovation by limiting [autonomous vehicle] testing or deployment to motor vehicle manufacturers only.”  It remains to be seen how the DoT’s proposed strict delineation will cope with issues surrounding liability for auto accidents.[16]  California, which has previously enacted legislation and regulations requiring automakers to publicly report crashes of autonomous test vehicles and to obtain approval prior to testing autonomous vehicles on California public roads,[17] said it was reviewing the new guidelines.[18]

Looking Ahead

In addition to putting some limits on roll-out and affording some assurances on security and privacy safeguards, the new legislation and guidelines indicate strong federal intent to provide more uniformity in the requirements imposed on manufacturers across the country, and to keep general oversight of the design and manufacture of autonomous vehicles with the federal government (as it has been) rather than cede such power to the individual states.  As a result, the intent seems to be to speed both the development and the deployment of autonomous vehicle technologies.

If the Senate passes legislation that would amend the SELF DRIVE Act in any significant way, the resulting bills will need to go to conference to reconcile the differences between the versions passed by the two chambers.  In addition, given the very busy legislative calendar this year, particularly in the Senate, it is hard to say whether Congress will be able to send some version of the SELF DRIVE Act to the President to be signed into law before the end of the year, or whether additional time will be required.  However, what does seem to be clearly emerging from recent events is near-universal, bipartisan support in Congress and the Executive branch for taking back the reins from the states and amending federal safety requirements, through new federal legislation and regulations, to account for the introduction of autonomous vehicles on U.S. roads.

Debate surrounding the regulation of large commercial vehicles is proving more contentious, in large part because the specter of potential job losses resonates strongly with labor groups.  Whether or not large commercial vehicles are ultimately included in the bills currently being considered by Congress, a national regulatory framework for all autonomous vehicles has moved well within reach.  We will continue to carefully monitor these developments and the bevy of unresolved policy issues—safety, ethics, data privacy, cybersecurity, insurance—that will inevitably accompany them.

[1] H.R. 3388, 115th Cong. (2017), available at https://www.congress.gov/bill/115th-congress/house-bill/3388/text. [2] Cecilia Kang, Self-Driving Cars’ Prospects Rise With Vote by House, N.Y. Times, Sept. 6, 2017, https://www.nytimes.com/2017/09/06/technology/self-driving-cars-prospects-rise-with-vote-by-congress.html. [3] Id. [4] NHTSA Administrator, NHTSA: National Highway Traffic Safety Administrator, https://one.nhtsa.gov/About-NHTSA/About-the-Administrator (last visited Sept. 12, 2017) (“The position of NHTSA Administrator is currently unfilled.”). [5] U.S.
[5] U.S. Chamber of Commerce, Senate Self Driving Car Legislation Staff Draft, Sept. 8, 2017, available at https://www.uschamber.com/report/senate-self-driving-car-legislation-staff-draft.
[6] U.S. Senate Committee on Commerce, Science and Transportation, Press Release, Sept. 6, 2017, available at https://www.commerce.senate.gov/public/index.cfm/pressreleases?ID=BAC7FBCE-424B-4C61-8082-71B3E3D9333B.
[7] Gina Chon, Teamsters Union Tries to Slow Self-Driving Truck Push, N.Y. Times, Aug. 11, 2017, https://www.nytimes.com/2017/08/11/business/dealbook/teamsters-union-tries-to-slow-self-driving-truck-push.html?_r=0.
[8] Reuters, Trucking Industry, Navistar Back U.S. Self-Driving Legislation, N.Y. Times, Sept. 13, 2017, https://www.nytimes.com/reuters/2017/09/13/business/13reuters-autos-selfdriving.html?_r=0.
[9] Automated Vehicles for Safety, NHTSA: National Highway Traffic Safety Administration, https://www.nhtsa.gov/technology-innovation/automated-vehicles.
[10] U.S. Dept. of Transp., Automated Driving Systems 2.0: A Vision for Safety ii (Sept. 2017), https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf.
[11] Patti Waldmeir & Leslie Hook, Trump Administration Promises Light Touch on Driverless Cars, Financial Times, Sept. 12, 2017, https://www.ft.com/content/ff3a02fc-97e6-11e7-a652-cde3f882dd7b.
[12] U.S. Dept. of Transp., Federal Automated Vehicles Policy, Sept. 2016, https://www.transportation.gov/AV/federal-automated-vehicles-policy-september-2016.
[13] Id. at 17-18.
[14] Notice of Public Availability and Request for Comments, Docket No. NHTSA-2017-0082 (signed Sept. 12, 2017), https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/ads2.0_frn_08312017.pdf.
[15] U.S. Dept. of Transp., Automated Driving Systems 2.0: A Vision for Safety, § 2 (Sept. 2017), https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/automated-driving-systems-2.0-best-practices-for-state-legislatures.pdf.
[16] Kang, supra note 2.
[17] Testing of Autonomous Vehicles, State of California: Department of Motor Vehicles, https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing.
[18] Russ Mitchell, Driverless Cars on Public Highways? Go for It, Trump Administration Says, L.A. Times, Sept. 12, 2017, http://www.latimes.com/business/autos/la-fi-hy-driverless-regs-chao-20170912-story.html.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, or the authors:

H. Mark Lyon – Palo Alto (+1 650-849-5307, mlyon@gibsondunn.com)
Frances Annika Smithson – Los Angeles (+1 213-229-7914, fsmithson@gibsondunn.com)

Please also feel free to contact any of the following:

Automotive/Transportation:
Theodore J. Boutrous, Jr. – Los Angeles (+1 213-229-7000, tboutrous@gibsondunn.com)
Christopher Chorba – Los Angeles (+1 213-229-7396, cchorba@gibsondunn.com)
Theane Evangelis – Los Angeles (+1 213-229-7726, tevangelis@gibsondunn.com)

Public Policy:
Michael D. Bopp – Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com)
Mylan L. Denerstein – New York (+1 212-351-3850, mdenerstein@gibsondunn.com)

© 2017 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

August 10, 2017 |
When AI Creates IP: Inventorship Issues To Consider

Palo Alto partner Mark Lyon and associates Alison Watkins and Ryan Iwahashi are the authors of “When AI Creates IP: Inventorship Issues to Consider” [PDF], published by Law360 on August 10, 2017.