Artificial Intelligence and Autonomous Systems Legal Update (4Q18)

January 22, 2019

We are pleased to provide the following update on recent legal developments in the areas of artificial intelligence, machine learning and autonomous systems (“AI”).  As AI technologies become increasingly commercially viable, one of the most interesting challenges lawmakers face in the governance of AI is determining which issues can safely be left to ethics (in the form of informal guidance or voluntary standards), and which rules should be codified in law.[1]  Many of the recent updates we have chosen to highlight below illustrate how lawmakers and government agencies are developing AI strategies and policies that balance the need to protect the public from the potentially harmful effects of AI technologies against the desire to encourage innovation and competitiveness.[2]  For the most part, lawmakers continue to engage with a broad range of stakeholders on these issues, and there remains plenty of scope for companies operating in this space to participate in discussions around the legislative process and policymaking.

__________________________

Table of Contents

I.      Patent Eligibility for AI-Related Inventions

II.    Federal Government Agencies Seek to Leverage the Benefit of AI and Innovative Technologies

III.  Autonomous Vehicles

IV.   Rising Concerns for AI’s Potential to Create Bias and Discrimination

V.    Use of AI in Criminal Proceedings

VI.   Legal Technology

__________________________

I.   Patent Eligibility for AI-Related Inventions

As the adoption of AI technologies progresses rapidly, AI-related patent applications have kept pace, ballooning to over 154,000 worldwide since 2010.[3]  However, several U.S. court decisions in recent years have left technology and pharmaceutical companies unsure whether their inventions are patentable under U.S. federal law.  One source of great unpredictability in the patent field is subject matter eligibility under 35 U.S.C. § 101—and in particular the abstract idea exception to patent eligibility—based primarily on the U.S. Supreme Court’s decision in Alice Corp. v. CLS Bank International.[4]  That decision has caused well-documented frustration in the lower courts as they try to make sense of the precedent,[5] and ultimately risks impeding U.S. innovation in AI and machine learning and the growth of commerce.[6]

This confusion and unpredictability have prompted lawmakers to take action.  On December 12, 2018, lawmakers invited industry leaders, representatives of the American Bar Association’s IP law section and retired members of the judiciary to a closed-door roundtable to discuss potential legislation reworking 35 U.S.C. § 101 on patent eligibility.[7]

Against this backdrop, on January 4, 2019 the United States Patent and Trademark Office (“USPTO”) announced updated guidance to help clarify the process that examiners should undertake when evaluating whether a pending claim is directed to an abstract idea under the Supreme Court’s two-step Alice test and is thus ineligible for patent protection under 35 U.S.C. § 101.  The USPTO’s guidelines may arm applicants for AI-related inventions with a roadmap for avoiding or overcoming Section 101 rejections, but it remains to be seen how examiners will interpret the guidelines.  For further details on the USPTO’s guidelines, please see our recent Client Alert on The Impact of the New USPTO Eligibility Guidelines on Artificial Intelligence-related Inventions.

In November 2018, the European Patent Office (“EPO”) also issued new guidelines that set out patentability criteria for AI technologies and, in particular, provide a range of examples of subject matter that is exempt from patentability under Articles 52(1), (2) and (3) of the Convention on the Grant of European Patents.[8]  Under the EPO guidance, which was well-received by companies in the field,[9] an inventor of an AI technology must show that the claimed subject matter has a “technical character.”  While this approach does not deviate from the EPO’s long-held position on exclusions to patentability, the guidelines provide welcome clarity and specific examples.  For instance, the classification of digital images, videos, audio or speech signals based on low-level features (e.g., edges or pixel attributes for images) is a typical technical application of classification algorithms, whereas classifying text documents solely in respect of their textual content is regarded not as a technical purpose per se but as a linguistic one.  Notably, the guidelines also potentially open the door to patent protection for training methodologies and mechanisms for generating training datasets.  In sum, a claim to an AI algorithm based upon a mathematical or computational model on its own is likely to be considered non-technical.  Accordingly, careful drafting will be required to impart a technical character onto the AI or machine learning component by reference to a specific technical purpose and/or implementation, rather than describing it as an abstract entity.  AI or machine learning algorithms in the context of non-technical systems are not likely to be patentable.
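
To make the distinction concrete, consider the following minimal Python sketch (purely illustrative; the feature, threshold and labels are our own hypothetical choices, not drawn from the EPO guidelines), which classifies an image using a low-level edge feature of the kind the guidelines describe as technical:

    import numpy as np

    def edge_density(image: np.ndarray) -> float:
        # A crude low-level feature: the fraction of adjacent pixel pairs
        # whose horizontal intensity difference exceeds a fixed threshold.
        gradient = np.abs(np.diff(image.astype(float), axis=1))
        return float((gradient > 30).mean())

    def classify(image: np.ndarray) -> str:
        # Hypothetical rule: edge-dense images are "textured", others "flat".
        return "textured" if edge_density(image) > 0.1 else "flat"

    # Example: classify a random 64x64 grayscale image.
    rng = np.random.default_rng(0)
    print(classify(rng.integers(0, 256, size=(64, 64))))

On the EPO’s approach, it is the use of such pixel-level features for a specific technical purpose, rather than the classification algorithm in the abstract, that supplies the requisite technical character.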

Much will depend on how the USPTO and the EPO apply their new guidelines.  The EPO guidelines’ categorical exclusions of certain subject matter appear to stand in contrast to U.S. patent eligibility law (which may therefore prove more favorable to AI innovators seeking patent protection), but the EPO guidelines could offer a higher level of consistency and clarity as to what subject matter is exempt from patentability than the more fluid U.S. approach.  In the meantime, innovators in artificial intelligence and machine learning technologies should take note of these developments and exercise caution when making strategic decisions about which technologies to patent and in which jurisdictions to file applications.

II.   Federal Government Agencies Seek to Leverage the Benefit of AI and Innovative Technologies

As noted in our Artificial Intelligence and Autonomous Systems Legal Update (3Q18), 2018 saw few notable legislative developments but increasing federal government interest in AI technologies, a trend that has continued apace as lawmakers increasingly appreciate AI as a potent general purpose technology.

A.   Future of Artificial Intelligence Act

The legislative landscape has not been especially active in the AI sector this past quarter.  In December 2017, a group of senators and representatives introduced the Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017, also known as the FUTURE of Artificial Intelligence Act (the “Act”), which, if passed, would not regulate AI directly but would instead establish a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence.[10]  The purpose of the Committee is to help inform the government’s response to the AI sector on several issues, including U.S. competitiveness in AI innovation; workforce issues, including the possible technological displacement of workers; education; ethics training; open sharing of data; and international cooperation.[11]  The Act’s definition of AI is broad and could encompass AI technologies in any number of fields and industries.  At present, the Act remains pending in Congress.

B.   FDA Releases New Rules for Medical Devices Incorporating AI

In early December 2018, the U.S. Food and Drug Administration (“FDA”) released a proposed rule that aims to update the review process for certain medical devices before they enter the marketplace.  The proposed rule would clarify the applicable statutory language by establishing procedures and criteria for the so-called de novo pathway used to review the safety and effectiveness of innovative medical devices that lack predicates, and would apply to certain medical devices incorporating AI.[12]  Back in April 2018, the FDA approved IDx-DR, a software program that uses an AI algorithm to detect eye damage from diabetes.[13]  Since then, the FDA has approved other AI devices at what appears to be an increasing rate.  If the rule is finalized, the FDA anticipates that companies developing novel medical devices will be able to take advantage of a more efficient process and clearer standards when seeking de novo classification.

C.   IRS Invests in AI to Detect Criminal Activity More Efficiently

Facing years of budget cuts and a declining number of employees, the IRS is increasingly investing in technology driven by AI to identify and prosecute tax fraud and rein in offshore tax evasion.  During an American Bar Association webcast in December 2018, Todd Egaas, Director of Technology, Operations and Investigative Services in the IRS’ Criminal Investigations office, explained that, “We’ve been running thin on people lately and rich on data.  And so what we’ve been working on—and this is where we think data can help us—is how do we make the most use out of our people?”[14]

Part of the IRS’ strategy is a recently signed seven-year, $99 million deal with Palantir Technologies to help the agency “connect the dots in millions of tax filings, bank transactions, phone records, and even social media posts.”[15]  Palantir’s technology will be used to assist the IRS in determining which cases to investigate and prosecute with its more limited manpower.  Egaas and Benjamin Herndon, the IRS’ Chief Analytics Officer, offered some specific insights into how the IRS is using these advanced technologies.  Not only is the speed of processing data anticipated to increase to near real-time, but machine learning algorithms and AI “identify patterns in graphs where noncompliance might be present” and “prove particularly helpful in combating identity thieves fraudulently applying for tax refunds.”[16]

D.   Federal Agencies Urge Banks to Innovate

The Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the U.S. Treasury’s Financial Crimes Enforcement Network (“FinCEN”), the National Credit Union Administration, and the Office of the Comptroller of the Currency (collectively, the “Agencies”) are urging banks to use AI and other innovative technologies to combat money laundering and terrorist financing, according to a joint statement issued on December 3, 2018.[17]

The Agencies are attempting to foster innovation by encouraging banks to identify “new ways of using existing tools or adopting new technologies” to help “identify and report money laundering, terrorist financing, and other illicit financial activity by enhancing the effectiveness and efficiency” of existing Bank Secrecy Act/anti-money laundering (“BSA/AML”) compliance programs.[18]  The Agencies have seen how experimentations with AI and digital identity technologies have strengthened banks’ compliance approaches and have enhanced transaction monitoring systems.[19]

Companies in the financial sector considering the use of these innovative compliance programs should note the Agencies’ assurances that doing so “will not result in additional regulatory expectations,” and that banks “should not” be criticized if their efforts “ultimately prove unsuccessful,” nor will the Agencies necessarily impose supervisory actions if innovative pilot programs “expose gaps” in a compliance program.[20]  Similarly, if “banks test or implement artificial intelligence-based transaction monitoring systems and identify suspicious activity that would not otherwise have been identified under existing processes, there will be no automatic assumption that the banks’ existing processes are deficient.”[21]  However, the Agencies clarified that banks must continue to meet their BSA/AML compliance obligations when developing and testing pilot programs.[22]

E.   FTC Hearings Address Consumer Protection and Antitrust Issues Sparked by AI

Over the course of 2018, the Federal Trade Commission held a series of hearings on “Competition and Consumer Protection in the 21st Century,” which provided a platform for industry leaders, academics, and regulators to discuss whether changes in the domestic and world economies and new technologies, among other developments, may require changes to competition and consumer protection law, enforcement priorities, and policy.[23]  On November 13-14, 2018, the conversation turned to algorithms, artificial intelligence, and predictive analytics.  The risk that AI and algorithms will perpetuate bias, discrimination, and existing socioeconomic disparities was a shared concern.[24]  And there was broad agreement that bias can be combatted at several different stages in the development and use of AI.  For example, panelists spoke about the need for sound algorithmic design, high-quality data, rigorous testing, and some degree of consumer understanding to help manage and combat bias.

Panelists also debated whether existing laws are robust enough to address ethical issues posed by the use of AI, or whether an AI consumer protection law is warranted.  There appeared to be consensus that existing laws (e.g., the Fair Credit Reporting Act and anti-discrimination laws) and the regulatory powers of the FTC (through Section 5 of the FTC Act, which prohibits unfair methods of competition and unfair or deceptive acts or practices in commerce) were sufficient, and that caution was warranted before proposing new laws.

Antitrust panelists addressed issues relating to competitors’ use of algorithms to set prices, suggesting that concerns about algorithmic collusion are overhyped: not only is it very difficult to program algorithms to respond optimally to actions generated by a competitor’s algorithm over a long period of time, but algorithms are typically designed to respond to changes in pricing—not to achieve an economic competitive equilibrium.[25]  In addition, unilateral decisions to use price optimization algorithms may be lawful, regardless of the outcome—much as conscious parallelism is permissible under existing law.  From an enforcement perspective, companies may wish to note that the panelists encouraged enforcers to study and then distinguish between algorithms that produce “follow the leader” behavior and those that detect and punish other programs for failing to achieve a desired outcome, since only the latter behavior is indicative of unlawful collusion.
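
For illustration only, the following minimal Python sketch (hypothetical names and logic of our own, not drawn from the hearing materials) contrasts the two kinds of pricing algorithms the panelists asked enforcers to distinguish:

    def follow_the_leader(my_price: float, rival_price: float) -> float:
        # Unilaterally match the lowest observed rival price; reactive
        # price optimization of this kind, like conscious parallelism,
        # may be lawful regardless of the outcome.
        return min(my_price, rival_price)

    def punish_deviation(target_price: float, rival_price: float) -> float:
        # Hold a supra-competitive target price and retaliate against a
        # rival that undercuts it by briefly slashing prices; this is the
        # detect-and-punish behavior flagged as indicative of collusion.
        if rival_price < target_price:
            return target_price * 0.5  # temporary punishment price
        return target_price

The first strategy merely reacts to observed market prices; the second enforces a target price by punishing deviation from it, which is the hallmark panelists urged enforcers to look for.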

III.   Autonomous Vehicles

A.   Recent Developments

The global self-driving vehicle market is estimated to reach $42 billion by 2025,[26] and companies working on self-driving cars continue to aggressively compete for new talent and new technology.  And, given the fast pace of developments, governments worldwide are facing the challenge of regulating this novel and complex industry without stifling innovation and global competitiveness—some making more progress than others.

2019 could prove a watershed year for the fledgling autonomous vehicle industry as virtually every major automaker launches its own self-driving vehicle and some developers inch towards releasing “level 4 autonomy” vehicles—“unsupervised” cars that can operate within a limited domain.[27]  However, truly autonomous—level 4 or 5—vehicles won’t be found on public roads anywhere until countries succeed in rolling out uniform regulations nationwide, something no country has yet done.  Likewise, as public sensitivity to crashes and malfunctions remains high,[28] automakers continue to wrestle with the technological challenges of making commercially viable fully autonomous vehicles that can operate in all conditions (including, for example, darkness or inclement weather) and without set geographical boundaries.

For example, Audi has announced plans to be the first company in the world to sell a Level 3 car directly to the public: an Audi A8 sedan with an autopilot option called Traffic Jam Pilot, which works only at speeds under 50 kilometers per hour (37 miles per hour) and requires the driver to be prepared to take back the wheel after a warning—the definition of Level 3.[29]  However, concern over the lack of clear federal regulations for autonomous driving technology means Audi will not offer this option in the U.S. market.[30]  Audi has said it does not envisage selling Level 4 cars for some time—perhaps well into the 2020s.  Toyota has revealed plans to unveil its own experimental Level 4 car, called the Urban Teammate, at the 2020 Summer Olympics in Tokyo, a highly restricted environment.[31]

B.   DoT Releases Updated Guidance: “Preparing for the Future of Transportation: Automated Vehicles 3.0”

As outlined in our Artificial Intelligence and Autonomous Systems Legal Update (3Q18), the absence of a uniform federal regulatory regime in the U.S. means that companies testing and manufacturing autonomous vehicles (“AVs”) must comb through a complex patchwork of state rules that exist with limited federal oversight.

The continued absence of federal regulation renders the U.S. Department of Transportation’s (“DoT”) informal guidance increasingly important to companies operating in this space.  On October 3, 2018, the DoT’s National Highway Traffic Safety Administration (“NHTSA”) released its most significant 2018 road map on the design, testing and deployment of driverless vehicles: “Preparing for the Future of Transportation: Automated Vehicles 3.0” (commonly referred to as “AV 3.0”).[32]

While one of its core principles is to promote consistency among federal, state and local requirements in order to advance the integration of AVs in the national transportation system, the guidance also reinforces that federal officials are eager to take the wheel on safety standards and that any state laws on automated vehicle design and performance will be preempted.  State, local and tribal governments will be responsible for licensing human drivers, registering motor vehicles, enacting and enforcing traffic laws, conducting safety inspections, and regulating motor vehicle insurance and liability.  The guidance includes several best practices for states on adapting their policies and procedures for licensing and registering automated vehicles, assessing the readiness of their roads, and training their transportation workforces for the arrival of automated vehicles.

As in the previous iterations of the DoT’s guidance, the thread running throughout AV 3.0 is the commitment to voluntary, consensus-based technical standards and the removal of unnecessary barriers to the innovation of AV technologies.  AV 3.0 “builds upon — but does not replace” the DoT’s last AV guidelines, “Automated Driving Systems 2.0: A Vision for Safety,” which were released on September 12, 2017.[33]  AV 3.0 expands the applicability of the DoT’s AV guidance to include commercial vehicles, on-road transit and the roadways on which they operate.  In parallel, various other DoT agencies are gathering input from industry stakeholders on what sort of infrastructure improvements and strategic planning will be required to accommodate and coordinate autonomous vehicles operating in a variety of different modes of transportation.

Despite the lack of compliance requirements or enforcement mechanisms within the guidance, the DoT has proposed modernized regulations that specifically recognize that the “driver” and “operator” of a vehicle may include an automated system,[34] and highlighted that it would prepare proactively for automation through pilot programs, investments and other means, announcing a national pilot program to test and deploy autonomous vehicles.[35]

However, NHTSA will not abandon the traditional self-certification scheme, meaning that manufacturers can continue to self-certify the compliance of their products by reference to applicable standards.  Moreover, NHTSA will issue a proposed rule seeking comment on changes to streamline and modernize its procedures for processing applications for exemptions from the Federal Motor Vehicle Safety Standards (“FMVSS”).

AV 3.0 highlights the importance of cybersecurity and data privacy as AV technologies become increasingly integrated, and encourages a coordinated effort across the government and private sectors for a unified approach to cyber incidents and information sharing.  However, the guidance contains no firm rules on data sharing and privacy, leaving it up to state and local governments to determine standards and resources to counteract cybersecurity threats.[36]

IV.   Rising Concerns for AI’s Potential to Create Bias and Discrimination

One of AI’s most salient features is its ability to take noisy data sets and provide targeted results using criteria often beyond our anticipation or easy comprehension.  As noted above, there is a growing consensus that AI systems can perpetuate and amplify bias, and that computational methods are not inherently neutral and objective.[37]  As AI models are deployed into politics, commerce and broader society, we face unprecedented challenges in understanding their disproportionate impacts and in applying our existing ethical framework—concepts such as transparency, equality and fairness—to apparently dispassionate technologies.  While discussions about the ethics of AI remain largely in the policy realm, increasing public awareness has led to rising concern among lawmakers about AI’s potential to create bias and discrimination, while companies making use of AI and machine learning systems have responded to increasing scrutiny regarding the risks of inadvertently creating discriminatory processes and outcomes.[38]

The AI Now Institute recently identified a number of ethics and bias concerns of which AI stakeholders should be aware.  In its AI Now Report 2018, the Institute drew attention to AI’s potential to create bias and called for a renewed focus on the state of equity and diversity in the AI field, as well as increased accountability of AI firms, as potential solutions to these issues.[39]  Companies making use of AI should anticipate that lawmakers may agree and be willing to scrutinize AI systems for “algorithmic fairness.”  Indeed, in December 2018, Members of the House Judiciary Committee questioned Google CEO Sundar Pichai—albeit with varying degrees of technical accuracy—about the potential for bias in search results.[40]  AI stakeholders would be remiss not to take note, and should consider approaches for eliminating or mitigating potential bias beginning early in the design process and continuing throughout the life cycle of products and services.

V.   Use of AI in Criminal Proceedings

The potential use and abuse of AI in the judicial context is palpable, particularly as the use of forensic and risk assessment software in criminal proceedings is on the rise.  Transparency appears to be a key due process issue, as criminal defendants urge courts to permit their review of such software, yet courts have so far been reluctant to hear challenges on these grounds.

In 2017, the United States Supreme Court declined to hear a case from the Wisconsin Supreme Court[41] challenging the use of algorithmic risk assessment technology in criminal sentencing due to concerns that the software harbored gender biases and that its workings were disclosed to neither the court nor the defendant.  The broader debate over the use of such software, however, was recently revived by two prominent law firm partners who held a mock trial at New York University School of Law based on the case.  The mock trial centered on the due process concerns that arise from the lack of judicial and defendant access to, and scrutiny of, the underlying source code of risk assessment tools—access needed to determine whether the source code incorporates flaws or biases—due to the software’s proprietary nature.[42]  In civil cases, a party may be compelled to share a proprietary algorithm pursuant to a protective order.  But in a criminal case, it may be too costly for a defendant to seek such review.  (And, in Loomis, the court denied the defendant’s request to access the algorithm.)

A related issue was presented to the Ninth Circuit in United States v. Joseph Nguyen, a case in which a defendant was connected to a certain IP address, and the government argued that it identified and isolated the IP address as the sole source for a download of illegal material using a forensic software program.[43]  The defendant sought to review the evidence against him, including the forensic software’s source code, in order to challenge the prosecution’s claim.  In an amicus brief, the Electronic Frontier Foundation, a civil liberties organization focusing on digital rights, urged that “[w]here the government seeks to use evidence generated by forensic software owned by a third party, disclosure of the software’s source code is required by the Constitution and by the strong public interest in the integrity of court proceedings.”[44]  The Ninth Circuit, however, denied the defendant’s petition for rehearing en banc.[45]

While courts, so far, appear reluctant to require the production of proprietary source code or other trade secret information in the criminal context, it still behooves companies providing products and services in highly-visible and important contexts, such as governmental law enforcement, to consider what information they would be willing to provide to ensure sufficient operational transparency and accountability to garner and maintain public trust.

VI.   Legal Technology

The legal industry continues to expand its use of AI technology to assist with various tasks.  For example, corporate tax departments are beginning to use AI to sift through large volumes of contracts to determine whether an entity qualifies for research and development tax credits, as well as to analyze court decisions to predict potential litigation outcomes.  Indeed, some of the large accounting firms in the U.S. have reported that they expect to make large investments in AI as part of their tax function budgets.[46]

The USPTO entered the AI sector in late 2018 when it began to develop AI tools to improve the agency’s search capabilities and to make the patent prosecution process more efficient.[47]  When a patent application is examined, patent examiners must search through a “complex and vast corpus of human knowledge.”  The USPTO hopes that a system incorporating AI will help expedite patent examinations by allowing examiners to review applications more quickly and effectively, as well as by filling any gaps that might exist in prior art searches.  In November 2018, the agency unveiled a beta test of new software called “Unity,” which incorporates AI to help patent examiners with prior art searches, and announced that it continues to work on other initiatives designed to streamline the patent prosecution process.[48]


    [1]    See, e.g., Paul Nemitz, Constitutional Democracy and Technology in the Age of Artificial Intelligence, Phil. Trans. R. Soc. A 376: 20180089 (Nov. 15, 2018), available at https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0089.

    [2]    See, e.g., the German government’s new AI strategy, published in November 2018, which promises an investment of €3 billion before 2025 with the aim of promoting AI research, protecting data privacy and digitalizing businesses (https://www.bundesregierung.de/breg-en/chancellor/ai-a-brand-for-germany-1551432); and the European Commission’s new Draft Ethics Guidelines For Trustworthy Artificial Intelligence (Dec. 18, 2018), available at https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai.

    [3]    Louis Columbus, Microsoft Leads The AI Patent Race Going Into 2019, Forbes (Jan. 6, 2019), available at https://www.forbes.com/sites/louiscolumbus/2019/01/06/microsoft-leads-the-ai-patent-race-going-into-2019/#765459af44de; Jeff John Roberts, IBM Tops 2018 Patent List as A.I. and Quantum Computing Gain Prominence, Fortune (Jan. 8, 2019), available at http://fortune.com/2019/01/07/ibm-tops-2018-patent-list-as-ai-and-quantum-computing-gain-prominence/.

    [4]    See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347 (2014).

    [5]    James Fussell, Alice Must Be Revisited In View Of Emerging Technologies, Law360 (Dec. 5, 2018), available at https://www.law360.com/articles/1107964/alice-must-be-revisited-in-view-of-emerging-technologies.

    [6]    See id. (noting that although the number of AI patent applications has been growing dramatically in the past several years in the United States, the growth trajectory in Chinese applications spiked in 2014, the year of the Supreme Court’s Alice decision, and for the first time in 2016 surpassed U.S. filings).

    [7]    Malathi Nayak, Google, Amazon Invited to Talk Patent Eligibility With Lawmakers, Bloomberg Law (Dec. 4, 2018), available at https://news.bloomberglaw.com/ip-law/google-amazon-invited-to-talk-patent-eligibility-with-lawmakers-1; one example of potential legislation is a bill introduced in June 2018 by Rep. Thomas Massie, R-Ky., and Rep. Marcy Kaptur, D-Ohio, which would amend Section 101 to “effectively abrogate[] Alice Corp. v. CLS Bank International, 134 S. Ct. 2347 (2014) and its predecessors to ensure that life sciences discoveries, computer software, and similar inventions and discoveries are patentable, and that those patents are enforceable.” (H.R. 6264 (Sec. 7(b)(3))).

    [8]    Eur. Patent Office, Guidelines for Examination in the European Patent Office (Nov. 2018), G-II, 3.3.1, available at https://www.epo.org/law-practice/legal-texts/html/guidelines2018/e/g_ii_3_3_1.htm.

    [9]    Patrick Wingrove, EPO AI Guidelines “Give Clarity and Direction,” Say In-House Counsel, Managing Intellectual Property (Jan. 15, 2019), available at http://www.managingip.com/Blog/3853828/EPO-AI-guidelines-give-clarity-and-direction-say-in-house-counsel.html.

    [10]    H.R. 4625.

    [11]    In November 2018, the Little Hoover Commission—a bipartisan, independent California state oversight agency—also published a comprehensive report analyzing the economic impact of AI technologies on the state of California between now and 2030.  The report “Artificial Intelligence: A Roadmap for California” calls for immediate action by the governor and legislature to prepare strategically for and take advantage of AI, while minimizing its risks.  Among the Commission’s recommendations are the creation of an AI special advisor in state government, an AI commission and the promotion of apprenticeships and training opportunities for employees whose jobs may be displaced by AI technologies.  See Little Hoover Comm’n, “Artificial Intelligence: A Roadmap for California” (Nov. 2018), available at https://lhc.ca.gov/sites/lhc.ca.gov/files/Reports/245/Report245.pdf.

    [12]    Emily Field, FDA Issues Proposed Rule For Novel Medical Devices, Law360 (Dec. 4, 2018), available at https://www.law360.com/articles/1107804/fda-issues-proposed-rule-for-novel-medical-devices.

    [13]    FDA Approves Marketing For First AI Device for Diabetic Retinopathy Detection, EyeWire News (Apr. 11, 2018), available at https://eyewire.news/articles/fda-permits-marketing-of-ai-based-device-to-detect-certain-diabetes-related-eye-problems/.

    [14]    Vidya Kauri, AI Helping IRS Detect Tax Crimes With Fewer Resources, Law360 (Dec. 5, 2018), available at https://www.law360.com/articles/1108419/ai-helping-irs-detect-tax-crimes-with-fewer-resources.

    [15]    Siri Bulusu, Palantir Deal May Make IRS ‘Big Brother-ish’ While Chasing Cheats, Bloomberg Tax (Nov. 15, 2018), available at https://news.bloombergtax.com/daily-tax-report/palantir-deal-may-make-irs-big-brother-ish-while-chasing-cheats?context=article-related.  Indeed, in 2018, the IRS Criminal Investigation division collected 1.67 petabytes—or 1.67 million gigabytes—of data.  According to the IRS’ Strategic Plan for FY 2018-2022 (http://src.bna.com/C54), the agency is handling an influx of data that is 100 times larger than what it received a decade ago.

    [16]    Vidya Kauri, AI Helping IRS Detect Tax Crimes With Fewer Resources, Law360 (Dec. 5, 2018), available at https://www.law360.com/articles/1108419/ai-helping-irs-detect-tax-crimes-with-fewer-resources.

    [17]    Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Financial Crimes Enforcement Network, National Credit Union Administration, and Office of the Comptroller of the Currency, Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing (Dec. 3, 2018), available at https://www.federalreserve.gov/newsevents/pressreleases/files/bcreg20181203a1.pdf.

    [18]    Id. at 1.

    [19]    Id.

    [20]    Id. at 2.

    [21]    Id.

    [22]    Id.

    [23]    Fed. Trade Comm’n, Hearings on Competition and Consumer Protection in the 21st Century, available at https://www.ftc.gov/policy/hearings-competition-consumer-protection.

    [24]    Kestenbaum, Reingold & Bradshaw, What We Heard At The FTC Hearings: Days 12 And 13, Law360 (Nov. 28, 2018), available at https://www.law360.com/articles/1105676/what-we-heard-at-the-ftc-hearings-days-12-and-13; see also infra at IV.

    [25]    Id.

    [26]    Jeff Green, Driverless-Car Global Market Seen Reaching $42 Billion by 2025, Bloomberg, Jan. 8, 2015, available at https://www.bloomberg.com/news/articles/2015-01-08/driverless-car-global-market-seen-reaching-42-billion-by-2025.

    [27]    Philip E. Ross, In 2019, We’ll Have Taxis Without Drivers—or Steering Wheels, IEEE Spectrum (Jan. 3, 2019), available at https://spectrum.ieee.org/transportation/self-driving/in-2019-well-have-taxis-without-driversor-steering-wheels (On the SAE autonomy scale, a Level 0 car has no autonomous capability or driver aids, while a Level 5 car is fully autonomous, to the point that manual controls are unnecessary.  Level 3 cars allow human drivers to take their eyes off the road and their hands off the wheel in certain situations, but still require humans to take over at other times and when prompted.).

    [28]    See, e.g., the MIT Moral Machine experiment, an online platform created by the Massachusetts Institute of Technology on which MIT Media Lab researchers publish results from their global survey on autonomous driving ethics.  The survey generated the largest dataset on public attitudes toward artificial intelligence ethics—asking, for example, whether it is better to kill two passengers or five pedestrians.  The experiment is available at http://moralmachine.mit.edu/.

    [29]    Audi Technology Portal, World Debut For Highly Automated Driving: The Audi AI Traffic Jam Pilot, available at https://www.audi-technology-portal.de/en/electrics-electronics/driver-assistant-systems/audi-a8-audi-ai-traffic-jam-pilot.

    [30]    Stephen Edelstein, 2019 Audi A8 Won’t Get Traffic Jam Pilot Driver-Assist Tech In The U.S., Digital Trends (May 16, 2018), available at https://www.digitaltrends.com/cars/2019-audi-a8-traffic-jam-pilot-not-coming-to-us/.

    [31]    Supra, n. 27.

    [32]    U.S. Dep’t of Transportation, “Preparing for the Future of Transportation: Automated Vehicles 3.0” (Oct. 2018).

    [33]    Nat’l Highway Traffic Safety Admin., “Automated Driving Systems 2.0: A Vision for Safety” (Sept. 12, 2017).

    [34]    AV 3.0 recognizes that certain current FMVSS were drafted with human drivers in mind (in that they include requirements for a steering wheel, brakes, mirrors, etc.), creating an unintended barrier to the innovation of AV technologies, and notes that NHTSA will issue a proposed rule seeking comment on proposed changes to certain FMVSS to accommodate AV technology innovation.  FMVSS will also be tweaked to be “more flexible and responsive, technology-neutral, and performance-oriented to accommodate rapid technological innovation.”

    [35]    The “Pilot Program for Collaborative Research on Motor Vehicles With High or Full Driving Automation” is designed to facilitate, monitor and learn from the testing and development of AV technology and prepare for the impact of highly automated and autonomous vehicles on the roads under a variety of driving conditions.  The comment period lasted through December 10, 2018, but the timing on a final decision regarding the pilot program remains open.  NHTSA intends to rely on its “Special Exemption” authority in 49 U.S.C. § 30114 to provide exemptions for manufacturers seeking to engage in research, testing and demonstration projects.

    [36]    Linda Chiem, 3 Takeaways From DOT’s New Automated Vehicles Policy, Law360 (Oct. 10, 2018), available at https://www.law360.com/articles/1090829/3-takeaways-from-dot-s-new-automated-vehicles-policy.

    [37]    Meredith Whittaker, Et Al., AI Now Report 2018, AI Now Institute, 2.2.1 (Dec., 2018), available at https://ainowinstitute.org/AI_Now_2018_Report.pdf.

    [38]    The AI Now Report 2018 notes that several technology companies—including IBM, Google, Microsoft and Facebook—have begun operationalizing fairness definitions, metrics, and tools.  See supra, n. 37, at 2.2.

    [39]    Id.

    [40]    Russell Brandom, Congress Thinks Google Has a Bias Problem—Does It?, The Verge (Dec. 12, 2018), available at https://www.theverge.com/2018/12/12/18136619/google-bias-sundar-pichai-google-hearing.

    [41]    State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

    [42]    Natalie Rodriguez, Loomis Look-Back Previews AI Sentencing Fights to Come, Law360 (Dec. 9, 2018), available at https://www.law360.com/articles/1108727/loomis-look-back-previews-ai-sentencing-fights-to-come.

    [43]    U.S.A. v. Joseph Nguyen, No. 17-50062 (9th Cir.).

    [44]    Id. at 2.

    [45]    Order, U.S.A. v. Joseph Nguyen, No. 17-50062 (9th Cir. Dec. 20, 2018).

    [46]    Vidya Kauri, Artificial Intelligence To Revolutionize Tax Planning, Law360 (Sept. 18, 2018), available at https://www.law360.com/articles/1083886/artificial-intelligence-to-revolutionize-tax-planning.

    [47]    Suzanne Monyak, USPTO Seeks Help Building AI For Faster Prior Art Searches, Law360 (Sept. 17, 2018), available at https://www.law360.com/articles/1083487/uspto-seeks-help-building-ai-for-faster-prior-art-searches.

    [48]    Jimmy Hoover, USPTO Testing AI Software To Help Examiners ID Prior Art, Law360 (Nov. 15, 2018), available at https://www.law360.com/articles/1095703/uspto-testing-ai-software-to-help-examiners-id-prior-art.


The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances A. Waldmann, Claudia M. Barrett, Tony Bedel and Haley S. Morrisson.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, or any of the following lawyers in the firm’s Artificial Intelligence and Automated Systems Group:

H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, [email protected])
Lisa A. Fontenot – Palo Alto (+1 650-849-5327, [email protected])
Frances A. Waldmann – Los Angeles (+1 213-229-7914, [email protected])

© 2019 Gibson, Dunn & Crutcher LLP
Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.