Keeping Up with the EEOC: AI Focus Heats Up with Title VII Guidance

May 23, 2023


On May 18, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) announced the release of its second set of guidance regarding employers’ use of artificial intelligence (“AI”).[1]  The EEOC’s technical, non-binding guidance outlines key considerations that, in the EEOC’s view, help ensure that automated employment tools do not violate Title VII of the Civil Rights Act of 1964 (“Title VII”).[2]

This guidance comes on the heels of reports that the EEOC is training staff on how to identify discrimination caused by automated systems and AI tools,[3] and the EEOC’s joint statement with officials from the Department of Justice (“DOJ”), the Consumer Financial Protection Bureau (“CFPB”), and the Federal Trade Commission (“FTC”) emphasizing the agencies’ commitment to “vigorously” enforce existing civil rights laws against biased and discriminatory AI systems.[4]

AI and Title VII Guidance

The EEOC’s guidance centers on the potential risk that, in the EEOC’s view, AI tools used in employment decision making could give rise to disparate impact under Title VII.  The guidance provides that a disparate impact could arise if an automated tool disproportionately excludes individuals based on protected characteristics, without being job related or consistent with business necessity.  Below we summarize the key aspects of the EEOC’s guidance.

5 Key Takeaways for Employers:

  1. Coverage of AI: The guidance emphasizes that an automated decision-making tool would be treated as a “selection procedure” subject to the EEOC’s Uniform Guidelines on Employee Selection Procedures (the “Uniform Guidelines”)[5] when used to “make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees.”
  2. Joint Liability: The guidance provides that “if an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor.”  Specifically, the guidance notes that liability could arise where an employer relies on the results of a selection procedure that is administered on its behalf or if a vendor’s assessment of the tool is incorrect and results in discrimination.  Notably, this is in alignment with what the New York City Department of Consumer and Worker Protection (“DCWP”) underscored during its May 22, 2023 roundtable regarding New York City’s Local Law 144, which will govern the use of automated employment decision tools in hiring and promotion beginning July 5, 2023.[6]  Specifically, DCWP asserted that Local Law 144 places all compliance responsibility on the employer and does not permit employers to merely rely on a vendor’s representations.
  3. Four-Fifths Rule of Thumb: The four-fifths rule is a measure of adverse impact that asks whether the selection rate of one group is substantially (i.e., less than 80%) different from that of another group.  Under the rule, a selection procedure could be found to have a disparate impact if the selection rate of a protected group is less than 80% of the rate of the non-protected group.  The guidance echoes the Uniform Guidelines in stating that the four-fifths measure is “merely a rule of thumb” that should be used to draw preliminary inferences and prompt further assessment of the underlying processes.  Accordingly, compliance with the rule is not necessarily sufficient to show that a tool is lawful under Title VII.
  4. EEOC Charges: In a footnote, the guidance asserts that the Uniform Guidelines “do not require the Commission to base a determination of discrimination on the four-fifths rule when resolving a charge.”
  5. Auditing: The EEOC encourages employers to routinely conduct self-assessments of their AI tools to monitor for potentially disproportionate effects on individuals subject to the automated selection procedure.  The guidance also states that if an employer fails to take steps to adopt a less discriminatory algorithm that was considered during the development process, this might give rise to liability.  Based on the guidance, the EEOC’s expectation is that employers will conduct bias audits of their AI tools even in jurisdictions that do not require them.
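To make the arithmetic behind the four-fifths rule of thumb concrete, the following Python sketch computes selection rates for hypothetical applicant groups and flags impact ratios below the 80% threshold.  The group names and counts are illustrative assumptions, not drawn from the guidance; and, as the guidance itself cautions, this screen is only a preliminary inference, not a determination of lawfulness under Title VII.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict) -> dict:
    """Compare each group's selection rate to the highest group's rate.

    Returns {group: (impact_ratio, flagged)}, where flagged is True when
    the ratio falls below 0.8 -- the four-fifths rule of thumb.
    """
    highest = max(rates.values())
    return {
        group: (rate / highest, rate / highest < 0.8)
        for group, rate in rates.items()
    }

# Hypothetical applicant pool:
# group_a: 48 of 80 selected (rate 0.60)
# group_b: 12 of 40 selected (rate 0.30)
rates = {
    "group_a": selection_rate(48, 80),
    "group_b": selection_rate(12, 40),
}
results = four_fifths_check(rates)
# group_b's impact ratio is 0.30 / 0.60 = 0.50, below 0.8, so it is flagged
# for further assessment; group_a (ratio 1.0) is not.
```

Note that passing this check says nothing conclusive: per the EEOC's guidance, a tool that satisfies the four-fifths rule may still violate Title VII, and a flagged ratio merely prompts closer scrutiny of the underlying selection process.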

Joint Statement

On April 25, 2023, Charlotte A. Burrows, Chair of the EEOC, joined officials from the DOJ, CFPB, and the FTC to release a joint statement emphasizing the agencies’ pledge “to vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”

Highlighted Risk Areas.  The statement noted the agencies’ concern with AI tools’ reliance on “vast amounts of data to find patterns or correlations” in making recommendations or predictions and flagged the following three aspects of AI as potential sources of discrimination:

(1) Model Opacity and Access:  The agencies note that where automated systems lack transparency, it becomes difficult for all stakeholders to ascertain whether the system is fair.

(2) Data and Datasets:  The statement emphasizes that an AI tool’s outcomes may be impacted by unrepresentative and imbalanced datasets as well as data that incorporates historical biases and other errors.

(3) Design and Use:  When developers design an AI tool without understanding the underlying practices, context, and users, the statement warns that the tools might be based on flawed assumptions.

Sustained Focus.  In an accompanying statement, EEOC Chair Burrows said that the EEOC would “continue to raise awareness on this topic; to help educate employers, vendors, and workers; and where necessary, to use our enforcement authorities to ensure AI does not become a high-tech pathway to discrimination.”[7]  She also noted that the agency is looking “down the road” and considering establishing “some guardrails” to regulate AI in the future.

This message from the EEOC is not new.  Rather, it reiterates the agency’s stance that AI systems and tools will be subject to existing equal employment opportunity laws and regulations.

Gibson Dunn’s “Keeping Up with the EEOC” series launched nearly a year ago in May 2022 when the EEOC filed its first complaint alleging algorithmic discrimination and issued guidance with the DOJ on how AI tools might violate the Americans with Disabilities Act (“ADA”).[8]  Since then, the EEOC has taken a number of steps that indicate its increased focus on the use of automated employment decision-making systems and tools, including its draft strategic enforcement plan’s prioritization of AI issues[9] and its algorithm-rewriting settlement with a job search website operator.[10]

*    *    *

Given this increased attention, vendors of automated employment decision-making tools and employers using or considering the use of AI tools should ensure that they are keeping up with the rapid-fire developments from the EEOC and other regulators, the White House, and Congress, as well as the flurry of proposed and forthcoming laws at the state and local level.[11]  Indeed, on April 13, 2023, Senate Majority Leader Chuck Schumer announced a high-level framework outlining a new regulatory regime for AI,[12] and on May 1, 2023, the White House announced that it will be releasing a request for information to learn more about AI tools being used by employers to monitor, evaluate, and manage an array of workers, including those in call centers, warehouses, offices, and rideshare and delivery services.[13]

Together, these announcements from Congress and the White House as well as the EEOC’s ongoing focus on AI suggest that vendors and employers could face regulatory oversight by multiple federal authorities, and there are indications that state authorities, including State Attorneys General, are looking at AI as a potential new area for enforcement as well.[14]


[1] EEOC Releases New Resource on Artificial Intelligence and Title VII (May 18, 2023).

[2] EEOC, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023).

[3] Rebecca Rainey, EEOC to Train Staff on AI-Based Bias as Enforcement Efforts Grow, Bloomberg Law (May 5, 2023).

[4] For more information on the EEOC’s first enforcement action and conciliation agreement, please see Gibson Dunn’s Client Alerts Keeping Up with the EEOC: Artificial Intelligence Guidance and Enforcement Action (May 23, 2022) and Keeping Up with the EEOC: 5 Takeaways from its Algorithm Rewriting Settlement (Mar. 23, 2023).

[5] 29 C.F.R. part 1607; EEOC, Questions and Answers to Clarify and Provide a Common Interpretation of the Uniform Guidelines on Employee Selection Procedures (March 1, 1979).

[6] 10 Ways NYC AI Discrimination Rules May Affect Employers (Apr. 19, 2023).

[7] EEOC, EEOC Chair Burrows Joins DOJ, CFPB, And FTC Officials to Release Joint Statement on Artificial Intelligence (AI) and Automated Systems (Apr. 25, 2023).

[8] Gibson Dunn’s Client Alert, Keeping Up with the EEOC: Artificial Intelligence Guidance and Enforcement Action (May 23, 2022).

[9] For more information, please see Gibson Dunn’s Client Alert, Keeping Up with the EEOC: 10 Key Takeaways from its Just-Released Draft Strategic Enforcement Plan (Jan. 13, 2023).

[10] For more information, please see Gibson Dunn’s Client Alert, Keeping Up with the EEOC: 5 Takeaways from its Algorithm Rewriting Settlement (Mar. 23, 2023).

[11] For more information about the laws in New York City and California, please see Harris Mufson, Danielle Moss, and Emily Lamm, 10 Ways NYC AI Discrimination Rules May Affect Employers (Apr. 19, 2023); Cassandra Gaedt-Sheckter, Danielle Moss, and Emily Lamm, What Employers Should Know About Proposed Calif. AI Regs (Apr. 12, 2023).

[12] Senate Democrats, Schumer Launches Major Effort To Get Ahead Of Artificial Intelligence (Apr. 13, 2023).

[13] The White House, Hearing from the American People: How Are Automated Tools Being Used to Surveil, Monitor, and Manage Workers? (May 1, 2023).

[14] Paul Singer, Abigail Stempson, and Beth Chun, State AGs “Regulating Algorithms – The How and Why” (Apr. 24, 2023).

The following Gibson Dunn attorneys assisted in preparing this client update: Jason Schwartz, Danielle Moss, Harris Mufson, Naima Farrell, Molly Senger, and Emily Lamm.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. To learn more about these issues, please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Labor and Employment practice group, or Jason Schwartz and Katherine Smith.

Jason C. Schwartz – Co-Chair, Labor & Employment Group, Washington, D.C.
(+1 202-955-8242, [email protected])

Katherine V.A. Smith – Co-Chair, Labor & Employment Group, Los Angeles
(+1 213-229-7107, [email protected])

© 2023 Gibson, Dunn & Crutcher LLP

Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice. Please note, prior results do not guarantee a similar outcome.