President Trump’s Latest Executive Order on AI Seeks to Preempt State Laws

Client Alert  |  December 15, 2025


The practical impact of the EO in the short and medium term is likely to be limited, and companies are likely well-advised to continue to operate under the expectation that states will legislate – and enforce – their AI-related laws.

Please join Helgi Walker, Frances Waldmann, Vivek Mohan, Eric Brooks, and Hugh Danilack from Gibson Dunn’s Artificial Intelligence and Administrative Law and Regulatory Practices in a webinar to discuss the EO and its implications on Wednesday, December 17, at 2pm ET. Please register here.

On December 11, 2025, President Trump signed an Executive Order (Order or EO) titled “Ensuring a National Policy Framework for Artificial Intelligence,” which aims to promote “United States leadership in Artificial Intelligence” by preempting state AI laws and regulations.  The stated purpose of the EO is to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”

The Order directs several federal government initiatives designed to blunt, preempt, or challenge state laws, under the thesis that a patchwork of state-level regimes would result in compliance cost burdens and variance that would disadvantage developers and slow U.S. innovation in the global AI race.

The Order explicitly calls out Colorado’s AI Act as an example of a problematic state law.  The Order also directs the Administration to work with Congress to pass federal legislation that preempts state laws that conflict with the EO’s policy goals while ensuring that “children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.”  The EO builds on earlier federal efforts (such as America’s AI Action Plan) to link federal funding allocation to a state’s AI regulatory climate and seeks to use preemption as a practical lever to standardize rules and align AI policy with industry and federal-level priorities.

I. Key Takeaways

  • Policy Drivers: The EO is best viewed as a reiteration of the Trump Administration’s deregulatory approach to AI development – particularly its view that state-level safety, bias-mitigation, and transparency requirements risk imposing ideologically driven constraints on AI outputs and hamstringing the innovation needed for the United States to win the AI race.
  • Limited Legal Basis: Although the EO directs multiple agencies to assert broad federal authority in service of a “uniform national policy framework for AI,” these initiatives are unlikely to find a legal basis for broad preemption of state AI laws.
  • Stay the Course: As a practical matter, businesses subject to state AI laws should plan to continue to comply with those laws for the time being.
  • Low Likelihood of Injunctions Against State AI Laws: Although DOJ could theoretically seek preliminary relief in individual cases challenging state AI laws, courts are unlikely to grant injunctions unless the Administration makes a more defined legal showing than would be apparent from the face of the EO.
  • Monitor DOJ Challenges: While, as analyzed below, DOJ challenges to state laws appear unlikely to be successful, companies should monitor DOJ-initiated challenges for their potential impact on how states approach near-term enforcement.
  • Monitor for State Challenges: Businesses should also monitor for affirmative litigation by states challenging the Order.
  • Monitor FTC Policy Statement: Companies should monitor the FTC policy statement directed by the EO for indications of whether and how aggressively the FTC will deem compliance with state AI laws as potential Section 5 violations.
  • Preemption Exemptions, Including for State Regulation of Child Safety: The EO requires the preparation of a legislative recommendation for the preemption of state AI laws, but unlike the leaked draft, exempts state AI laws relating to child safety, AI compute and data center infrastructure, state government procurement and use of AI, and “other topics as shall be determined” from any preemption proposal.
    • The addition of these exemptions reflects intense pressure from Republican-controlled states such as Florida that have been working to pass laws to protect consumers and residents from the negative impacts of AI and indicates that resistance to the EO could come from a bipartisan coalition of states.
  • EO Targets “AI Centric” Laws: While the direct preemptive effect of the EO is relatively likely to be limited, the focus of the EO is clearly on “AI centric” legislation – as opposed to state laws, such as so-called comprehensive privacy laws or biometrics laws, that may directly or indirectly impact AI development and deployment.  This is evident from the exemptions noted above, as well as the limited mandate for the FTC’s policy statement.  Based on this, it is unlikely that the legislative recommendation that the EO calls for will seek to wholesale displace state privacy and biometrics laws, which have been enacted with broad support in states across the political spectrum.  For similar reasons, at this time, it would appear unlikely that the Order’s DOJ AI Litigation Task Force will seek to challenge such laws.
  • Debate Among the States: Even apart from the EO, states are engaged in fierce debates about whether and how to regulate AI. For example, governors and state legislators, including those from the same political party, have clashed in California, Colorado, and New York over the scope of AI regulation.  The EO also does not appear to reflect an intraparty consensus among Republicans, with the governor of Florida publicly calling for state AI regulation even after the issuance of the EO.  The outcomes of these debates – and the attendant burden from state regulation – remain a key area for AI developers and deployers alike to follow.

II. Summarizing the EO’s New Initiatives

  • DOJ Challenges to State AI Laws. Section 3 directs the Attorney General to establish an “AI Litigation Task Force . . . whose sole responsibility shall be to challenge State AI laws.”  The EO directs the Task Force to challenge laws that
    • (1) “unconstitutionally regulate interstate commerce,”
    • (2) “are preempted by existing Federal regulations,” or
    • (3) “are otherwise unlawful in the Attorney General’s judgment.”
  • Restrictions on Funding. Section 5 directs the Secretary of Commerce to issue a Policy Notice specifying that States with AI laws determined to be contrary to federal policy should be ineligible for “non-deployment” funding under the Broadband Equity, Access, and Deployment (BEAD) Program.  Other agencies shall also take immediate steps to assess their discretionary grant programs and determine whether they may condition grants on states either not enacting an AI law contrary to federal policy or agreeing not to enforce such a law.
  • FCC Reporting and Disclosure Standard. Section 6 directs the FCC to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.”
  • FTC Policy Statement to Preempt Certain State AI Laws. Section 7 directs the FTC to issue a policy statement explaining that 15 U.S.C. § 45’s prohibition on deceptive acts or practices preempts state laws that “require alterations to the truthful outputs of AI models,” which is likely a reference to Colorado’s algorithmic discrimination law.
    • The Order’s position is that an AI model that, for example, adjusts its outputs to comply with state law prohibitions on algorithmic discrimination when recommending the best candidate for a job would be departing from the model’s “truthful” output. Its framing appears intended to establish a consistent baseline—the model’s unmodified, training-data-derived output—against which any subsequent adjustments or compliance-driven interventions can be measured.
    • This builds and expands on the Trump Administration’s July 2025 Executive Order, which stated that the federal government should be “hesitant to regulate the functionality of AI models in the private marketplace” but, in the context of federal procurement, should not procure models that “sacrifice truthfulness and accuracy to ideological agendas.”
    • The proposed FTC policy statement is framed to recast such state-mandated adjustments as a deceptive act, reflecting the EO’s broader skepticism of state efforts to impose fairness or bias-mitigation requirements on model outputs.
  • Legislative Recommendation for Preemption. Section 8 directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare draft legislation that preempts State AI laws that conflict with the Order’s policy.  Notably, however, the EO directs that the recommended legislation exempt (1) child safety protections; (2) AI compute and data center infrastructure, other than generally applicable permitting reforms; (3) state government procurement and use of AI; and (4) other topics as shall be determined.

III. Analyzing the Limits of the EO’s New Initiatives

  • DOJ Challenges Are Unlikely to Be Successful. Although the EO broadly directs the new DOJ task force to challenge AI laws that are “unlawful in the Attorney General’s judgment,” as a practical matter, the likely bases that the DOJ would have to challenge state AI laws are those stated in the EO: preemption or unconstitutional regulation of interstate commerce.  Neither would likely succeed.
    • Likely No Basis for Preempting Significant AI Laws. The EO does not identify any specific federal laws as capable of preempting the most significant state AI laws – particularly, those in Colorado or California.  We have previously investigated the scope of possible federal preemption of significant state AI laws and concluded that it is unlikely that any federal regime would support a preemption challenge to those laws.
    • Likely No Grounds for a Dormant Commerce Clause Challenge. Under the dormant commerce clause doctrine, states are sometimes prohibited from interfering with interstate commerce.  But neither of the two main strands of dormant commerce clause doctrine is likely to condemn the most significant state AI laws.
      • First, the dormant commerce clause forbids state laws that discriminate against out-of-state commerce. National Pork Producers Council v. Ross, 598 U.S. 356, 369 (2023).  But there is no evidence that significant state AI laws facially discriminate against out-of-state commerce, were intended to do so, or have that effect.  To the contrary, these laws generally regulate interactions between businesses and the state’s own citizens.
      • Second, in theory, a neutral law may run afoul of the dormant commerce clause because it imposes burdens on interstate commerce that are clearly excessive in relation to their local benefits.  Id. at 377.  DOJ could argue that a law regulating AI model outputs in one state would unduly burden the commercial efforts to produce that model in another state.  But such a theory is unlikely to prevail, especially after the Supreme Court’s 2023 rejection of an analogous theory in National Pork Producers Council.  It is unlikely that DOJ would be able to obtain a preliminary injunction of state laws based on these theories, given that the injunction analysis is likely to be driven by these merits considerations.
  • The Threats to Funding Are Unlikely to Make a Difference.
    • BEAD Funds Unlikely to Change State Decisions. The EO names one source of funding that the Administration wishes to hold over states to get them to change their AI laws: “non-deployment” funding under the BEAD Program.  But the total amount of funding available, on the order of $20 billion for all states, is unlikely to be sufficient to influence the behavior of many states, especially California.
    • Other Funds Unlikely to Move the Needle. The EO also directs agencies to assess whether they can withhold other grants if states continue to adopt or enforce disfavored laws.  It is unlikely that the government will find a significant amount of money that it can lawfully withhold.  When exercising its authority to disburse congressionally appropriated funds, the government must follow any criteria set forth by Congress for disbursing those funds, and it is likely that those criteria will often not fairly encompass whether a recipient state has adopted an AI law disfavored by the administration.
    • Federalism Limits the Use of Funding to Determine State AI Policy. Moreover, under federalism principles, there are significant limits on the federal government’s power to coerce states to adopt particular legislation.  The government, to be sure, may put conditions on how states use the funding it offers.  Nat’l Fed’n of Indep. Bus. v. Sebelius, 567 U.S. 519, 579 (2012).  But the EO does not appear to contemplate that.  The government may also condition funds on states’ adopting “related” restrictions, but only as a nudge: the “financial inducement” cannot be “so coercive as to pass the point at which pressure turns into compulsion.”  Id. at 580.  Even assuming that the federal government could identify current spending “related” to state AI policy, the government would be sharply limited in its power to put “a gun to the head” of states by threatening to withhold vast sums of that funding to coerce their behavior, especially funding for entrenched programs.  Id. at 581.
  • The FCC is Unlikely to Have a Basis to Adopt a New Standard. The FCC’s ability to adopt the federal reporting and disclosure standard envisioned by the EO is limited at two levels: the scope of FCC jurisdiction, and the provisions of the Communications Act that allow for preemption of state law.
    • The FCC Cannot Credibly Assert Jurisdiction Over AI Providers. The FCC has not previously asserted jurisdiction over AI providers.  The FCC predominantly has the power to regulate “telecommunications services” such as cellphone service, and can only incidentally regulate “information services” – such as “edge providers” who generate, store, and manipulate information like “Amazon, Facebook, and Google.”  In re MCP No. 185, 124 F.4th 993, 999 (6th Cir. 2025).  AI services typically will fall into the latter category.  Where the FCC cannot regulate, it lacks the power to preempt state law too.  Mozilla Corp. v. FCC, 940 F.3d 1, 75 (D.C. Cir. 2019).
    • The Communications Act Has No Applicable Preemption Provision. Even if the FCC were to justify an assertion of jurisdiction over AI providers, no provision of the Communications Act appears to provide the authority to create a reporting requirement that would preempt state law.  The FCC has previously indicated that it would consider two provisions when determining whether it can preempt state law – Sections 253 and 332.  Neither of those provisions is applicable here.
      • Section 253 gives the FCC the ability to preempt a state law that “prohibit[s] the ability of any entity to provide any interstate or intrastate telecommunications service,” which, as noted above, does not typically encompass AI providers. Nor does this provision provide a basis for developing a reporting requirement; instead Section 253 contemplates that the FCC would affirmatively declare the relevant state law preempted following notice and comment.
      • Section 332 gives the FCC authority specific to “private mobile services” and “commercial mobile services,” categories that the FCC has not previously attempted to extend to “information services.” Even if those categories could be extended to cover AI providers, Section 332 only preempts state laws regulating “the entry of or the rates charged” by a mobile service, which is neither relevant to a reporting and disclosure standard, nor likely to be implicated by most AI laws.
  • An FTC Policy Statement Would Be Insufficient to Preempt. Although under existing precedent the FTC can issue certain regulations preempting conflicting state laws, a policy statement is not a regulation and does not have the force of law or the ability to preempt.  Nevertheless, the FTC does use its policy statements to guide its enforcement actions.
    • While such a policy statement is unlikely to formally preempt obligations under state law, depending on how it is framed – and, ultimately, how it shapes the FTC’s enforcement priorities – it may present challenges for companies that are perceived to have adjusted model behavior in response to state requirements.
    • Attempts by the FTC to undermine state law are likely to face challenges, in some form, from states. Indeed, states have already indicated that they are considering legal challenges.
    • The Order does not specify how other, non-AI specific laws that may apply to AI systems (such as broader consumer protection or product liability laws) would be impacted. State privacy and biometrics laws are not likely to fall within the ambit of the FTC’s mandate under the EO, which is to “explain the circumstances under which State laws that require alterations to truthful outputs of AI models . . . are preempted.” (emphasis added).

IV. What to Watch Out For 

  • States may try to affirmatively test the Order. State lawmakers on both sides of the aisle have already indicated an interest in fighting the EO.  It is possible that a bipartisan coalition will attempt to sue.  But despite the EO’s legal weaknesses, such a lawsuit may not succeed initially.  Given that little of the EO is self-executing, and that it mostly calls for further agency action, the federal government will argue that the EO is not yet ripe for challenge.  Accordingly, a definitive judicial answer on the EO’s preemptive power may have to wait until specific agencies attempt to implement it.
  • Meanwhile, debate over the scope of AI regulation is likely to continue in the states. Governors and state legislators continue to negotiate over how quickly and how far to move on regulation, which has resulted in the modification or delay of such regulation in several states.  For example, in response to Governor Gavin Newsom’s letter, the California Privacy Protection Agency (CPPA) narrowed its proposed regulations regarding Automated Decision-Making Technology to remove direct references to AI.  In Colorado, although Governor Jared Polis signed the Colorado Artificial Intelligence Act, he issued a signing statement expressing his reservations about the law and a post-signing statement urging legislators to revise the law.  In response, Colorado delayed the law’s effective date to June 30, 2026 to give legislators more time to negotiate possible revisions.  And in New York, Governor Kathy Hochul proposed a comprehensive rewrite of the RAISE Act and sent it back to the legislature for further review.  The outcomes of these negotiations and the ongoing debates at the state level regarding the scope of regulation remain important for AI developers and deployers to follow.

The following Gibson Dunn lawyers prepared this update: Vivek Mohan, Helgi Walker, Cassandra Gaedt-Sheckter, Frances Waldmann, Eric Brooks, Hugh Danilack, and Evan Kratzer.

Gibson Dunn lawyers are available to assist in addressing any questions you may have about these developments. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following leaders and members of the firm’s Artificial Intelligence or Administrative Law & Regulatory practice groups:

Artificial Intelligence:
Keith Enright – Palo Alto (+1 650.849.5386, kenright@gibsondunn.com)
Cassandra L. Gaedt-Sheckter – Palo Alto (+1 650.849.5203, cgaedt-sheckter@gibsondunn.com)
Vivek Mohan – Palo Alto (+1 650.849.5345, vmohan@gibsondunn.com)
Robert Spano – London/Paris (+33 1 56 43 13 00, rspano@gibsondunn.com)
Eric D. Vandevelde – Los Angeles (+1 213.229.7186, evandevelde@gibsondunn.com)
Frances A. Waldmann – Los Angeles (+1 213.229.7914, fwaldmann@gibsondunn.com)

Administrative Law & Regulatory:
Stuart F. Delery – Washington, D.C. (+1 202.955.8515, sdelery@gibsondunn.com)
Eugene Scalia – Washington, D.C. (+1 202.955.8673, escalia@gibsondunn.com)
Helgi C. Walker – Washington, D.C. (+1 202.887.3599, hwalker@gibsondunn.com)

© 2025 Gibson, Dunn & Crutcher LLP.  All rights reserved.  For contact and other information, please visit us at www.gibsondunn.com.

Attorney Advertising: These materials were prepared for general informational purposes only based on information available at the time of publication and are not intended as, do not constitute, and should not be relied upon as, legal advice or a legal opinion on any specific facts or circumstances. Gibson Dunn (and its affiliates, attorneys, and employees) shall not have any liability in connection with any use of these materials.  The sharing of these materials does not establish an attorney-client relationship with the recipient and should not be relied upon as an alternative for advice from qualified counsel.  Please note that facts and circumstances may vary, and prior results do not guarantee a similar outcome.