Toward a National AI Policy? The Trump Administration Releases Proposed Framework for Federal Legislation
Client Alert | March 24, 2026
The Trump Administration’s National Policy Framework for Artificial Intelligence is a legislative recommendation, not a regulation or executive action with independent legal force, but it sets the stage for another potential fight in Congress over the preemption of state regulation of artificial intelligence.
I. Introduction
On March 20, 2026, the Trump Administration released its anticipated “National Policy Framework for Artificial Intelligence” (Framework), a four-page blueprint, directed by Executive Order 14365, that calls on Congress to enact a unified federal AI standard and preempt state laws.
As covered in our prior client alert here, on December 11, 2025, President Trump signed Executive Order 14365 (Order or EO) titled “Ensuring a National Policy Framework for Artificial Intelligence.” Section 8 of the Order directed the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare a “legislative recommendation establishing a uniform Federal policy framework for AI” that preempts State AI laws that conflict with the Order’s policy. The Framework is the result of that directive.
II. Key Takeaways
- The Framework is a legislative recommendation, not a regulation or executive action with independent legal force.
- The Framework opposes the creation of any new federal AI regulatory body, instead endorsing sector-specific oversight through existing agencies complemented by industry-led standards.
- The Framework calls on Congress to “preempt state AI laws that impose undue burdens” in favor of a minimally burdensome national standard. The Framework carves out three categories from preemption: (a) traditional state police powers, including generally applicable laws protecting children, preventing fraud, and protecting consumers; (b) state zoning authority over data center siting; and (c) requirements governing a state’s own use of AI through procurement or public services. However, the Framework asserts that states should be barred from regulating AI development or imposing developer liability for third-party conduct.
- The Framework states the Administration’s belief that training AI models on copyrighted material does not violate copyright law, but expressly defers to the courts to resolve the issue.
- In accordance with the “Ratepayer Protection Pledge,” the Framework urges Congress to ensure that residential consumers do not bear increased electricity costs from new data center construction. The goal of such a provision would be to codify the Pledge’s principle that hyperscalers and AI companies should shoulder the “full cost of their energy and infrastructure.”
- The prospects for near-term passage of comprehensive federal AI legislation face significant headwinds, including a narrow window before midterm elections, bipartisan opposition to state preemption, differing views between the House and Senate, and the sheer scope and complexity of such an endeavor.
III. Context and Congressional Reactions
The release of the Framework comes against the backdrop of continued legislative developments at the state level, including laws that seek to oversee the activity of frontier model developers, such as California’s SB 53 and New York’s RAISE Act, as well as continued negotiations over potential revisions to Colorado’s more comprehensive AI Act, which is slated to take effect in June 2026.
The Administration’s accompanying press release states that winning the AI race requires a “national policy framework that both enables American industry to innovate and thrive and ensures that all Americans benefit from this technological revolution.” The press release also asserts that any legislation should be “applied uniformly across the United States” and that a “patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race”—a clear signal that the Administration plans to push Congress to pass legislation that preempts state AI laws.
Some in Congress have already responded to the Framework’s arrival, largely along party lines. House Republican leaders released a statement describing the Framework as a “critical step” that “gives Congress a roadmap for legislation” and committed to “working across the aisle to enact a national framework” for AI. And two days before the Framework was unveiled, Senator Marsha Blackburn introduced a discussion draft of a bill titled the “TRUMP AMERICA AI Act” that largely tracks the Framework’s approach.
But on the other side of the aisle, a handful of House Democrats introduced the GUARDRAILS Act to repeal President Trump’s December AI Executive Order, which directed the development of the Framework. While the measure faces significant hurdles, it presages another bitter fight in Congress over whether and how the Framework will translate into national legislation.
IV. The Framework in Brief
The Framework lays out seven pillars for national legislation:
- Section I urges Congress to require “commercially reasonable, privacy protective, age-assurance requirements” for AI platforms likely used by minors, along with features to reduce risks of sexual exploitation and self-harm.
- Section II calls for relevant national security agencies to have enough technical capacity to assess frontier AI models and related risks, in consultation with developers. It also recommends protecting residential ratepayers from higher electricity costs tied to new AI data centers.
- Section III says the Administration supports leaving copyright questions about AI training to the courts, while expressing its view that such training is lawful. It also recommends federal protections against unauthorized AI-generated digital replicas, subject to First Amendment carve-outs, and supports licensing or collective rights systems for compensation negotiations without antitrust liability, while stating that any legislation should not decide when licensing is required.
- Section IV calls on Congress to bar federal coercion of tech and AI providers to moderate content based on partisan or ideological agendas, and to create a cause of action for Americans harmed by agency censorship efforts.
- Section V supports sector-specific AI regulation through existing agencies and industry-led standards, as well as regulatory sandboxes and AI-ready federal datasets.
- Section VI urges Congress to use non-regulatory means to incorporate AI training into existing education and workforce programs.
- Section VII calls on Congress to preempt state AI laws that impose undue burdens in favor of a national standard. As noted above, it exempts three areas from preemption. The Framework also recommends barring states from regulating AI development, which it describes as inherently interstate and tied to foreign policy and national security; from unduly burdening lawful uses of AI; and from penalizing AI developers for third-party misuse of their models.
V. Practical Guidance
- Continue complying with state AI laws and monitoring state legislative activity. No preemption has occurred. California, Utah, Texas, and other states’ AI requirements remain enforceable, and other states’ laws and proposals (e.g., Colorado’s AI Act, and its revisions, if any) could soon become relevant.
- Track DOJ Task Force activity. The Framework may spur the DOJ AI Litigation Task Force to bring additional challenges to state laws that conflict with the Administration’s vision.
- Watch for preemption developments. If federal legislation gains traction, companies should evaluate how potential preemption provisions would impact state law compliance requirements, including whether carve-outs in areas like protecting children result in a significant change in obligations.
Gibson Dunn lawyers are available to assist in addressing any questions you may have about these developments. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following leaders and members of the firm’s Artificial Intelligence practice group:
Ashlie Beringer – Palo Alto (+1 650.849.5327, aberinger@gibsondunn.com)
Cassandra L. Gaedt-Sheckter – Palo Alto (+1 650.849.5203, cgaedt-sheckter@gibsondunn.com)
Vivek Mohan – Palo Alto (+1 650.849.5345, vmohan@gibsondunn.com)
Robert Spano – London/Paris (+33 1 56 43 13 00, rspano@gibsondunn.com)
Eric D. Vandevelde – Los Angeles (+1 213.229.7186, evandevelde@gibsondunn.com)
Frances A. Waldmann – Los Angeles (+1 213.229.7914, fwaldmann@gibsondunn.com)
© 2026 Gibson, Dunn & Crutcher LLP. All rights reserved. For contact and other information, please visit us at www.gibsondunn.com.
Attorney Advertising: These materials were prepared for general informational purposes only based on information available at the time of publication and are not intended as, do not constitute, and should not be relied upon as, legal advice or a legal opinion on any specific facts or circumstances. Gibson Dunn (and its affiliates, attorneys, and employees) shall not have any liability in connection with any use of these materials. The sharing of these materials does not establish an attorney-client relationship with the recipient and should not be relied upon as an alternative for advice from qualified counsel. Please note that facts and circumstances may vary, and prior results do not guarantee a similar outcome.