GSA AI Procurement Rules Would Introduce New Disclosure and Use-Rights Requirements for Federal Contractors
Client Alert | March 12, 2026
Artificial intelligence (AI) companies that make their models available to the federal government may soon be subject to new rules that would require them to grant the government an irrevocable license to their AI systems for “any lawful” use. The proposed deviation to the General Services Acquisition Regulation (GSAR) is part of a broader push by the Trump Administration to reshape federal AI procurement and to use contract terms to influence how AI systems used in connection with federal procurement are designed, deployed, and governed.
Key Points:
- GSA’s draft AI clause would mark a significant shift in federal procurement practices by imposing new operational requirements on AI models used in contract performance, including neutrality expectations, disclosure of model modifications, broad government use rights, and strict data-handling rules.
- If adopted, the clause would require contractors to disclose all AI tools used in contract performance, and to ensure compliance by supporting vendors and subcontractors.
- GSA currently plans to add the clause to existing GSA Multiple Award Schedule (MAS) contracts through a mass modification in late March or early April 2026. The clause would also apply to new contracts going forward.
Scope:
The draft terms would impose new requirements on government contractors and “Service Providers,” which are defined to include entities that directly or indirectly provide, operate, or license AI systems to the government. By broadly defining “Service Providers,” the draft language would require contractors to flow down the requirements to their vendors and subcontractors. The draft terms would also require contractors to disclose all AI systems used to perform their contracts—not just those they sell to the government.
GSA’s Draft Requirements
“Any lawful” purpose. According to a draft GSAR deviation (“Basic Safeguarding of Artificial Intelligence Systems”) shared by GSA in connection with its planned issuance of GSA Multiple Award Schedule (MAS) Solicitation Refresh No. 31, the draft clause would require contractors holding GSA contracts to grant the government an irrevocable license to use AI systems for “any lawful” purpose. The draft also provides that an AI system “must not refuse to produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies,” which would impact the extent to which the system may refuse or adjust outputs based on filters or other guardrails at the inference level beyond what applicable law requires. The draft clause does note that this requirement “must not be construed to require retraining of the model or alteration of model weights,” suggesting the prohibition targets inference-level refusals rather than base model behavior—though the practical distinction may be thin. The “any lawful” use requirement could significantly constrain an AI contractor’s or Service Provider’s ability to limit or condition government use of its AI model, which may create tension between proposed government use and standard model safety controls.
Intellectual property. The government receives an irrevocable license to use the AI System, while the contractor or Service Provider “retains ownership of the underlying AI System and base models.” However, the government would own all “Custom Developments,” which the clause defines broadly to include any modifications, customizations, configurations, or enhancements to the AI System—including modifications to models “as a result of model training or fine-tuning,” as well as associated workflows, work product, and deliverables. The clause does exclude pre-existing background IP, but the line between background IP and government-specific enhancements may be difficult to draw in practice, creating potential deployment challenges.
Neutrality. The draft would require AI companies to provide “a neutral, non-partisan tool that does not manipulate responses in favor of ideological dogmas such as Diversity, Equity, Inclusion,” and it notes that “[t]he Contractor must not intentionally encode partisan or ideological judgments into the AI Systems [sic] Data Outputs.”
Data segregation and training prohibition. The draft rule would require government data to be “logically segregated from the Data of any non-Government customer or client, and [to not be] commingled with Data of other customers.” That requirement could raise significant implementation questions for contractors and Service Providers that rely on shared enterprise environments or common infrastructure across government and commercial workloads. The draft also imposes “eyes off” data handling procedures that restrict human review of government data, requires tools enabling the government to maintain detailed records of all processing activities, and prohibits the use of government data for training, fine-tuning, or otherwise improving AI models for any other customers or any commercial or non-commercial purposes.
Disclosure of non-U.S. regulatory or commercial compliance. The draft would require contractors to disclose all AI systems used in the performance of the contract to the ordering contracting officer within 30 days of award, and to disclose whether any such AI systems have been “modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework.” That would mean, for example, that contractors would be required to disclose whether the AI systems have been modified to comply with existing regulatory frameworks, such as the European Union’s Digital Services Act and Artificial Intelligence Act, as well as U.S. state laws. At a minimum, such a requirement would raise significant questions about the extent of disclosure expected from contractors that provide AI models or AI-driven products across multiple jurisdictions.
Prohibition on non-U.S. AI systems. The draft also bars federal contractors from using any non-U.S.-made AI systems in performance of their contracts, including “any AI components manufactured, developed, or controlled by non-U.S. entities.” The draft defines “American AI Systems” as “AI systems developed and produced in the United States,” referencing OMB Memorandum M-25-22. That prohibition could have implications for models if any part of the AI stack has non-U.S. development, ownership, or control.
Broad contractor responsibility. The draft contemplates broad application. For example, it defines “Service Provider” broadly to include any entity that “directly or indirectly provides, operates, or licenses an AI system but is not a party to the contract.” Service Providers “may or may not be subcontractors.” Notably, the clause would make the contractor responsible for the Service Provider’s compliance with the clause’s requirements. As a result, a contractor that uses a third-party AI model when performing a government contract could face exposure tied to the model developer’s compliance with requirements related to data segregation, training prohibitions, incident reporting, and permissible use. The clause also provides that:
- Its terms control in the event of conflict with “any policies, requirements, terms, conditions, or commercial agreements of the quote, the Contractor, or the Service Provider,” effectively subordinating an AI vendor’s commercial terms of service and safety policies to the government’s contract requirements.
- Consequences of non-compliance may include the government’s suspension of its use of the AI system, and contractor liability for reasonable decommissioning costs if the contracting agency terminates the contract for cause based on failure to comply with the clause’s unbiased AI principles.
The government reserves its right to conduct automated assessments of the AI system, as deployed for government use, at any time using its own benchmarks, including to evaluate bias, truthfulness, safety, and “unsolicited ideological content.” The government could attempt to make broad use of this audit right, and invoke findings to support suspensions or removal of certain AI systems from the technical stack, as well as contract terminations.
Timeline
GSA is soliciting comments on the proposed deviation by email to maspmo@gsa.gov or in the comments section of its Advanced Notice for MAS Refresh 31 blog post by March 20, 2026. GSA indicated that it intends to formally publish Refresh 31—which would incorporate the draft rule into GSA contracts—in March or April of 2026.
Key Takeaways for Government Contractors
Although the GSA proposal is subject to change, the draft provisions suggest that procurement of AI tools may increasingly be conditioned on substantive government expectations regarding model behavior, permitted use, and license requirements. That would represent a notable shift for contractors that have traditionally relied on negotiations with the government to set contract terms, and it signals that the government is preparing to take a firmer stance in negotiating terms for AI procurement.
The draft terms would impose disclosure obligations that extend beyond AI systems sold directly to the government. If adopted, these requirements could reach any company that relies on a third-party AI tool in connection with government contract work, effectively requiring such companies to (a) confirm that the underlying AI provider can satisfy each of the clause’s requirements, (b) wall off the AI tool entirely from government-related work, or (c) transition to an alternative provider. The draft requirements to disclose model modifications made to comply with non-U.S. legal frameworks could also create complex compliance questions for contractors that operate across multiple jurisdictions.
Gibson Dunn is monitoring these developments and is available to discuss how these proposed rules may impact GSA contractors, subcontractors, and their vendors.
Gibson Dunn lawyers are available to assist in addressing any questions you may have regarding these issues. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any leader or member of the firm’s Government Contracts or Artificial Intelligence practice groups:
Government Contracts:
Lindsay M. Paulin – Washington, D.C. (+1 202.887.3701, lpaulin@gibsondunn.com)
Dhananjay S. Manthripragada – Los Angeles/Washington, D.C. (+1 213.229.7366, dmanthripragada@gibsondunn.com)
Sarah-Jane Lorenzo – Washington, D.C. (+1 202.887.3580, slorenzo@gibsondunn.com)
Artificial Intelligence:
Cassandra L. Gaedt-Sheckter – Palo Alto (+1 650.849.5203, cgaedt-sheckter@gibsondunn.com)
Vivek Mohan – Palo Alto (+1 650.849.5345, vmohan@gibsondunn.com)
Robert Spano – London/Paris (+33 1 56 43 13 00, rspano@gibsondunn.com)
Eric D. Vandevelde – Los Angeles (+1 213.229.7186, evandevelde@gibsondunn.com)
Frances A. Waldmann – Los Angeles (+1 213.229.7914, fwaldmann@gibsondunn.com)
Hugh N. Danilack – Washington, D.C. (+1 202.777.9536, hdanilack@gibsondunn.com)
© 2026 Gibson, Dunn & Crutcher LLP. All rights reserved. For contact and other information, please visit us at www.gibsondunn.com.
Attorney Advertising: These materials were prepared for general informational purposes only based on information available at the time of publication and are not intended as, do not constitute, and should not be relied upon as, legal advice or a legal opinion on any specific facts or circumstances. Gibson Dunn (and its affiliates, attorneys, and employees) shall not have any liability in connection with any use of these materials. The sharing of these materials does not establish an attorney-client relationship with the recipient and should not be relied upon as an alternative for advice from qualified counsel. Please note that facts and circumstances may vary, and prior results do not guarantee a similar outcome.