January 27, 2023
On Thursday, January 26, 2023, the National Institute of Standards and Technology (NIST) released the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0). The framework is intended for voluntary use to help incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, systems, and services.
AI RMF 1.0 was released after more than 18 months of drafting and workshops, which we have tracked in previous legal updates. The document reflects about 400 sets of formal comments NIST received from more than 240 different organizations on draft versions of the framework. Speaking at the launch event, Dr. Alondra Nelson, Deputy Assistant to the President and Principal Deputy Director for Science and Society in the White House Office of Science and Technology Policy (OSTP), indicated that OSTP provided “extensive input and insight” into the development of AI RMF 1.0.
As in previous drafts of the AI RMF, the framework is made up of four core “functions”: Govern, Map, Measure, and Manage.
AI RMF 1.0 also encourages the use of “profiles” to illustrate how risk can be managed through the AI lifecycle or in specific applications using real-life examples. Use-case profiles describe in detail how AI risks for particular applications (such as large language models, cloud-based services, or acquisition) are being managed in a given industry sector or across sectors in accordance with the RMF core functions. Temporal profiles illustrate current and target outcomes in AI risk management, allowing organizations to understand where gaps may exist. And cross-sectoral profiles describe how risks from AI systems may be common when they are deployed in different use cases or sectors.
AI RMF 1.0 is accompanied by a companion Roadmap, Crosswalks to related frameworks and guidance, and a collection of Perspectives about the framework.
Comments on AI RMF 1.0 will be accepted until February 27, 2023, with an updated version set to launch in spring 2023.
 NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), available at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
 Artificial Intelligence and Automated Systems Legal Update (1Q22), available at https://www.gibsondunn.com/artificial-intelligence-and-automated-systems-legal-update-1q22/; Artificial Intelligence and Automated Systems Legal Update (2Q22), available at https://www.gibsondunn.com/artificial-intelligence-and-automated-systems-legal-update-2q22/; Artificial Intelligence and Automated Systems Legal Update (3Q22), available at https://www.gibsondunn.com/artificial-intelligence-and-automated-systems-legal-update-3q22/.
 Roadmap for the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), available at https://www.nist.gov/itl/ai-risk-management-framework/roadmap-nist-artificial-intelligence-risk-management-framework-ai.
 Crosswalks to the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), available at https://www.nist.gov/itl/ai-risk-management-framework/crosswalks-nist-artificial-intelligence-risk-management-framework.
 Perspectives about the NIST Artificial Intelligence Risk Management Framework, available at https://www.nist.gov/itl/ai-risk-management-framework/perspectives-about-nist-artificial-intelligence-risk-management.
The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances Waldmann, and Evan Kratzer.
Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. If you would like assistance in submitting comments on AI RMF 1.0, please contact the Gibson Dunn lawyer with whom you usually work, or any of the following members of Gibson Dunn’s Artificial Intelligence and Automated Systems Group:
© 2023 Gibson, Dunn & Crutcher LLP
Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice. Please note, prior results do not guarantee a similar outcome.