
InfoBytes Blog

Financial Services Law Insights and Observations

NIST, Department of Commerce announce new guidance to help AI developers

Federal Issues Agency Rule-Making & Guidance NIST Department of Commerce Artificial Intelligence

On July 26, the U.S. Department of Commerce announced new guidance and tools to enhance the safety, security, and trustworthiness of AI systems, pursuant to President Biden’s Executive Order on AI (covered by InfoBytes here). NIST issued three final guidance documents: two to help manage the risks of generative AI, and a plan for U.S. stakeholders to work globally on AI standards. NIST also released a draft guidance document from the U.S. AI Safety Institute to help AI developers mitigate risks associated with generative AI and dual-use foundation models. Additionally, NIST introduced Dioptra, a software package designed to test AI systems against adversarial attacks.

The newly released draft guidance from the U.S. AI Safety Institute focuses on managing the risk of misuse of dual-use foundation models and offers seven approaches to prevent harm. The Dioptra software helps AI developers and users evaluate how adversarial attacks can degrade an AI system’s performance. The three finalized documents are (i) the AI RMF Generative AI Profile, which identifies risks and proposes actions for managing generative AI; (ii) Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, which addresses risks related to training AI systems; and (iii) A Plan for Global Engagement on AI Standards, which aims to foster international cooperation on AI standards.

These initiatives are part of a broader effort to ensure AI technologies are developed and deployed responsibly and to support innovation while mitigating potential risks.