Key Areas for Comment in the Request for Information

  • Developing guidelines, standards, and best practices for AI safety and security.
  • Developing a companion resource to the AI Risk Management Framework for generative AI.
  • Creating guidance and benchmarks for evaluating and auditing AI.
  • Procedures for conducting AI red-team testing.
  • Standards and practices for reducing the risk of synthetic content.
  • Plans for advancing responsible global technical standards for AI development.

Overview

On December 21, 2023, NIST issued a request for information ("RFI") seeking input from interested parties on the standards, guidelines, procedures, and processes NIST is expected to develop to implement key aspects of the Biden Administration's AI Executive Order (the "AI EO").[1] NIST has a leading role in implementing several significant aspects of the AI EO's mandates to establish safe, secure, and trustworthy AI systems, including a risk management framework for generative AI, guidance for AI audits, and red-team testing procedures, among other directives under the AI EO. This RFI, however, addresses only certain of the important tasks NIST must accomplish in 2024:

  1. guidelines, standards, and best practices for AI safety, and the development of a companion resource to the AI Risk Management Framework (AI RMF) for generative AI;
  2. procedures for developers of dual-use foundation models to conduct red-team testing;
  3. standards for reducing the risks of synthetic and potentially harmful content; and
  4. advancing responsible global technical standards for AI development.

Commenters are encouraged to take into consideration contributions from the NIST Generative AI Public Working Group in their submissions. Comments are due by February 2, 2024.

Request for Information Seeks Data to Support Development of Significant New Guidelines and Standards for AI

1. Developing Guidelines, Standards, and Best Practices for AI Safety and Security

Under one of the many AI EO directives, NIST must develop guidelines, standards, and best practices, with consensus from industry, to enable the development and deployment of safe, secure, and trustworthy AI systems. These measures include:

AI Risk Management Framework for Generative AI (GAI RMF). The GAI RMF will be a companion resource to the risk management standards under the existing AI RMF and will include:

  • mapping, measuring, and managing trustworthiness characteristics and harms;
  • identifying gaps in industry standards;
  • recommending governance practices for industry to manage generative AI risks; and
  • identifying the skillsets and expertise needed for effective generative AI governance.

Guidance and Benchmarks for Evaluating and Auditing AI Capabilities. NIST is also tasked with creating guidance and benchmarks for evaluating and auditing AI capabilities. A key theme of the input NIST is seeking is the identification of audit and evaluation practices or processes that can determine both the capabilities and the limitations of AI uses. Comments are invited on proposed metrics, benchmarks, protocols, and methods for measuring AI system functionality, capabilities, safety, security, privacy, effectiveness, and trustworthiness throughout the AI system lifecycle and supply chain. Commenters should take into consideration rigorous measures against unsafe and harmful aspects of generative AI and its impacts, including:

  • Negative effects of AI system interactions and reliability issues;
  • Introduction of bias into data, models, and AI lifecycle practices;
  • Value chain risks, where developers refine models created by others; and
  • Mechanisms for gathering human feedback and for model benchmarking and testing, taking into account impacts on society and human rights, among other factors.

Guidelines, Procedures, and Processes to Enable Developers of AI, Especially Dual-Use Foundation Models, to Conduct AI Red-Team Tests. The agency is also seeking information on current red-team testing practices and procedures that can be leveraged to develop guidelines enabling the deployment of safe, secure, and trustworthy systems. The inclusion of red-team testing protocols in the voluntary commitments made by leading AI providers to the White House in 2023 has elevated the value of these processes, and NIST's work is likely to lead to standards and guidance that will enable their broader use. Comments are invited on a range of issues and questions, including:

  • Use cases where AI red-teaming would be most beneficial for AI risk assessment and management;
  • Capabilities, limitations, risks, and harms that AI red-teaming can help identify, considering possible dependencies such as degree of access to AI systems and relevant data;
  • Current red-teaming best practices for AI safety, including identifying threat models and associated limitations or harmful or dangerous capabilities;
  • Internal and external reviews across the different stages of the AI lifecycle that are needed for effective AI red-teaming; and
  • Limitations of red-teaming and additional practices that can fill identified gaps.

2. Reducing Risks of Synthetic Content

NIST is seeking information on reducing the risk of synthetic and potentially harmful content in both open- and closed-source models, with the understanding that various stakeholders – including scientists, researchers, civil society, and the private sector – should be involved. NIST is interested in existing tools as well as the potential development of future tools, along with measurement methods, best practices, active standards work, exploratory approaches, challenges, and framework gaps.

NIST's non-exhaustive list of topics and accompanying use cases related to synthetic content creation, detection, labeling, and auditing includes:

  • authenticating content and provenance tracking;
  • techniques for labeling synthetic content, such as watermarking;
  • detecting synthetic content;
  • resilience of techniques for labeling synthetic content to content manipulation;
  • economic feasibility of adopting such techniques for enterprises of all sizes; and
  • auditing and maintaining tools for analyzing synthetic content labeling and authentication.

3. Advancing Responsible Global Technical Standards for AI Development

The final key component of the RFI is NIST's work to establish a strategy for developing consensus-based international best practices and for coordinating and cooperating with international partners to advance AI system development through global technical standards. Possible topics to address in comments include best practices regarding data capture, AI nomenclature and terminology, and assurance and verification of AI systems. Comments on shared practices may also address how to:

  • best develop uniform AI standards;
  • measure international engagement;
  • explore mechanisms to promote international collaboration; and
  • consider strategies to drive adoption of uniform standards while addressing potential competition and international trade risks in developing uniformity in AI best practices.

Looking Ahead

This RFI covers only some of NIST's responsibilities under the AI EO. Future RFIs will address NIST's directives related to cybersecurity and privacy, synthetic nucleic acid sequencing, and support for agencies' implementation of minimum risk-management practices.

NIST's Due Dates on Certain Future AI EO Deliverables:

June 26, 2024: Preliminary report due to OMB on authenticating, labeling, or detecting synthetic content.

July 26, 2024: Publish:

  • AI Risk Management Framework for generative AI
  • Companion resource to the Secure Software Development Framework for generative AI and dual-use foundation models
  • Benchmarks for evaluating AI capabilities
  • Red-teaming guidelines
  • Plan for global engagement on AI standards
  • Guidelines on efficacy of differential-privacy-guarantee protections

Also by July 26, 2024:

  • Launch initiative to create guidance on AI evaluation benchmarks
  • Initiate engagement with synthetic nucleic acid sequencing providers

December 24, 2024: Publish guidance on synthetic content authentication.

January 26, 2025: Submit report to the President on global AI standards priority action items.

+++

DWT's AI team regularly advises clients on the rapidly evolving AI regulatory landscape. As federal agencies begin to implement their respective AI EO directives, we are closely monitoring developments and providing clients with guidance to address potential impacts on the private sector. Many key deadlines, including those for additional policy implementation actions from NIST and other agencies, fall within the first two quarters of 2024.



[1] DWT provided an overview of key AI EO directives for various federal agencies and corresponding potential industry impacts when the Order was released. NIST's implementation work is likely to be among the most consequential federal agency activity for private industry and for technological innovation in AI systems.