On July 21, the White House announced that seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) agreed to voluntary commitments in three key areas of their AI systems: safety, security, and trust (the "AI Commitments"). Notably, these companies agreed to undertake several actions to mitigate risks and increase transparency of their AI systems, including: significant new security testing, treating unreleased model weights as core intellectual property, developing and deploying watermarking or provenance mechanisms for AI-generated audio or visual content, and publishing "transparency reports" that detail system capabilities, limitations, and domains of appropriate and inappropriate use.

The AI Commitments are generally forward-looking and will apply only to the next generation of generative AI models. Specifically, the AI Commitments document states that "where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier," meaning models more powerful than any currently released model, including GPT-4, Claude 2, PaLM 2, Titan, and, in the case of image generation, DALL-E 2.

Because the commitments are voluntary, the announcement includes no enforcement or compliance mechanism to ensure that the companies adhere to them. However, because the commitments are public, a company's failure to adhere to the AI Commitments could expose it to the Federal Trade Commission's general authority under Section 5 of the FTC Act to enforce the prohibition on unfair or deceptive acts or practices. The FTC has previously found that a company's failure to adhere to certain public promises or statements about its operations may be an unfair or deceptive practice in violation of Section 5, and it has warned that it will be carefully monitoring the public statements of generative AI companies.

Looking ahead, the announcement also stated that the Administration is developing an executive order, presumably on similar AI governance issues, and will pursue bipartisan legislation on responsible AI development. The AI Commitments will remain in effect until regulations "covering substantially the same issues" are implemented.

This announcement further demonstrates the Administration's prioritization of efforts to ensure the responsible development of AI systems. The National Telecommunications and Information Administration and the White House Office of Science and Technology Policy have each recently issued requests for information on AI governance issues, the FTC is actively engaged in related rulemaking and enforcement activities, and in May of this year the White House announced several initiatives to support the development of a National Artificial Intelligence Strategy.

The AI Commitments include eight specific commitments, grouped into three categories: the safety, security, and trust of AI systems. Each commitment is set out in detail below.

Safety

  • The companies commit to internal and external red teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas. ("Red teaming" refers to testing the safety and security of a system through adversarial techniques designed to surface and exploit its deficiencies.)
  • The companies will work to enhance information sharing regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards by establishing a mechanism through which they can develop shared standards and best practices for frontier AI safety.

Security

  • The companies will invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

    As part of this commitment, the companies will treat unreleased AI model weights for models in scope as "core intellectual property" for their business, will limit access to model weights to those whose job function requires access, and will establish a robust insider threat detection program consistent with protections provided for their most valuable intellectual property and trade secrets.
  • The companies will incentivize third-party discovery and reporting of issues and vulnerabilities, including by establishing bug bounty systems for responsible disclosure of weaknesses.

Trust

  • The companies commit to developing and deploying mechanisms, including robust provenance, watermarking, or both, that enable users to understand whether audio or visual content is AI-generated. (A minimal illustrative sketch of such a provenance identifier follows this list.)

    The watermark should include an identifier of the service or model that created the content (but not user information). Audiovisual content that is easily distinguishable from reality, such as the default voices of AI assistants, is excluded from this commitment.
  • For all significant new model releases, the companies will publicly report model capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias.

    The reports should include:

    • safety evaluations conducted (including in areas such as dangerous capabilities, to the extent it is responsible to disclose them publicly);
    • significant limitations in performance that have implications for the domains of appropriate use;
    • discussion of the model's effects on societal risks such as fairness and bias; and
    • results of adversarial testing.
  • The companies commit to prioritizing research on societal risks posed by AI systems, including avoiding harmful bias and discrimination and protecting privacy. As part of this commitment, the companies agree "generally" to empowering trust and safety teams, advancing AI safety research, advancing privacy, protecting children, and working to proactively manage the risks of AI.
  • The companies commit to developing and deploying frontier AI systems to help address society's greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
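The AI Commitments do not prescribe any particular technical format for the watermark or provenance signal. Purely as an illustration of the shape of the requirement (an identifier of the service or model, but no user information), the following Python sketch records a provenance tag in PNG metadata using the Pillow library. The field name "ai-provenance" and the model identifier are assumptions invented for this example; this is not any company's actual scheme.

```python
# Illustrative sketch only: records a provenance identifier in PNG text
# metadata via Pillow. A plain metadata tag is trivially strippable; the
# commitment contemplates "robust" mechanisms embedded in the content
# itself. The field name and model identifier below are assumptions.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEY = "ai-provenance"  # hypothetical metadata field name

def tag_provenance(in_path: str, out_path: str, model_id: str) -> None:
    """Save a copy of the image labeled with the generating service/model."""
    metadata = PngInfo()
    metadata.add_text(PROVENANCE_KEY, model_id)  # e.g., "example-model-v1"; no user info
    Image.open(in_path).save(out_path, pnginfo=metadata)

def read_provenance(path: str) -> str | None:
    """Return the identifier if present, indicating AI-generated content."""
    return Image.open(path).text.get(PROVENANCE_KEY)
```

A robust production mechanism would instead embed the signal in the media itself, or attach cryptographically signed provenance metadata (as in the C2PA standard), so that the identifier survives routine edits and re-encoding.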

Conclusion

The White House has indicated that it intends to use a number of policymaking approaches to address concerns around the responsible development of AI. However, binding and enforceable requirements for companies developing and deploying AI systems in the US, whether in the form of Executive Orders, agency regulations, or legislation, will take time to finalize. In the meantime, the AI Commitments provide some measure of accountability and may serve as a framework for future government action geared toward ensuring the safe and trustworthy development of AI systems.

DWT's AI Team regularly advises clients on the rapidly evolving AI regulatory landscape and will continue to monitor developments in US and international policy decisions regarding AI governance.