Overview

The White House, through its Office of Management and Budget (OMB), has issued its final Guidance for Regulation of Artificial Intelligence Applications (the Guidance), which establishes a framework for federal agencies to assess potential regulatory and non-regulatory approaches to emerging Artificial Intelligence (AI) issues. Consistent with the initial draft guidance, the final Guidance encourages federal agencies to consider ways both to reduce barriers to the development and adoption of AI and to address potential regulation of AI-enabled technologies.

The Guidance reaffirms this Administration's light-touch regulatory framework for AI while directing regulatory agencies that may be active in this area to develop formal implementation plans. Given the impending transition to a new Administration next year, the scope and direction of those future agency plans could deviate from the framework set out in the Guidance.

Background

In 2019 the White House, through Executive Order 13859, directed OMB to issue guidance to all federal agencies and executive offices outlining a unified light-touch framework for AI regulation.1 In the final Guidance, OMB reemphasizes that federal agencies must avoid action that will "needlessly hamper AI innovation and growth" and directs agencies to develop "narrowly tailored evidence-based regulations that address specific and identifiable risks."

The Administration believes that use of this framework will ensure that agencies avoid an approach that is unnecessarily precautionary or prescriptive. The Administration further advances its light-touch approach by directing agencies to examine whether regulation of AI is essential and, if it is not, to consider forgoing regulatory action. Indeed, the Guidance expressly endorses voluntary frameworks or standards as a viable non-regulatory outcome consistent with the public interest.

Key Principles of Regulation and Non-Regulation

The Guidance stays largely true to the 10 principles outlined in the draft framework, which we discussed in a prior blog post. However, the final Guidance reflects a slight shift toward possible regulation. For example, it reaffirms that specific risks—including those to privacy, individual rights, personal choice, and civil liberties—must be assessed in any rulemaking process, but it expands the scope of those potential risks to include "public health, safety and security risks."

At the same time, the Guidance reiterates that any proposed regulation must be supported by a "regulatory impact analysis" that articulates a clear public policy need for the new rule. Key elements of the Guidance are as follows:

  • Regulatory Impact Analysis – Any agency action must be preceded by a regulatory impact analysis, which must clearly explain the need for the regulatory action and describe the problem the agency seeks to address.

    For example, the Guidance explains that agencies should consider whether the action is intended to address a market failure (e.g., asymmetric information), clarify uncertainty related to existing regulations, or address other factors, such as protecting privacy or civil liberties, preventing unlawful discrimination, or advancing the United States' economic and national security. For AI specifically, an analysis supporting a proposed regulatory approach should articulate a clear public policy need for federal regulation.
  • Risk Assessments – While signaling the potential for limited, focused regulations in certain areas, the Guidance promotes a governance philosophy that requires rigorous risk-benefit analyses, the use of voluntary frameworks and best practices, and consideration of "non-regulatory" solutions when addressing emerging AI issues.

    At the same time, the Guidance acknowledges that certain issues may require some formal action, including regulatory mandates that may help build public trust in AI or reduce accidents, or that are necessary to "protect reasonable expectations of privacy" of individuals who interact with AI.

    The Guidance also explicitly directs agencies to approve AI applications that can demonstrably reduce risks, while expanding the scope of risks that must be assessed and articulating a preference for a "but for" risk analysis.
  • Transparency – On transparency, the Guidance articulates a preference for written disclosures to increase public trust and suggests that disclosures may be required to preserve the ability of human end users and other members of the public to make informed decisions. Such disclosures must be "written in a format that is easy for the public to understand."
  • Informal Guidance – The Guidance also directs agencies to develop mechanisms for industry to seek guidance or request clarification about regulations that may be creating uncertainty around the use of AI. These mechanisms should enable industry to request information about how an agency may interpret existing regulations or statutory authorities related to the use of AI.

Finally, the Guidance reaffirms the Administration's preference for using non-regulatory approaches to AI, including:

  • Sector-Specific Policy Guidance or Frameworks – Agencies should "issue non-regulatory policy statements, guidance, or testing and deployment frameworks."
  • Pilot Programs and Experiments – Agencies should "allow pilot programs that provide safe harbors for specific AI applications…" so that they may collect data from these programs to improve their understanding of the risks and benefits.
  • Voluntary Consensus Standards – Agencies should consider voluntary consensus standards developed by the private sector and other stakeholders.
  • Voluntary Frameworks – Agencies should "consider how to promote, leverage, or develop datasets, tools, frameworks, credentialing, and guidelines to accelerate understanding, innovation, and trust in AI…" and evaluate whether existing frameworks may be useful.

Agency Implementation in 2021

All federal agencies with authority over these issues are directed to develop plans to implement the Guidance no later than May 2021. This forward-looking mandate leaves open the potential for the new Administration to insert its own policy priorities and agenda into the implementation stage of this process. The Executive Order mandating the development of the final Guidance will remain in effect unless and until a new Administration modifies or revokes the existing directive.

Several agencies, including the FDA, PTO, Commerce, and HUD, are already developing new regulatory policies centered on AI and algorithmic decision-making. DWT's AI Team will be closely following those proceedings and any new policies produced as a result of this Guidance. Please contact the authors to learn more about these and other AI regulatory and policy developments.

Footnote

1  OMB notes that this guidance applies specifically to the regulation of AI deployed outside of the federal government; it is not intended to inform or establish rules for agencies' own use of AI within the federal government.