On April 11, 2023, the Department of Commerce, through the National Telecommunications and Information Administration (NTIA), issued a request for comments (RFC) on AI system accountability measures and policies. The “AI Accountability Policy Request for Comment” is intended to gather information on self-regulatory, regulatory, and other “measures and policies” designed to assure external stakeholders that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. In particular, the RFC highlights current proposals to require impact assessments and audits, as well as issues surrounding the effective development of regimes to implement audit or impact assessment duties. “The goal is to create policy that ensures safe and equitable applications of AI that are transparent, respect civil and human rights, and are compatible with democracy.”[1]

NTIA intends to draft and issue a report on AI accountability policy “focusing especially on the AI assurance ecosystem.” NTIA seeks comments offering specific, actionable proposals, rationales, and relevant facts, but it will not propose new obligations at this time. Instead, NTIA’s findings may include recommendations that will shape the federal government’s strategy for AI policymaking. NTIA appears to be considering a range of possible regulatory approaches, including potentially advocating for new federal laws regulating AI or instead seeking ways to incentivize the adoption of self-regulatory frameworks.

Artificial intelligence has been on the Biden Administration’s radar for some time, and various sectoral federal regulators have been actively publishing guidance on AI, pursuing enforcement actions relating to AI, and initiating new rulemaking proceedings regarding AI. The recent explosion in popularity of generative AI tools and concerns over harmful bias and discrimination, as well as “the distortion of communications through misinformation, disinformation, deep fakes, [and] privacy invasions,” all have reinforced the government’s interest in regulating this space. Indeed, the RFC specifically asks how AI accountability mechanisms can inform people whether generative AI tools comply with standards for trustworthy AI, but it also asks whether certain trustworthy AI goals “might not be amenable to requirements or standards.”

Obstacles to AI Policymaking

The RFC acknowledges previous federal forays into AI policy and regulation, including the National Institute of Standards and Technology’s AI Risk Management Framework and the White House Blueprint for an AI Bill of Rights. NTIA displays a relatively sophisticated understanding of the current landscape of AI regulations and voluntary frameworks, as well as some of the primary challenges facing policymakers, including that:

  • The AI value chain, including data sources, AI tools, and the relationships among developers and customers, can be complicated and impact accountability.
  • For some trustworthy AI goals, it will be difficult to harmonize standards across jurisdictions or within a standard-setting body, particularly if the goal involves contested moral and ethical judgments.
  • Various trustworthy AI goals, such as transparency and accuracy, can require tradeoffs against one another.

Considerations for AI Accountability

The RFC poses a total of 34 questions, grouped into the following categories:

  • AI Accountability Objectives: What are the purposes of AI accountability mechanisms such as certifications, audits, and assessments, whether these mechanisms can be effective without legal obligations, and the interplay/tradeoffs between various accountability mechanisms. This section specifically asks whether AI accountability mechanisms can inform people about generative AI tools’ operation and compliance with standards.
  • Existing Resources and Models: What are the current AI accountability mechanisms, the best definitions of frequently used terms, and whether lessons can be learned from accountability processes in other sectors such as cybersecurity, privacy, finance, and ESG.
  • Accountability Subjects: Where in the AI value chain and development lifecycle should accountability efforts focus, whether accountability measures should be scoped based on risk of the technology or deployment context, whether AI systems should be released with quality assurance certificates, and how accountability practices should be implemented in public sector deployment of AI systems.
  • Accountability Inputs and Transparency: What sort of records should be maintained in order to support AI accountability, whether there are obstacles to the flow of information necessary for AI accountability, and how accountability processes should address “data voids,” e.g., instances when a vendor has access to data that the firm deploying a tool cannot access.
  • Barriers to Effective Accountability: What are the most significant barriers to effective AI accountability in the private sector, whether the lack of a general federal data protection or privacy law is a barrier, whether the lack of a federal AI law is a barrier, what is the role of intellectual property rights, terms of service, and contractual obligations, and what are the costs of AI audits and assessments.
  • AI Accountability Policies: Whether government policies or regulations should be sectoral, horizontal, or a combination, whether a federal law focused on AI would be desirable, which agency or agencies should be responsible for enforcement, and what activities and incentives the government could pursue to achieve a strong AI accountability ecosystem.

DWT’s AI team regularly advises on emerging AI regulatory frameworks and will continue to monitor the development of NTIA’s AI policy proposals.



[1] Comments of NTIA Senior Advisor for Algorithmic Justice Ellen P. Goodman at the University of Pittsburgh’s Institute of Cyber Law, Policy, and Security. https://techpolicy.press/ntia-launches-ai-accountability-request-for-comment/