The White House Office of Science and Technology Policy (OSTP)1 announced a plan to develop a "bill of rights" to protect against what the OSTP perceives to be potentially harmful consequences of AI, including anticipated and unanticipated risks arising from AI applications that use biometric data, such as facial recognition, voice analysis, and heart rate tracking.

According to OSTP's Eric Lander and Alondra Nelson,2 the OSTP "plans to create a bill of rights for an automated world, with input from the American public." In support of this initiative, the OSTP also issued a Request for Information (RFI) seeking information on the development and past, current, or planned use of AI-enabled biometric technologies to identify or verify individuals, or to draw inferences from an individual's emotions, actions, or mental state.

This initiative is premised on purported concerns about the risks of AI-enabled biometric technologies. In particular, Lander and Nelson wrote of potential negative consequences of AI using incomplete datasets that embed past prejudice and enable present-day discrimination, including discriminatory arrests, digital assistants not recognizing particular accents, and discriminatory impacts from mortgage approval tools, reasoning:

Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly. Codifying these ideas can help ensure that.

Thus, it appears that the OSTP intends to use this process to "codify" certain values and tenets that Lander and Nelson identify in their article. Exactly how or when that codification process would occur remains to be seen.

As explained below, the RFI focuses on collecting data on the development and use of applications utilizing biometric technologies, including current and future uses of biometric technologies to identify and verify individuals, or draw inferences from an individual's emotions, actions, or mental state. The OSTP is likely to use the data received in response to the RFI as the "record" upon which any proposed bill of rights would rely. Although this process is still at a very early stage, it could provide a basis for the adoption of additional regulatory duties for developers and users of AI and biometrics in the United States.

Proposed AI Bill of Rights Premised on Asserted Risks

The proposal to establish an AI bill of rights is based on the premise that current biometric technologies have "led to serious problems." Several alleged problems provide the basis for this proposal; according to Lander and Nelson these include datasets that fail to represent American society, technologies that raise questions about privacy and transparency, and certain deliberate abuse of biometric technologies.

Some of the examples cited include virtual assistants that do not understand Southern accents and facial recognition technology that leads to wrongful, discriminatory arrests. Another example refers to healthcare algorithms that discount the severity of kidney disease in African Americans, ultimately preventing people from receiving kidney transplants. As to abuse of biometric technologies, Lander and Nelson refer to some autocracies using facial recognition technologies as a tool of state-sponsored oppression, division, and discrimination.

Lander and Nelson assert that such issues arise, in part, because AI developers are not using "appropriate" data sets and do not conduct comprehensive audits. According to Lander and Nelson, these problems are exacerbated by the lack of diverse perspectives among developers: those around the table who can and should anticipate and fix problems before products are used, or who should terminate product development when products cannot be fixed.

Likely Framework for Addressing Identified Issues

Generally, the AI bill of rights is meant to "clarify the rights and freedoms" of individuals using, or subject to, data-driven biometric technologies:

Throughout history we have had to reinterpret, reaffirm, and periodically expand our rights. We should clarify the rights and freedoms we expect data-driven technologies to respect. We need a bill of rights to guard against the powerful technologies we have created.

Potential affirmative rights are yet to be determined, but Lander and Nelson suggest the following:

  • (i)  Right to know when and how AI is influencing a decision that affects an individual's civil rights and civil liberties;
  • (ii)  Freedom from being subjected to AI that has not been "carefully" audited to ensure that it is accurate and unbiased;
  • (iii)  Right to be secure in systems being trained on "sufficiently representative" datasets;
  • (iv)  Freedom from pervasive or discriminatory surveillance and monitoring in the home, community, and workplace; and
  • (v)  Right to "meaningful recourse" should the use of an algorithm result in harm.

Significantly, these potential new rights suggest that OSTP will look closely at certain obligations around transparency and explainability, issues that other regulators are also examining. In addition, potential audits of AI systems may be on the table, as are opt-out rights.

Finally, the OSTP appears to be considering possible "recourse" or redress against organizations using AI to make decisions or take certain actions. If codified, any of these ideas alone would introduce significant new obligations; collectively, they could significantly recast the regulatory duties for developers and users of AI systems, potentially introducing considerable challenges to AI technology development.

While there is no clear indication as to how these rights would be codified or enforced, Lander and Nelson suggest several possibilities, including using federal government contracting power and enforcing these rights as conditions of securing contracts. Other possibilities discussed include requiring federal contractors to use technology that adheres to these principles, or adopting new laws and regulations to formally codify these concepts.

Proposal Represents Latest Effort by White House to Develop AI Framework With Public Engagement

OSTP's proposal is the latest attempt by the White House and Executive Branch to develop a framework for governing AI and biometric technology.3 At the local level, a New York City biometrics law became effective in July 2021. In 2020, the Trump Administration established the American AI Initiative by Executive Order and issued final Guidance for Regulation of Artificial Intelligence Applications.4

Now, OSTP invites the public to weigh in by focusing the RFI on AI-enabled biometric technologies used to identify people and infer attributes, including facial recognition and the recognition and analysis of voice, gait, and other physical attributes. OSTP believes that these technologies often represent the vanguard of AI-enabled biometric applications that impact the general public.

OSTP invites feedback from stakeholders across government, academia, civil society, the private sector, and the general public, including parties developing, acquiring, or using biometric technologies, and communities impacted by the use of such technologies. Impacted communities include individuals whose faces have been scanned before boarding a plane or whose employers issued fitness trackers to monitor employee fatigue, teachers whose software shows which students are not paying attention, and anyone else who has interacted with, built, or used these technologies.

RFI Puts Biometric-based Technology in the Spotlight

The RFI seeks to understand how facial and voice recognition systems, gait recognition, inference, and keystroke analysis are used across a variety of contexts, including employment, education, and advertising. The RFI specifically seeks feedback in these six areas:

  • 1.  How biometric information is or may be used for individual recognition or inferences, including the goals of such use, the source of data used, and any impacted communities;
  • 2.  Procedures for and results of scientific validation of biometric technologies. OSTP also welcomes information about user research, impact assessment, and socio-contextual evaluation;
  • 3.  Security considerations associated with biometric technology, such as how technology security is validated and any known vulnerabilities in the technology or to the underlying data. Examples of efficacious security safeguards are encouraged;
  • 4.  Proven and potential harms of biometric technology, including harms due to potential issues with the validity of systems used to generate biometric data or inferences, disparate impact on various demographic groups, selective profiling outcomes, misapplication, or misuse of technology, and privacy risks;
  • 5.  Benefits of biometric technology, including comparisons with existing systems and potential cost, consistency, and reliability improvements; and
  • 6.  Governance programs, practices, or procedures applicable to the use of biometric technology, including those related to (a) stakeholder engagement in systems design, (b) technology trials, (c) data collection, management, and storage, (d) safeguards for technology use, (e) auditing and post-deployment impact assessments, (f) use of biometric technologies in conjunction with other surveillance technologies, (g) admissibility in court of biometric information generated or augmented by AI systems, and (h) public transparency practices.

Public Comments Accepted Until January 2022

Comments are due January 15, 2022. Interested parties should consider submitting comments to ensure that OSTP receives broad-ranging feedback and has a sufficient basis to make informed, risk-based decisions, especially in crafting an AI and biometrics "bill of rights."


This article was originally featured as a privacy and security advisory on DWT.com on October 21, 2021, and also appeared in the AI and Faith December 2021 Newsletter. Our editors have chosen to include this article here for its coinciding subject matter.


FOOTNOTES

1  The OSTP acts as an advisor to "the President and others within the Executive Office of the President on the effects of science and technology on domestic and international affairs" and leads "interagency efforts to develop and implement sound science and technology policies."
2  On June 2, 2021, Eric Lander was sworn in as the current director of the OSTP, colloquially known as the Science Advisor to the President. Alondra Nelson is the Deputy Director for Science and Society within the OSTP.
3  In addition to AI framework development efforts, the OSTP is working to develop a shared research infrastructure to enable collaboration across scientific disciplines. In July 2021, the OSTP and the National Science Foundation (NSF) issued a research-focused "Request for Information (RFI) on an Implementation Plan for a National Artificial Intelligence Research Resource." The comment period ended October 1, 2021, and the input received will "inform the work of the National Artificial Intelligence Research Resource (NAIRR) Task Force."
4  In 2016, the Obama Administration released two reports, Preparing for the Future of Artificial Intelligence and National Artificial Intelligence Research and Development Strategic Plan, and issued a Request for Information (RFI), which focused broadly on AI policy but emphasized "safety and control issues for AI" and "the social and economic implications of AI" – similar to the Biden Administration RFI released this month.