As momentum builds to address race-based injustices in America, the National Institute of Standards and Technology (NIST) last week announced a workshop focused on understanding and addressing bias in Artificial Intelligence (AI) systems. The event will bring together members of the public and private sectors to seek consensus on what 'bias' means in the context of AI and how to measure it.

NIST believes that finding common ground on these questions 'will lay important groundwork for upcoming efforts in NIST's AI work more broadly, including the development of standards and recommendations for achieving trustworthy AI.' The workshop will be held virtually on August 18, 2020, and organizations looking to take concrete actions to reduce biases based on race, ethnicity, gender, sexuality, and other protected characteristics in their products should consider participating.

Algorithms Are Only as Objective as Their Data

The benefits and utility of AI are now well established: systems and processes that leverage AI and machine learning can enhance efficiency, increase output, deliver insights, and much more. Using AI to make or facilitate decisions also gives organizations an opportunity to eliminate or reduce the explicit bias that can arise when humans make those decisions, but an algorithm is only as objective as the data it is trained on, and it will reflect any prejudices embedded in that data.

Uncritically implementing facial recognition AI, for example, can produce high error rates for racial minorities if nearly all of the faces in the training data were white. In other contexts, AI can unwittingly reproduce prejudices against women, members of the LGBTQ community, or other marginalized groups. And because AI systems are not always transparent (some do not explain the reasons for their decisions), it can be difficult to detect biases that stem from incomplete or inaccurate training data.
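One straightforward way to surface this kind of skew, sketched below on the assumption that a labeled test set tagged with demographic groups is available (the model, test records, and group labels here are hypothetical placeholders, not anything NIST or a regulator has prescribed), is simply to compare the model's error rate group by group:

    # Minimal sketch of a per-group error-rate audit for a face-recognition model.
    # The model, test records, and group labels are hypothetical placeholders.
    from collections import defaultdict

    def error_rates_by_group(records, predict):
        """records: iterable of (image, true_identity, group) tuples.
        predict: a function mapping an image to a predicted identity."""
        errors = defaultdict(int)
        totals = defaultdict(int)
        for image, true_identity, group in records:
            totals[group] += 1
            if predict(image) != true_identity:
                errors[group] += 1
        return {group: errors[group] / totals[group] for group in totals}

    # A wide gap between groups (say, 1% error for one group and 20% for another)
    # is the kind of skew that unrepresentative training data can produce.

A check like this only reveals a disparity; deciding how large a gap counts as 'bias' is exactly the sort of question the NIST workshop is meant to address.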

The potential for bias in AI systems has become a significant concern in recent years, recognized by privacy advocates, academics, and many private companies. Some states and the federal government have also addressed the issue in a variety of ways:

  • Late last year, New York City issued a detailed report recommending a number of policies (some of which have already been adopted) regarding the city's own use of AI, including the development of criteria to identify, evaluate, and remediate uses of AI that have a disproportionate impact based on age, race, gender, religion, or other protected attributes.
  • In March, Washington passed a law regulating the use of facial recognition technology by state and local government agencies. The law, which takes effect July 1, 2021, requires agencies to prepare a detailed 'accountability report' before using such technology, including information about its potential impacts on civil rights and liberties as well as 'potential disparate impacts' on marginalized communities.
  • An Illinois law went into effect this year that focuses specifically on the use of AI to evaluate job applicants' video interviews, requiring employers to take certain actions related to notice, transparency, and consent when using such tools.
  • At the federal level, the Office of Management and Budget's 'Guidance for Regulation of Artificial Intelligence Applications' released in January recommends that federal agencies consider issuing regulations that could require some companies to mitigate potential biases in their AI systems—and to transparently disclose what steps were taken to do so.

What Can NIST Add?

Given the plethora of government and private-sector efforts to address bias in AI, what might NIST's role be? As its name suggests, NIST specializes in creating standards, and the announcement of the workshop this August states that it will focus on (1) how to define 'bias' in the context of AI and (2) how to measure such bias.

Making progress on the definition of bias in AI would be a significant achievement since there is so little consensus regarding the meaning of many terms in this field. For example, New York City's AI report uses the term 'disproportionate impact' without defining it; Washington State's legislation similarly uses the terms 'bias' and 'disparate impact' without specifying their meaning; and Illinois's Artificial Intelligence Video Interview Act does not even define 'artificial intelligence.'
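As a point of reference for what a measurement standard might look like, employment law has long relied on the EEOC's 'four-fifths' rule of thumb, which compares selection rates between groups. The sketch below illustrates that calculation only as an example of the kind of metric that could be standardized; it is not a definition adopted by NIST or by the New York, Washington, or Illinois measures discussed above.

    # Illustrative calculation of one familiar 'disparate impact' measure: the ratio
    # of selection rates between two groups, per the EEOC's four-fifths rule of thumb.
    # This is an example only, not a definition adopted by NIST or by any of the
    # state and local measures discussed in this article.

    def selection_rate(outcomes):
        """outcomes: list of booleans, True where the person received the favorable outcome."""
        return sum(outcomes) / len(outcomes)

    def disparate_impact_ratio(group_a, group_b):
        """Ratio of the lower selection rate to the higher one.
        Values below 0.8 traditionally trigger scrutiny under the four-fifths rule."""
        rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    # Example: 30 of 100 applicants advanced in one group, 18 of 100 in another.
    ratio = disparate_impact_ratio([True] * 30 + [False] * 70,
                                   [True] * 18 + [False] * 82)
    print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.60, below the 0.8 threshold

Whether a threshold of this kind, or some other metric entirely, is appropriate for AI systems is precisely what a common definition of 'bias' would need to settle.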

NIST views this workshop as one step in its ongoing work on AI issues. As noted above, one of the long-term goals mentioned in the announcement is to lay important groundwork for upcoming efforts in NIST's AI work more broadly, including the 'development of standards and recommendations for achieving trustworthy AI.'

This language suggests that NIST may hope eventually to produce a framework for organizations to use to mitigate bias in their AI systems, similar to the highly influential NIST Cybersecurity Framework. Ideally, NIST could produce another framework for developing 'trustworthy AI' that is flexible enough to be implemented by both startups and large multinational corporations; imaginative enough to remain relevant in the rapidly changing world of AI for years to come; and comprehensive enough that state, local, and federal governments do not feel the need to pass significant additional top-down laws or regulations in this space.

Participation Needed

The NIST Cybersecurity Framework was developed with significant feedback from industry, and many organizations now voluntarily comply with it due to its strong reputation and flexibility.

Companies that want to provide input during NIST's process of developing standards and recommendations related to bias in AI (whatever form those may eventually take) should consider participating in the Bias in AI Workshop this August 18. Attending the workshop can also help organizations ensure that they maintain their commitment to fighting prejudice even after injustices based on race, gender, or other protected characteristics fade from the front page.