New York City moved closer to adopting formal rules governing the city’s use of algorithms enabled by Artificial Intelligence (AI) and Machine Learning (ML) following the release of a task force report on the city’s use of automated decision systems (ADS). A product of more than 18 months’ work, the report makes it clear that AI, ML, and ADS exist in governments’ present and near future, not in science fiction.

The report was mandated by Local Law 49 of 2017 (LL49), which recognized the city’s use of AI/ML-enabled ADS on a range of issues, and directed the development of criteria to identify, evaluate and remediate ADS decisions that disproportionately impact persons based upon age, race, gender, religion and other protected attributes.

If implemented, the task force recommendations could become a model for how government agencies ensure that these systems are explainable, non-biased, and fair. The foremost recommendation was to streamline and centralize decision making. The mayor’s office has already acted on this recommendation by creating the Algorithms Management and Policy Officer (AMPO), who will be responsible for implementing many of the task force recommendations.

Education, Transparency, and Implicit Bias

The city should establish a strong public education and transparency program, the report says. The city should not only explain in plain language what algorithmic decision making currently exists but also create processes by which individuals can request more information about the decisions made by those systems, learn why those decisions were made, and potentially challenge those decisions.

According to the report, a major AMPO focus should be the risk of disproportionate impact on groups on the basis of protected characteristics.1 The AMPO should create internal processes to assess any ADS for these biases. These processes should include guidelines for systematic ADS review, which should itself include a description of the ADS.

Key Questions for AMPO

Describing evolving technologies is difficult, and the report acknowledges as much by investing significant time and thought in a descriptive process that accounts for:

  1. The task for which the ADS tool is intended;
  2. Whether the ADS tool is intended to be part of a decision making process or simply a pre-decisional exploratory tool;
  3. What specific AI/ML technologies are being deployed in the ADS; and
  4. The extent to which the ADS employs personally identifiable information.

Through these questions, the AMPO should begin to articulate the nature of the ADS even in a fast-changing world. With that description in hand, the AMPO should endeavor to articulate: the task that the ADS performs; the urgency, benefits, efficiencies, or cost-savings of the ADS; and the impact of the ADS upon personal liberty or financial interests across time.

Duty to Address Biased Outcomes

However, a detailed description of the ADS does not appear out of thin air. The report asks the AMPO to “develop a process for responding to instances of negative disproportionate impact on the basis of protected characteristics.” These protocols should be triggered when either of the assessments described above concludes that there may be an unintended or unjustifiable disproportionate impact or harm upon any individual, group, or community.

While the content of the protocols is left largely unexplored, the report expressly states that they should include a prompt meeting of New York officials and agency personnel to develop a plan to halt or minimize the negative effects of the ADS.

Public Inquiries, Accountability, and Justification

Next, the AMPO should provide an explanation of ADS. Among other ideas, the report suggests that ADS information should be integrated into existing informational channels for city projects. The AMPO should also develop guidelines on how to respond to public inquiries about ADS, including:

  • How much to disclose about the ADS;
  • How to respond internally to public challenges or questions about ADS; and
  • What mechanisms should exist to permit public challenges to ADS-related decisions by the city.

Finally, the AMPO should help to establish a single point of contact for all specific ADS inquiries, supporting the report’s stated goal that the AMPO be the one place to direct questions and challenges about the nature of ADS use in New York City.

The individual New York City sub-agencies advocating for a specific ADS must conduct an internal assessment. In this manner, the AMPO can receive an insider’s view of the cost-benefits and exigencies related to a specific ADS. The AMPO must also provide opportunities for impacted communities and outside experts to assess a given ADS.

While this approach can foster the inclusion of disparate social and academic communities’ concerns about a particular ADS, the actual implementation is left intentionally vague. Every ADS and its related use will be different.

A Starting Point for Addressing Impacts of AI/ML?

We may currently find ourselves on the cresting wave of AI/ML use. This New York City AMPO framework provides a very interesting starting point for other governmental agencies that will be tasked with evaluating the benefits and harms connected to each potential AI/ML use. Perhaps most important, the report acknowledges that not all use cases will be created equal.

Instead, each government will be well served to engage in a disciplined, inclusive discussion about the costs and benefits of each AI/ML use. When unintended consequences can be discussed in advance of implementation, the technology will be given an opportunity to live up to its promise while still protecting everyone’s civil liberties and financial interests.


1  The report itself does not define “protected characteristics.” However, LL49 lists characteristics for consideration of possible “disproportionate impact” as “age, race, creed, color, religion, national origin, gender, disability, marital status, partnership status, caregiver status, sexual orientation, alienage [and] citizenship status.”