New York City has issued the first guidance related to its new "AI audit" law that will require employers and employment agencies to conduct bias audits of automated decision-making technologies used to evaluate job candidates or employees. Although the law will take effect January 1, 2023, there are still many unanswered questions about how the law will be interpreted and applied.
At least one of those questions has been answered by the Department of Consumer and Worker Protection's (DCWP) publication of new rules expanding on how the agency will apply statutory penalties for violations of the new law. Pursuant to § 6-81 "Automated Employment Decision Tools Penalty Schedule," each day an employer uses an automated employment decision tool in violation of N.Y.C. Admin Code § 20-871(a) counts as a separate violation. Additionally, an employer's failure to provide notice to a candidate or employee in violation of N.Y.C. Admin Code § 20-871(b) constitutes a separate (also daily) violation. Further, the DCWP rules make clear that the agency will view potential violations quite broadly, explaining that unless otherwise specified, "the penalties set forth for each section of law or rule shall also apply to all subdivisions, paragraphs, subparagraphs, clauses, items or any other provision contained therein."
The following is the penalty schedule promulgated by DCWP.
| Citation | Violation Description | First Violation | First Default | Second Violation | Second Default | Third and Subsequent Violation | Third and Subsequent Default |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Admin Code § 20-871(a) | Failure to comply with requirements for use of an automated employment decision tool | | | | | | |
| Admin Code § 20-871(b) | Failure to comply with notice requirements related to automated employment decision tools | | | | | | |
While the penalty schedule provides some new information about the City's planned enforcement of the law, many substantive questions remain about the law's practical implications for employers and developers of automated employment decision tools. For example, the law requires "an impartial evaluation by an independent auditor" that assesses the AI employment decision tool's "disparate impact" on people of a particular gender or race/ethnicity. The rules do not address who would qualify as an "independent auditor," what standard applies in determining "disparate impact," or what populations and demographic data should serve as the basis for the impact analysis.
We anticipate further (and more substantive) guidance in the coming months and will post updates.
* Ardie Ermac, a rising 3rd year law student at Seattle University School of Law, is a 2022 Summer Associate at DWT.
This article was originally featured as an artificial intelligence advisory on DWT.com on July 21, 2022. Our editors have chosen to feature it here because its subject matter coincides with topics covered in this publication.