The California Fair Employment and Housing Council (FEHC) recently took a major step towards regulating the use of artificial intelligence (AI) and machine learning (ML) in connection with employment decision-making. On March 15, 2022, the FEHC published Draft Modifications to Employment Regulations Regarding Automated-Decision Systems, which would specifically incorporate the use of "automated-decision systems" into existing rules regulating employment and hiring practices in California.

The draft regulations seek to make unlawful the use of automated-decision systems that "screen out or tend to screen out" applicants or employees (or classes of applicants or employees) on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity. The draft regulations also contain significant and burdensome recordkeeping requirements.

Before the proposed regulations can take effect, they will be subject to a 45-day public comment period (which has not yet commenced), after which the FEHC can move toward a final rulemaking.

"Automated-Decision Systems" are defined broadly

The draft regulations define "Automated-Decision Systems" broadly as "[a] computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants."

The draft regulations provide the following examples of Automated-Decision Systems:

  • Algorithms that screen resumes for particular terms or patterns;
  • Algorithms that employ face and/or voice recognition to analyze facial expressions, word choices, and voices;
  • Algorithms that employ gamified testing used to make predictive assessments about an employee or applicant, or to measure characteristics including but not limited to dexterity, reaction-time, or other physical or mental abilities or characteristics; and
  • Algorithms that employ online tests meant to measure personality traits, aptitudes, cognitive abilities, and/or cultural fit.

Similarly, "algorithm" is broadly defined as "[a] process or set of rules or instructions, typically used by a computer, to make a calculation, solve a problem, or render a decision."

Notably, the scope of this definition is quite broad and will likely cover certain applications or systems that are only tangentially related to employment decisions. In particular, the phrase "or facilitates human decision making" is ambiguous: a broad reading could sweep in technologies designed to aid human decision-making in only small or subtle ways.

The draft regulations would make it unlawful for any covered entity to use Automated-Decision Systems that "screen out or tend to screen out" applicants or employees on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity

The draft regulations would apply to employer (and covered third-party) decision-making throughout the employment lifecycle, from pre-employment recruitment and screening, through employment decisions including pay, advancement, discipline, and separation of employment. The draft regulations would apply the limitations on Automated-Decision Systems to the characteristics already protected under California law.

  • For example, an Automated-Decision System that measures an applicant's reaction time may unlawfully screen out individuals with certain disabilities. Unless an affirmative defense applies (e.g., an employer demonstrates that a quick reaction time while using an electronic device is job-related and consistent with business necessity), employment actions that are based on decisions made or facilitated by that Automated-Decision System may constitute unlawful discrimination.
  • Additionally, an Automated-Decision System that analyzes an applicant's tone or facial expressions during a video-recorded interview may unlawfully screen out individuals based on race, national origin, gender, or a number of other protected characteristics. Again, unless an affirmative defense applies to such use, employment actions that are based on decisions made or facilitated by that Automated-Decision System may constitute unlawful discrimination.

The precise scope and reach of the draft regulations are ambiguous in that the prohibition covers systems that screen out "or tend to screen out" applicants or employees on the basis of a protected characteristic. The proposed regulations offer no clear explanation of the phrase "tend to screen out," and the inherent ambiguity of that language presents a real risk that the regulations will extend to systems or processes that are not actually involved in screening applicants or employees on the basis of a protected characteristic.
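
The draft regulations do not specify how "tend to screen out" would be measured, but disparate-impact analysis offers one way employers might assess their own systems. The sketch below is a hypothetical illustration: it borrows the four-fifths (80%) threshold from federal EEOC adverse-impact guidance, which the draft regulations do not reference, to flag a system that selects members of one group far less often than another, even when the protected characteristic is never an input to the algorithm:

```python
# Hypothetical sketch: flag a possible "tend to screen out" effect by comparing
# selection rates across groups. The 4/5 (80%) threshold comes from federal
# EEOC adverse-impact guidance, not from the FEHC draft regulations.

def selection_rates(outcomes: dict[str, list[bool]]) -> dict[str, float]:
    """outcomes maps group label -> list of pass/fail results per applicant."""
    return {group: sum(results) / len(results) for group, results in outcomes.items()}

def adverse_impact(outcomes: dict[str, list[bool]], threshold: float = 0.8) -> bool:
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    # Flag if any group's selection rate falls below 80% of the highest rate.
    return any(rate < threshold * highest for rate in rates.values())

# Example: group B passes at one-third the rate of group A -> flagged.
results = {"group_a": [True, True, True, False], "group_b": [True, False, False, False]}
print(adverse_impact(results))  # True
```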

The draft regulations apply not just to employers, but also to "employment agencies," which could include vendors that provide AI/ML technologies to employers in connection with making employment decisions

The draft regulations apply not just to employers, but also to "covered entities," which include any "employment agency, labor organization[,] or apprenticeship training program." Notably, "employment agency" is defined to include, but is not limited to, "any person that provides automated-decision-making systems or services involving the administration or use of those systems on an employer's behalf."

Therefore, any third-party vendor that develops AI/ML technologies and sells those systems to employers for use in employment decisions is potentially liable if its automated-decision system screens out or tends to screen out an applicant or employee based on a protected characteristic.

The draft regulations require significant recordkeeping

Covered entities are required to maintain certain personnel or other employment records affecting any employment benefit or any applicant or employee. Under the FEHC's draft regulations, the required retention period for those records would increase from two years to four years. And, as relevant here, those records would include "machine-learning data."

Machine-learning data includes "all data used in the process of developing and/or applying machine-learning algorithms that are used as part of an automated-decision system." That definition expressly includes datasets used to train an algorithm. It also includes data provided by individual applicants or employees. And it includes the data produced from the application of an automated-decision system operation (i.e., the output from the algorithm).

Given the nature of algorithms and machine learning, that definition of machine-learning data could require an employer or vendor to preserve not just the data provided to an algorithm during the preceding four years, but all data (including training datasets) ever provided to that algorithm, for a period extending four years after the algorithm's last use.
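
In practice, that reading would push employers and vendors toward a retention scheme keyed to an algorithm's last use rather than a fixed lookback window. The sketch below shows what such a retention record might capture (the field names and retention calculation are assumptions for illustration; the draft regulations do not prescribe any schema):

```python
# Hypothetical retention record for "machine-learning data" as the draft
# regulations define it: training datasets, applicant-provided inputs, and
# system outputs, retained until four years after the algorithm's last use.

from dataclasses import dataclass, field
from datetime import date, timedelta

RETENTION = timedelta(days=4 * 365)  # assumed four-year retention period

@dataclass
class MLDataRecord:
    algorithm_id: str
    training_datasets: list[str]   # references to datasets used to train the algorithm
    applicant_inputs: list[str]    # data provided by individual applicants/employees
    system_outputs: list[str]      # decisions or scores the system produced
    last_used: date = field(default_factory=date.today)

    def retain_until(self) -> date:
        # Retention runs from the algorithm's LAST use, not from a fixed lookback.
        return self.last_used + RETENTION

record = MLDataRecord("resume-screen-v2", ["train_2020.csv"], ["resume_123"], ["advance"])
print(record.retain_until())
```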

The draft regulations add that any person who engages in the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system, to an employer or other covered entity must maintain records of "the assessment criteria used by the automated-decision system for each such employer or covered entity to whom the automated-decision system is provided."

Additionally, the draft regulations would add causes of action for aiding and abetting against a third party that provides unlawful assistance, solicitation, encouragement, or advertising by advertising, selling, providing, or using an automated-decision system that limits, screens out, or otherwise unlawfully discriminates against applicants or employees based on protected characteristics.

Conclusion

The draft rulemaking is still in a public workshop phase, after which it will be subject to a 45-day public comment period, and it may undergo changes prior to its final implementation. Although the formal comment period has not yet opened, interested parties may submit comments now if desired.

Considering what we know about the potential for unintended bias in AI/ML, employers cannot simply assume that an automated-decision system produces objective or bias-free outcomes. Therefore, California employers are advised to:

  • Be aware of where and how automated-decision systems are used in connection with employment decision-making to prepare for these potential new regulations;
  • Strive to understand the specific inputs to and assessments made by the algorithms that underpin automated-decision systems;
  • Be ready to demonstrate why the results (of automated-decision systems that screen out or tend to screen out applicants or employees based on protected characteristics) are tied to a job-related purpose and are consistent with business necessity; and
  • Review agreements with vendors that provide automated-decision systems.

This article was originally featured as an employment services advisory on DWT.com on April 5, 2022. Our editors have chosen to feature this article here because of its closely related subject matter.