As widespread use of artificial intelligence (AI) in the employment sector has surged throughout the country, federal and state lawmakers have been playing catch-up in their efforts to regulate this new technology. In California, the Civil Rights Council (CRC) (formerly the Fair Employment and Housing Council) first proposed modifications over one year ago to the state's employment regulations to incorporate the use of AI in connection with employment decision-making. The CRC updated its draft AI regulation in February 2023 and expects soon to issue a notice of proposed rulemaking and start the 45-day comment period. California lawmakers have also introduced multiple bills on AI-related topics, including a bill that would require audits of the use of these tools by employers and developers. Given the scope and number of these initiatives, and the CRC's commitment to moving its rule forward, employers deploying this technology are advised to continue to monitor developments concerning the use of AI and to consider these potential developments as they determine when and how to use this technology.
CRC's Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems (ADS)
On February 10, 2023, the CRC published Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems. This document is a revision of the CRC's draft modifications initially issued in March 2022, which proposed sweeping changes to existing rules regulating employment and hiring practices in California to incorporate the use of automated-decision systems (ADS). (Previously reported by DWT here.) The CRC previously published an additional revision to the original modifications in July 2022.
In general, the CRC's proposed regulations state that it is unlawful for an employer to use selection criteria (including qualification standards, employment tests, ADS, or its "proxy" (defined in the next paragraph)) if such use has an adverse impact on, or constitutes disparate treatment of, an applicant or employee (or classes of applicants or employees) on the basis of a protected characteristic (as traditionally listed in the Fair Employment & Housing Act), unless the employer can show the selection criteria to be job-related and consistent with business necessity.
New and Revised Definitions Regarding Automated-Decision Systems: The latest modifications contain a separate section on ADS. That section expands the list of covered tasks in the definition of ADS and expressly excludes certain tools ("word processing software, spreadsheet software, and map navigation systems"). What had previously been termed "machine-learning data" is now referred to as "automated-decision system data." The new definitions include "adverse impact," which is defined as being synonymous with disparate impact; "artificial intelligence," which is defined as "[a] machine-learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions;" and "proxy," which is defined as "[a] technically neutral characteristic or category correlated with a basis protected by the Act."
Recordkeeping: The initial rule had already increased the applicable time period for retention of records from two to four years; the revised regulation expands both the scope of who must retain these records and what records must be maintained. The latest version of the regulations expands the scope of entities that must retain documents to "[a]ny person who sells or provides an automated-decision system or other selection criteria to an employer or other covered entity, or who uses an automated-decision system or other selection criteria on behalf of an employer or other covered entity." (Emphasis added.) The scope of the documents to be preserved for four years has been expanded to include "[a]ny personnel or other employment records created or received by any employer or other covered entity dealing with any employment practice and affecting any employment benefit of any applicant or employee (including all applications, personnel, membership or employment referral records or files and all automated-decision system data)."
Employer Defense: The CRC's latest version of the modifications provides employers with additional and clarifying language regarding potential defenses that would be available against a claim of unlawful use of selection criteria. In addition to defending themselves by showing that the selection criteria "is job-related for the position in question and consistent with business necessity," employers could also defend themselves by showing that "there is no less discriminatory policy or practice that serves the employer's goals as effectively as the challenged policy or practice." The CRC further provides that "[r]elevant to this inquiry is evidence of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, recency, and scope of such efforts, the results of such testing or other effort, and the response to the results." While an impact assessment, or outcome audit, is not specifically required by the initial or revised regulation, a bill working through the California Legislature, AB 331, would specifically require impact assessments for use of automated decision tools.
Assembly Bill 331: Employer Use of Automated Decision Tools
Introduced on January 30, 2023, Assembly Bill 331 proposes certain requirements and restrictions upon employer use of what it refers to as "automated decision tools" (ADT). ADT is defined in the proposed legislation as "a system or service that uses artificial intelligence and has been specifically developed and marketed to, or specifically modified to, make, or be a controlling factor in making, consequential decisions." AB 331's proposed requirements include the following:
- Deployers and developers of ADT would be required to perform an impact assessment by January 1, 2025, and annually thereafter. An impact assessment is defined as "a documented risk-based evaluation of an automated decision tool" that includes, among other things, a statement of the purpose of the ADT and its benefits, a summary of the data collected, an analysis of the potential adverse impacts on protected characteristics, and a description of how the ADT would be used by a person. Impact assessments would be required to be provided to the Civil Rights Department within 60 days of completion; failure to do so could result in an administrative fine not to exceed $10,000.
- Deployers would be required to provide notice to persons who are the subject of the consequential decisions that ADT is being used to make, at the time the ADT is used. Additionally, if technically feasible, deployers would be required to accommodate a person's request to not be subject to the ADT and to be subject to an alternative selection process or accommodation.
- Developers would be required to provide deployers with a statement regarding the intended uses of the ADT, documentation regarding any known limitations, a description of the type of data used to program or train the ADT, and a description of how the ADT was evaluated for validity.
- Deployers would be prohibited from using ADT that contributes to algorithmic discrimination.
- Deployers and developers would be required to establish, document, implement, and maintain a governance program that contains safeguards against algorithmic discrimination associated with the use of ADT.
Additional ADS and AI-Related Legislation
In addition to the proposed regulatory and legislative changes discussed above, several other ADS and AI-related bills worth tracking by employers have recently been introduced in the California Legislature:
- AB 302. Introduced on January 26, 2023, AB 302 would require the Department of Technology to conduct, by September 1, 2024, a comprehensive inventory of all high-risk ADS that have been proposed for use or are being used by state agencies.
- SB 313. Introduced on February 6, 2023, SB 313 proposes the creation of an Office of Artificial Intelligence within the Department of Technology that would oversee the use of artificial intelligence by state agencies and ensure compliance with state and federal laws and regulations.
- SB 721. Introduced on February 16, 2023, SB 721 proposes the creation of a California Interagency AI Working Group to study the implications of the usage of AI and provide the Legislature with a comprehensive report by January 1, 2025 (and every two years thereafter until 2030) regarding AI.
While the proposed regulatory changes and bills described above are not yet law, they are part of a national and state enforcement trend concerning these tools, and California seems poised to continue to assert a leadership role in regulating them. Because the requirements of these California laws may also vary from laws in other states and cities, including New York City, national employers will likely face complex decisions on how to comply effectively with conflicting laws. In addition to continually monitoring developments in California and elsewhere, employers may also want to consider now how to comply effectively with these potential requirements, including whether and how AI or ADS are used in sourcing candidates or making other employment decisions, how they select vendors, and the terms of any agreements between the employer and its vendors.
As always, DWT will continue to monitor these issues and provide updates on ADS and AI regulations as needed. In the meantime, if you have any questions about your company's compliance, please contact a member of DWT Employment Services group.
 The February 10, 2023, Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems defines ADS as "[a] computational process that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts applicants or employees. An Automated-Decision System may be derived from and/or use machine-learning, algorithms, statistics, and/or other data processing or artificial intelligence techniques."
 FEHA protects the following characteristics from discrimination: race, color, ancestry, national origin, religion, creed, age, disability (mental or physical), sex, gender (including childbirth, breastfeeding or related medical conditions), sexual orientation, gender identity, gender expression, medical condition, genetic information, marital status and military or veteran status.
 AB 331 defines deployer as "a person, partnership, state or local government agency, or corporation that uses an automated decision tool to make a consequential decision."
 AB 331 defines developer as "a person, partnership, state or local government agency, or corporation that designs, codes, or produces an automated decision tool, or substantially modifies an artificial intelligence system or service for the intended purpose of making, or being a controlling factor in making, consequential decisions, whether for its own use or for the use by a third party."
 AB 331 defines "algorithmic discrimination" as "the condition in which an automated decision tool contributes to unjustified differential treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law."