On October 1, 2025, California’s new regulations on the use of artificial intelligence (AI) in the workplace will take effect. The amendments extend the non-discrimination provisions of the Fair Employment and Housing Act (FEHA) to explicitly cover automated-decision systems.
According to the announcement by the Civil Rights Department, “[a]utomated-decision systems — which may rely on algorithms or artificial intelligence — are increasingly used in employment settings to facilitate a wide range of decisions related to job applicants or employees, including with respect to recruitment, hiring, and promotion. While these tools can bring myriad benefits, they can also exacerbate existing biases and contribute to discriminatory outcomes. Whether it is a hiring tool that rejects women applicants by mimicking the existing features of a company’s male-dominated workforce or a job advertisement delivery system that reinforces gender and racial stereotypes by directing cashier ads to women and taxi jobs to Black workers, there are numerous challenges that may arise with the use of artificial intelligence in the workplace.”
What do the new regulations do? According to the announcement, the AI regulations:
- Make it clear that the use of an automated-decision system may violate California law if it harms applicants or employees based on protected characteristics, such as gender, race, or disability.
- Ensure employers and covered entities maintain employment records, including automated decision data, for a minimum of four years.
- Affirm that automated-decision system assessments, including tests, questions, or puzzle games that elicit information about a disability, may constitute an unlawful medical inquiry.
- Add definitions for key terms used in the regulations, such as “automated-decision system,” “agent,” and “proxy.”
In other words, these regulations set up a framework for employers to follow when implementing AI within their various HR processes.
More specifically, the regulations define the key terms that will be regulated, including:
- Automated-Decision System (ADS) - A computational process—including those using artificial intelligence, machine learning, or algorithms—that makes or assists in decisions about employment.
- Algorithm - A set of rules or instructions a computer follows to perform calculations or other problem-solving operations.
- Artificial Intelligence - A machine-based system that infers, from the input it receives, how to generate outputs.
- Machine Learning - A system’s ability to use and learn from its own analysis of data or experience and apply this learning automatically to future calculations or tasks.
- Automated-Decision System Data - Includes any data used in or resulting from the application of an ADS and/or used to develop or customize an ADS.
The regulated HR applications that use AI include:
- Resume Screeners
- Online Assessments and Games
- Video Interview Analysis
- Personality or Behavioral Profiling
- Targeted Job Advertising
- Third Party Data Analysis
- Automated Ranking or Scoring
The regulations make clear that employers must provide reasonable accommodations to applicants and/or employees if an ADS evaluates factors that could impact individuals based on disability, religion, or any other protected traits (e.g., facial recognition or reaction-time tests). This requirement would cover situations like the Intuit/HireVue case, in which a deaf Indigenous woman was allegedly told she was qualified for a position but, after requesting an accommodation because an AI assessment tool was used, was turned down and lost the job opportunity.
The regulations limit liability on the disability front by making personality questions impermissible only if they are “likely to elicit information about a disability.”
It should be noted that the regulations expand the definition of employer to include agents and third parties who act on behalf of employers; specifically, those who “exercise a function traditionally exercised by the employer,” thus limiting the scope to active participants in the hiring process. Further, they define an employment agency as one “undertaking, for compensation, the procurement of job applicants . . . including persons undertaking those services through the use of an automated-decision system.” Presumably, this could extend liability to vendors of tools that embed AI, such as applicant tracking systems or HRIS platforms.
In addition, employers must maintain detailed records of selection criteria, ADS data, and applicant flow logs and retain them for at least four years. These records must be available for inspection and preserved in the event of a complaint. Further, the regulations codify the Uniform Guidelines on Employee Selection Procedures (UGESP), so any employer using an ADS will need to test for bias in the selection process; doing so also serves as an affirmative defense.
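A common starting point for the kind of bias testing UGESP contemplates is the “four-fifths rule”: if one group’s selection rate is less than 80% of the highest group’s rate, the selection procedure is generally flagged for adverse-impact review. The sketch below illustrates that arithmetic in Python using hypothetical applicant counts; the group labels and numbers are illustrative assumptions, not data from any real ADS audit.

```python
def selection_rate(selected, applicants):
    """Selection rate = number selected / total applicants in a group."""
    return selected / applicants

def adverse_impact_ratio(group_rate, reference_rate):
    """Ratio of a group's selection rate to the highest group's rate.
    Under the UGESP four-fifths rule, a ratio below 0.80 is commonly
    treated as evidence of adverse impact."""
    return group_rate / reference_rate

# Hypothetical ADS screening outcomes (illustrative numbers only)
groups = {
    "group_a": {"selected": 48, "applicants": 100},
    "group_b": {"selected": 30, "applicants": 100},
}

rates = {g: selection_rate(v["selected"], v["applicants"]) for g, v in groups.items()}
reference = max(rates.values())  # highest selection rate across groups

for g, r in rates.items():
    ratio = adverse_impact_ratio(r, reference)
    flag = "review for adverse impact" if ratio < 0.80 else "passes four-fifths rule"
    print(f"{g}: rate={r:.2f}, ratio={ratio:.2f} -> {flag}")
```

With these numbers, group_b’s rate (0.30) is 62.5% of group_a’s (0.48), which falls below the 80% threshold and would be flagged. In practice, this screen is only a first pass; statistical significance testing and validation studies are also part of a UGESP-style analysis.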
HR needs to know which, if any, of its tools have ADS (AI) embedded and are being used for any type of selection, from hiring to promotions to training to compensation. HR also needs to understand the datasets the AI model is learning from and the outcomes the tool has produced. Finally, HR needs to test the model to ensure that biases are minimized. One unresolved issue is whether these regulations will apply to any ADS (AI) tool used with respect to a California resident, regardless of the employer’s location (i.e., out of state).
If you need assistance with any of the above, ASE has the experts available to review, evaluate and test your ADS for vulnerabilities. Contact Anthony Kaylin at akaylin@aseonline.org or 248-223-8012.
ASE Connect
AI Micro-Certification – ASE offers a Micro-Certification in AI that provides an overview of how to utilize artificial intelligence in the business world, including in HR and recruitment. This certification consists of 1.5 core credits (3 half-day core courses) and .5 elective credits (1 half-day elective course). Request more information here.
Source: Proskauer 8/4/25, Littler 7/30/25, DCI 7/29/25