Colorado’s governor signed the Artificial Intelligence Act, Senate Bill (SB) 24-205, on May 17, 2024. This comprehensive AI law follows a string of related laws and guidance, including the Equal Employment Opportunity Commission’s technical assistance documents, the Department of Labor’s recent guidance, New York City’s automated employment decision tools law, Tennessee’s regulation of deepfakes, Illinois’ Artificial Intelligence Video Interview Act, and Maryland’s facial recognition law. The law takes effect February 1, 2026.
With this law and others, pundits have noted that employers will have difficulty avoiding liability for adverse results simply because a particular piece of technology produced them. Regardless, employers need to test AI systems before implementation, against the datasets on which the systems are based, to determine whether they produce adverse consequences. Unfortunately, in most organizations these systems are added without HR’s knowledge, and once HR is told, AI is generally offered on an opt-out rather than an opt-in basis. HR needs to talk to its vendors to determine whether AI is implemented in any module.
It appears that the governor signed the bill reluctantly over the objection of various business groups. Specifically, the governor expressed concern “about the impact this law may have on an industry that is fueling critical technological advancements” and observed how regulation – like SB205 – “applied at the state level in a patchwork across the country can have the effect to tamper innovation and deter competition in an open market.”
The law is very broad and applies beyond the scope of basic employment practices. It applies to developers and deployers of “high-risk” AI systems, meaning systems that can make or significantly influence “consequential decisions” in areas such as employment, housing, credit, education, and healthcare. The term “consequential decision” is broadly defined in the employment context and includes decisions that have a “material legal or similarly significant effect on the provision or denial to any consumer of … employment or an employment opportunity.” As one commentator stated, this law could potentially lead to claims involving performance management, disciplinary action, or even workplace surveillance.
The law is also broad in its coverage, reaching not only applicants and employees but also Colorado consumers in the ordinary sense of the word.
Under the law, users (called deployers) of AI systems must provide notice to applicants of why an adverse consequential decision may be made by a high-risk AI system, including the degree and manner in which the AI contributed to the decision and the type and source of data processed in making it, and must provide an opportunity to correct any personal data used. Deployers are further required to post a public statement on their website summarizing the types of high-risk AI systems they currently deploy, how they manage risks of algorithmic discrimination, and the nature, source, and extent of information collected and used.
Under the law, there is a rebuttable presumption that deployers used reasonable care if they comply with the following:
- Implement a risk management policy and program for high-risk AI systems;
- Complete an impact assessment of high-risk AI systems;
- Notify consumers of specified items if the high-risk systems make decisions about a consumer;
- Make a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys; and
- Disclose to the attorney general the discovery of algorithmic discrimination within 90 days of discovery.
And there are reasonable exemptions to the law. A deployer is exempt if it:
- Employs fewer than 50 full-time equivalent employees;
- Does not use its own data to train the AI system;
- Uses the AI system for the intended uses disclosed by the developer; and
- Makes certain information related to the impact assessment available to consumers.
Finally, there is no private cause of action; only the Colorado Attorney General can enforce the law.
Time will tell whether this law will be copied in other jurisdictions, reduced in impact as the governor would prefer, or put a stop to much of the use of AI in the HR workplace. It is difficult to identify exactly which HR actions fall under the law; recruitment clearly does, but what about assignments to work schedules or other matters of that nature? Only the courts will likely tell us. It is possible Congress could pass a comprehensive law that would preempt it, but as HR knows, the patchwork of laws across jurisdictions keeps growing, creating a variety of risk exposures that HR, often without the necessary resources, must plan to prevent. It would therefore be wise to work with legal counsel to inventory risks and prepare policies, procedures, and training.
Sources: Seyfarth Shaw, May 23, 2024; Jackson Lewis, May 20, 2024; HR Dive, May 20, 2024; Law360, May 22, 2024