Colorado just became the first U.S. state to pass a law (Senate Bill 24-205, “SB 24-205” or the “CAIA”) regulating consumer harms arising out of artificial intelligence (“AI”). While the CAIA will not take effect until February 2026, it is part of a growing regulatory trend in the U.S., including, most notably, the White House’s guidance on “Algorithmic Discrimination Protections” published at the end of 2023.
Similar to the White House’s guidance, the CAIA defines “Algorithmic Discrimination” as “any condition in which the use of an Artificial Intelligence System… results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their… classification protected under the laws of [Colorado] or federal law” such as national origin, race, religion, or sex.
The CAIA requires Developers of high-risk Artificial Intelligence Systems (“high-risk systems”) to use reasonable care to avoid algorithmic discrimination. Per the CAIA, there is a rebuttable presumption that a Developer used reasonable care if the Developer maintains compliance documentation including, but not limited to:
- Making available a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk system;
- Making available the information and documentation necessary to complete an impact assessment of the high-risk system and to assist in understanding its outputs and monitoring its performance for risks of algorithmic discrimination;
- Making a publicly available statement summarizing the types of high-risk systems that the Developer has developed or modified, and how the Developer manages foreseeable risks of algorithmic discrimination; and
- Disclosing to the Colorado attorney general and applicable Deployers any known or reasonably foreseeable risk of algorithmic discrimination.
Additionally, the CAIA requires Deployers of high-risk systems, like Developers, to use reasonable care to avoid algorithmic discrimination. Per the CAIA, there is a rebuttable presumption that a Deployer used reasonable care if the Deployer takes the following steps, including, but not limited to:
- Implementing a risk management policy and program for the high-risk system;
- Completing an impact assessment and annually reviewing the deployment of the high-risk system;
- Notifying consumers of specified items if the high-risk system makes a consequential decision concerning a consumer; and
- Providing consumers with an opportunity to 1) correct any incorrect personal data that a high-risk system processed in making a consequential decision or 2) appeal such a decision.
Deployers must also disclose any Artificial Intelligence System intended to interact with consumers, unless it would be obvious to a reasonable person that they are interacting with such a system.
Finally, under the CAIA, the Colorado attorney general has exclusive enforcement authority. However, the CAIA provides for affirmative defenses, for which the Developer or Deployer bears the burden of demonstrating that the requirements are satisfied. An affirmative defense is available if the Developer or Deployer:
- discovers and cures a violation as a result of:
- feedback from other deployers or users;
- adversarial testing or red teaming; or
- an internal review process; and
- is otherwise in compliance with:
- the latest version of the “Artificial Intelligence Risk Management Framework” published by NIST;
- another nationally or internationally recognized risk management framework, if the standards are substantially equivalent to or more stringent than the requirements of the CAIA; or
- any risk management framework that the Colorado attorney general, in the attorney general’s discretion, may designate and, if designated, shall publicly disseminate.
Key Takeaways:
- Holistic and Comprehensive Approach to AI Regulation: SB 24-205 establishes a legal framework in Colorado to mitigate consumer harm and target algorithmic discrimination, ensuring AI systems do not unjustly impact individuals based on protected characteristics. Because Colorado is the first U.S. state to enact such comprehensive AI legislation, other states are likely to follow suit. While most of these laws may share a common framework, different compliance schemes may also develop.
- Defined Responsibilities for AI Stakeholders: The CAIA imposes specific obligations on Developers and Deployers of high-risk AI systems. Developers must document foreseeable risks and provide the materials needed for impact assessments, while Deployers are required to implement risk management policies, perform annual reviews, and offer consumer protections such as notifications and appeals for adverse decisions.
- Enforcement and Compliance: The Colorado attorney general has exclusive authority to enforce the CAIA. However, the CAIA provides affirmative defenses where Developers and Deployers comply with a nationally or internationally recognized AI risk management framework and promptly identify and cure violations, emphasizing proactive and continuous risk mitigation efforts.