On May 17, 2024, Governor Jared Polis signed the Colorado Artificial Intelligence Act (SB 24-205) (CAIA), regulating the development, deployment, and use of artificial intelligence (AI) systems.

Summary 

  • CAIA, which will be effective February 1, 2026, will apply to developers and deployers of “high-risk AI systems” (HRAIS), defined as AI systems that make, or are a substantial factor in making, a “consequential decision” (a decision that has a material legal or similarly significant effect on the provision or denial to a consumer of, or the cost or terms of, education enrollment or an education opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, health-care services, housing, insurance, or a legal service).

  • CAIA will impose a duty of reasonable care on developers and deployers to avoid “algorithmic discrimination” in high-risk AI systems. A developer or deployer that complies with the disclosure, risk assessment, and governance requirements in the statute will be presumed to have used reasonable care to avoid algorithmic discrimination.

  • CAIA does not include a private right of action, but the Colorado Attorney General (AG) will be able to enforce the law as an unfair or deceptive trade practice, with a penalty of up to $20,000 per violation.

Key Takeaways and Scope 

  • The law is largely limited to AI systems involved in automated decision-making about consumers. Most requirements under CAIA do not apply to general-purpose AI systems, such as popular generative AI products currently used by consumers, because these are not considered high-risk AI systems. Such systems are required only to transparently disclose the use of AI.

  • CAIA provides an affirmative defense to enforcement for companies that have 1) cured violations as a result of external feedback or red teaming; and 2) complied with the latest version of the NIST AI Risk Management Framework or an equivalent framework (nationally or internationally recognized, or designated by the AG). Establishing robust AI governance programs will be crucial to compliance and can help protect against enforcement actions.

  • The law also exempts covered entities subject to the Health Insurance Portability and Accountability Act, certain financial institutions, and government contractors. 

Analysis of CAIA Duties  

Developer Duties 

The CAIA broadly requires developers of HRAIS to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.” “Algorithmic discrimination” is defined as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.” 

Developers can establish a rebuttable presumption that they complied with the “reasonable care” standard if they demonstrate their compliance with the following requirements: 

  1. Developers must make available to deployers or other developers of HRAIS: 

    • A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the HRAIS; 

    • High-level summaries of the training data for the HRAIS, known or reasonably foreseeable limitations of the HRAIS, purpose of the HRAIS, intended benefits and uses of the HRAIS, and any other information necessary to allow the deployer to comply with its own disclosure requirements; 

    • Documentation describing i) how the HRAIS was evaluated for performance and mitigation of algorithmic discrimination; ii) the data governance measures used to cover the training datasets; iii) the intended outputs of the HRAIS; iv) measures to mitigate risks; and v) how the HRAIS should be used, not used, or monitored by an individual when it is being used to make a consequential decision.

  2. Developers must also disclose, on their website or in a public use case inventory, a statement summarizing: the types of HRAIS that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and how the developer manages known or reasonably foreseeable risks arising from the development or intentional and substantial modification of HRAIS.

  3. If a developer learns that its HRAIS has been deployed and has caused, or is reasonably likely to have caused, algorithmic discrimination, or if the developer receives a credible report from a deployer that its HRAIS has caused algorithmic discrimination, it must disclose that to the AG and to all known deployers or other developers within 90 days of discovery. The AG may also request the documentation described above, which the developer of the HRAIS must provide within 90 days.

Deployer Duties

Similarly, the CAIA requires deployers of HRAIS to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.” Deployers can establish a rebuttable presumption that they meet this standard if they can demonstrate their compliance with the following requirements: 

  • Developing a risk management policy and governance program.

  • Completing an impact assessment for the HRAIS.

  • Notifying consumers when the deployer uses an HRAIS to make, or to be a substantial factor in making, a consequential decision concerning a consumer.

  • If the HRAIS is used to make a consequential decision that is adverse to the consumer, the deployer must provide the consumer with i) a statement disclosing the reasons for the decision; ii) an opportunity to correct any incorrect personal information that the HRAIS processed in making the decision; and iii) an opportunity to appeal the adverse decision.

  • Making available on their websites a statement summarizing information such as the types of HRAIS the deployer currently deploys and how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination.

  • If a deployer discovers that its HRAIS has caused algorithmic discrimination, it must disclose that to the AG within 90 days of discovery. The AG may also request the risk management policy and impact assessments described above.

Deployer businesses are exempt from the risk management, impact assessment, and website disclosure requirements if they 1) employ fewer than 50 employees and do not use their own data to train the HRAIS; and 2) make certain disclosures to consumers.


Note that several other states are considering AI legislation, which may add challenges as companies seek to develop a U.S.-wide compliance framework. Contact PRIVATECH for assistance with state-specific privacy and AI regulation compliance.

 
