On Wednesday, May 8, the Colorado state legislature passed SB 205, a bill focused on preventing the discriminatory effects of high-risk artificial intelligence (AI). Although purportedly narrow, the bill would, if signed into law by the Governor of Colorado, make Colorado the first state to pass a broad AI law imposing substantive requirements on businesses that develop and use AI systems. Colorado could thus set a baseline for the steps businesses must take to protect consumers from problematic uses of AI, even beyond high-risk systems with the potential for discriminatory effects. Also notable is the bill's general AI disclosure requirement for any business that uses AI to interact with consumers.
The passage of SB 205 reflects the growing desire to implement guardrails to curtail the potential harms that AI models present. For example, the European Parliament recently adopted the Artificial Intelligence Act, the world’s first comprehensive horizontal legal framework for AI. Similarly, officials from the Federal Trade Commission, including Chair Lina Khan, have voiced their plans to leverage the commission’s existing authority to minimize the harmful effects of AI.
This post summarizes the history of SB 205 and the bill’s requirements, and provides a look ahead at the potential enactment of the bill. The text of SB 205 is available here. For more updates and analyses of current developments in AI, data privacy and cybersecurity, please subscribe to the WilmerHale Privacy and Cybersecurity Law blog.
Background
SB 205 was first introduced in the Colorado Senate on April 10, 2024. At the time, the bill was similar to Connecticut's Senate Bill 2, a proposed law that also aimed to minimize discrimination in the development and deployment of AI systems but, despite coming close, was not enacted this legislative session. Since its introduction, SB 205 has been revised several times. Most notably, its focus has been narrowed to regulate only developers and deployers of high-risk AI systems, rather than both high-risk and general-purpose AI models, which the bill had previously covered.
Who Does the Bill Apply To?
The bill applies to “developers” and “deployers” of “high-risk AI systems.”
A “developer” is any person doing business in Colorado that develops or intentionally and substantially modifies an AI system, not necessarily a “high-risk” AI system. The term “deployer” is broadly defined to include any person doing business in Colorado that uses a high-risk AI system. An AI system is deemed to be high-risk if it makes, or is a substantial factor in making, a consequential decision. Consequential decisions are defined to be those that have a material legal or similarly significant effect on the provision or denial to any Colorado resident of, or on the cost or terms of, education enrollment or opportunity, employment, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services. This focus on creating guardrails around the use of AI for consequential decisions is similar to the approach taken by comprehensive state privacy laws that generally allow people to opt out of the use of their personal information for profiling in furtherance of certain important decisions.
What Does the Bill Require?
The bill places a duty on developers and deployers of high-risk AI to avoid algorithmic discrimination. The bill also places substantive requirements on developers and deployers around risk management and transparency.
Developers
Developers are required to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of high-risk AI systems. Developers must also make certain documentation available. Specifically, they must provide a deployer of the high-risk AI system, or another developer of the system, with a general statement describing the system's reasonably foreseeable uses and known harmful or inappropriate uses. They must also provide documentation on the system covering the type of training data used, known or reasonably foreseeable limitations, the purpose of the system, and its intended benefits and uses. The documentation must further describe performance evaluation procedures, data governance measures, intended outputs, measures taken by the developer to mitigate risks, how the high-risk system should be used, not used, and monitored by a human, and any other documentation necessary to assist the deployer in understanding outputs. Finally, developers must provide deployers with the information they need to conduct impact assessments under the legislation.
Developers are also to make certain disclosures about their high-risk AI systems available on their websites or in a public use case inventory. Additionally, developers are to disclose to the attorney general, and to all known deployers of the high-risk AI system, the risks of algorithmic discrimination arising from the intended uses of the system where the developer discovers the system has been deployed and caused or is reasonably likely to have caused algorithmic discrimination or the developer receives a credible report to that effect.
A developer may also be required to disclose to the attorney general the general statement provided to developers or deployers describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system, as well as the documentation describing the system's training data, known or reasonably foreseeable limitations, purpose, and intended benefits and uses. For this disclosure, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent any information in the statement or documentation is subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
The bill allows for a rebuttable presumption that the developer used reasonable care if they followed the bill’s requirements.
Deployers
Deployers are also to use reasonable care to avoid algorithmic discrimination for the high-risk systems they use. They are to create a risk management policy and governance program that specifies and incorporates the principles, processes and personnel that they use to identify, document and mitigate risks of algorithmic discrimination. The bill specifically references the National Institute of Standards and Technology AI Risk Management Framework as providing a model for a reasonable risk management policy and program. The bill also requires that deployers conduct impact assessments at least annually or after any intentional and substantial modification, and includes minimum requirements for those assessments. In addition, the deployer must monitor the system after deployment to ensure it is not causing discrimination, notify consumers if a consequential decision is made using a high-risk AI system, provide consumers with details about the decision, and provide consumers with certain redress, including the ability to correct their personal data and an opportunity to appeal an adverse consequential decision. Furthermore, deployers are to make certain disclosures about their high-risk AI systems publicly available on their websites.
Additionally, deployers are to report any instances of discrimination caused by their systems to the attorney general. The attorney general may also require that a deployer, or a third party contracted by the deployer, disclose its risk management policy, impact assessment or other impact assessment records. As a protection, the deployer may designate the statement or documentation as including proprietary information or a trade secret. Similar to the rule for developers, to the extent any information contained in the risk management policy, impact assessment or records is subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
As it does for developers, the bill allows for a rebuttable presumption that the deployer used reasonable care if they followed the bill’s requirements.
General Disclosure Requirement for AI Systems
More generally, the bill also requires that businesses making any AI systems intended to interact with Colorado residents disclose to consumers that they are interacting with an AI system, unless that fact would be obvious to a reasonable person.
Exemptions
The bill provides for a few exemptions that permit developers and deployers to avoid SB 205's requirements. For example, small businesses in Colorado using a high-risk AI system would be exempt from several of the bill's requirements if, while deploying the high-risk AI system, (1) the business employs fewer than 50 full-time employees, (2) the business does not use its own data to train the high-risk AI system, (3) the high-risk AI system is used for the intended purposes previously disclosed to the business by the developer, (4) the high-risk AI system continues learning based on data that is not derived from the business's own data and (5) the business makes certain impact assessments available to consumers. Developers and deployers are also exempt if their high-risk AI system has been approved by, or follows standards established by, a federal agency, or if they are conducting research to support an application for approval from a federal agency.
Enforcement
The Colorado attorney general has exclusive enforcement authority. SB 205 also grants the attorney general rule-making authority to further implement and enforce the bill.
In the case of an enforcement action, the bill creates an affirmative defense for businesses that can show they have taken steps to address any discovered violations, or that they are in compliance with a national or international risk management framework for AI.
Looking Ahead
With the bill’s passage by the Colorado state legislature, the legislative process is almost complete. Colorado Governor Jared Polis must now decide whether the bill should become law. If enacted, SB 205 will go into effect on February 1, 2026, and could serve as a model for other state legislatures in the United States.