Colorado Passes New AI Law to Protect Consumer Interactions
On Friday, May 17, 2024, Colorado Governor Jared Polis signed SB205 (Consumer Protections for Interactions with Artificial Intelligence) into law, with an effective date of February 1, 2026.
Unlike the artificial intelligence (AI) laws enacted in other states (such as Utah and Florida), the new law is the first comprehensive legislation in the United States targeting “high-risk artificial intelligence systems.” In particular, the law requires that both developers and entities that deploy high-risk AI systems use reasonable care to prevent algorithmic discrimination, and it creates a rebuttable presumption that reasonable care was used if they meet certain requirements and publicly disclose certain information about such high-risk AI systems.
Although the law does little to regulate the use of AI systems that may not be deemed “high-risk,” it could nonetheless provide a model for other legislatures that are contemplating regulation.
Scope
Unlike the Colorado Privacy Act, the law applies to all developers and deployers (i.e., companies) using “high-risk artificial intelligence systems” that do business in Colorado, regardless of the number of consumers affected.
High-Risk AI Systems
The law defines “high-risk AI systems” as those that make, or are a substantial factor in making, “consequential” decisions. It defines a “substantial factor” as a factor that assists in making a consequential decision, is capable of altering the outcome of a consequential decision, or is generated by an AI system. AI systems explicitly considered “high-risk AI systems” under the law include those used in:
- Education enrollment or opportunity
- Employment or employment opportunity
- Financial or lending services
- Essential government services
- Health care services
- Housing
- Insurance
- Legal services
The law excludes AI systems that either: (i) perform narrow procedural tasks; or (ii) detect decision-making patterns or deviations from prior decision-making patterns and are not intended to replace or influence human assessment or review. The statute also excludes certain technologies, such as cybersecurity software and spam filtering, when they are not a substantial factor in making consequential decisions.
Algorithmic Discrimination
The law requires that both developers and deployers of high-risk AI systems use reasonable care to avoid algorithmic discrimination, i.e., any condition that results in unlawful differential treatment or impact based on actual or perceived age, color, disability, ethnicity, genetic information, language barriers, national origin, race, religion, reproductive health, sex, veteran status, or other classifications. However, the law excludes any discrimination that may result from the use of a high-risk AI system by a developer or deploying company for the sole purposes of:
- Self-testing their own systems to identify and rectify incidents or risks of discriminatory behavior/outputs.
- Expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination.
The law also excludes any acts or omissions by a private club or other establishment that is not open to the public.
Developer Responsibilities
The law creates a rebuttable presumption that the developer of an AI system exercised reasonable care to avoid algorithmic discrimination if the developer follows certain procedures in its development, including:
- Providing deployers of such high-risk AI systems with information about the system, including its purpose, intended benefits, intended use, operation, potential risks, and any known or foreseeable algorithmic discrimination. The developer must also provide information about how the system was trained and how it was evaluated for performance and for evidence of algorithmic discrimination.
- Providing deployers all documentation necessary to conduct an impact assessment of the high-risk AI system.
- Making publicly available information that summarizes the types of high-risk systems developed or intentionally and substantially modified by the developer.
- Providing deployers with information on how the developer itself manages any known or reasonably foreseeable risks of algorithmic discrimination, both during development and in the event that the high-risk AI system is later modified.
- Disclosing to the Colorado State Attorney General and deployers any known or reasonably foreseeable risks of algorithmic discrimination within 90 days of being informed of such risks.
Developers who do not follow such procedures may face an uphill battle to prove that they have used reasonable care to avoid algorithmic discrimination.
User Responsibilities
Companies deploying these AI systems are also tasked with using reasonable care to protect consumers from any known or foreseeable risks of discrimination. The law creates a two-tier rebuttable presumption that the deployer of an AI system exercised reasonable care to avoid algorithmic discrimination if the organization follows certain procedures, including:
For all users:
- Reviewing the deployment of each high-risk AI system at least annually for any evidence of algorithmic discrimination.
- Providing a consumer with information about consequential decisions concerning that consumer that are made by high-risk AI systems, and providing consumers with an opportunity to correct any incorrect personal data used in making such a consequential decision. Deployers must also provide consumers with an opportunity to appeal an adverse consequential decision made by a high-risk AI system through human review (if technically feasible).
- Disclosing to the Colorado State Attorney General, within 90 days of discovery, that the high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination.
In addition, for all companies except those that, at all times the high-risk AI system is deployed, meet all of the following requirements: (a) have fewer than fifty (50) full-time employees; (b) do not use their own data to train the high-risk AI system; and (c) use the high-risk AI system only for the intended uses disclosed by the developer:
- Implementing risk management policies and programs for the high-risk AI system.
- Conducting an impact assessment of each high-risk AI system.
- Making publicly available information that summarizes the types of high-risk systems deployed, along with information on how the deployer manages any known or foreseeable risks of algorithmic discrimination.
- Making publicly available information regarding the nature, source, and extent of the information collected and used by the deployer in the high-risk AI system.
Additional Requirements
The law further requires any developer or deployer that makes available an AI system intended to interact with consumers to disclose to consumers that they are interacting with an AI system and not a live person.
Other Exclusions
The law does not apply to a developer or a deployer engaging in specified activities, including complying with other federal, state, or municipal laws, cooperating with and conducting specified investigations, taking steps to protect the life or physical safety of a consumer, and conducting certain research activities.
Enforcement
The law does not allow for a private right of action and instead leaves exclusive enforcement to the Colorado State Attorney General. The Attorney General also has discretion under the statute to engage in further rulemaking, including additional documentation requirements for developers, requirements for notices and disclosures, requirements for risk management policies and impact assessments, adjustments to the scope of and guidance related to any rebuttable presumptions, and the requirements for the affirmative defenses to enforcement.
However, the law also provides developers and deployers with an affirmative defense if they are in compliance with other nationally or internationally recognized AI risk management frameworks specified either in the bill or by the Attorney General. Currently, this includes the NIST AI Risk Management Framework and ISO/IEC 42001.
Impact on Businesses
The new law ultimately provides developers and companies deploying high-risk AI systems with a framework for how they can evaluate and use reasonable care to avoid algorithmic discrimination. While the new law does not go into effect until February 1, 2026, developers and deployers of high-risk AI systems may need to devote significant resources to meeting their documentation and other obligations before that date.
Developers should take the following actions to prepare for the new law:
- Begin to compile (or, better yet, create at the time the high-risk AI system is conceptualized or created) all necessary documentation that must be disclosed to consumers or made available to deployers so that deployers can conduct an impact assessment. This documentation should include a description of how the developer trains the high-risk AI system and how it tests for and remediates potential algorithmic discrimination.
- Be prepared to respond to deployers of such high-risk AI systems who may require more detailed documentation or otherwise question the content of the documentation, and to provide additional documentation as needed.
- Begin drafting the public statements regarding the types of high-risk systems that the developer has developed (or intentionally and substantially modified), and be prepared to describe and potentially defend how the developer manages known or foreseeable risks of algorithmic discrimination.
- Begin putting policies in place (with appropriate legal and other stakeholder review) for notifying the Attorney General of algorithmic discrimination caused, or reasonably likely to have been caused, by the high-risk AI system.
Users, on the other hand, should take the following actions to prepare for the new law:
- Begin to develop a risk management policy and program that is based on a standard AI risk management framework, such as the NIST AI Risk Management Framework and/or ISO/IEC 42001.
- Begin developing an impact assessment for high-risk AI systems that they deploy, and be prepared to request more information from developers as necessary.
- Put processes in place to review each high-risk AI system for algorithmic discrimination at least once annually (more often if there is a significant change in the system or the use of such a system).
- Consider drafting form notices to consumers containing all required items for when a high-risk AI system makes a consequential decision concerning the consumer, and establish a process for consumers to appeal such a decision through human review.
- Begin developing procedures for consumers to correct any incorrect personal data used by high-risk AI systems. Businesses subject to the Colorado Privacy Act will already be familiar with providing consumers the right to correct incorrect personal data and with verifying that the personal data was in fact incorrect.
- Begin drafting the public statements regarding the types of high-risk systems currently deployed and information about how known or foreseeable risks of algorithmic discrimination are being managed. Businesses subject to the Colorado Privacy Act will already be familiar with the requirement that these public statements also include the nature, source, and extent of information collected and used by the deployer.
- Begin putting policies in place (with appropriate legal and other stakeholder review) for notifying the Attorney General of algorithmic discrimination caused, or reasonably likely to have been caused, by a high-risk AI system.
Both developers and deployers should also continue to monitor any further guidance from the Colorado State Attorney General. Based on the pattern of regulations issued by the Attorney General under the Colorado Privacy Act, any additional regulations may be extensive and may require more detail than the law facially requires.
Further Reading
For a deeper dive into what this new law means for employers and the use of AI systems within the context of human resources, Foley’s Labor & Employment team has created a primer on the expected impact; see the companion piece.
If you have any other questions regarding the requirements of this law, please contact any of the authors or other Partners or Senior Counsel in Foley & Lardner’s Artificial Intelligence area of focus.