Everything you need to know about America’s first comprehensive AI regulation

In May 2024, Colorado made history by becoming the first U.S. state to enact comprehensive AI regulation when Governor Jared Polis signed the Colorado Artificial Intelligence Act (CAIA), Senate Bill 24-205, into law.

Scheduled to take effect on February 1, 2026, the CAIA introduces a robust framework of requirements for developers and deployers of AI systems, focusing particularly on high-risk applications where algorithmic decisions have substantial consumer impact. This law is groundbreaking in its approach to consumer protection, setting forth measures to guard against “algorithmic discrimination” and other risks across diverse areas, from financial services and healthcare to housing and insurance.

Although the CAIA shares certain objectives with the European Union AI Act (EUAIA)—notably, protecting consumer rights and emphasizing transparency in high-risk AI—it embodies a distinctly American approach to AI regulation. This article provides an in-depth analysis of the CAIA’s scope, obligations, and enforcement mechanisms, including a comparison with the EUAIA to highlight the similarities and differences between the two regulatory frameworks.

Key Definitions and Scope

The Colorado AI Act (CAIA) applies to developers and deployers of high-risk AI systems operating in Colorado, regardless of whether they have a physical presence within the state.

According to Section 6-1-1701(9), a high-risk AI system is defined as any artificial intelligence system that “makes, or is a substantial factor in making, a consequential decision.” The CAIA further defines a consequential decision as one with a “material legal or similarly significant effect” on a consumer’s access to education, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services (Section 6-1-1701(3)). This focus on consequential decisions underscores the CAIA’s commitment to regulating AI systems in areas where outcomes significantly impact consumers’ lives, rights, or economic stability.
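
To make the statutory test concrete, consider a minimal sketch of the high-risk classification in Section 6-1-1701(9), assuming a simplified model in which every decision is tagged with a domain drawn from Section 6-1-1701(3). The function name and domain labels are illustrative, not taken from the statute.

```python
# Illustrative sketch of the CAIA high-risk test; hypothetical names.
# A system is high-risk if it makes, or is a substantial factor in
# making, a consequential decision in an enumerated domain.

CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending_services",
    "essential_government_services", "healthcare", "housing",
    "insurance", "legal_services",
}

def is_high_risk(decision_domain: str, substantial_factor: bool) -> bool:
    """Apply the two-part test: substantial factor + consequential domain."""
    return substantial_factor and decision_domain in CONSEQUENTIAL_DOMAINS

# A resume-screening model that materially shapes hiring decisions
assert is_high_risk("employment", substantial_factor=True)
# A content-recommendation model falls outside the enumerated domains
assert not is_high_risk("entertainment", substantial_factor=True)
```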

Algorithmic discrimination is another fundamental concept in the CAIA, defined as the unlawful differential treatment or impact stemming from an AI system’s use that disfavors individuals based on legally protected characteristics. Section 6-1-1701(1)(a) specifies these protected characteristics as including, but not limited to, “age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, and veteran status.” This language reflects Colorado’s dedication to equality, requiring AI developers and deployers to mitigate discriminatory risks within AI applications across multiple critical sectors.

This broad regulatory scope underscores Colorado’s proactive approach to AI governance, holding developers and deployers accountable for the far-reaching effects that high-risk AI systems may have on consumers in the state.

Obligations for Developers and Deployers

Under the CAIA, developers and deployers of high-risk AI systems must adhere to stringent operational, documentation, and disclosure requirements aimed at ensuring transparency, safety, and fairness.

Developers

For developers, the CAIA imposes a duty to exercise what it calls “reasonable care” to protect consumers from algorithmic discrimination arising from high-risk AI systems. This obligation is outlined in Section 6-1-1702(1), which requires developers to protect against “any known or reasonably foreseeable risks” of algorithmic discrimination resulting from the intended uses of a high-risk AI system.

Compliance with this requirement can be demonstrated by adhering to specific documentation and disclosure obligations. For example, Section 6-1-1702(2)(a) requires developers to make available a general statement detailing both the “reasonably foreseeable uses” of their high-risk AI system as well as known “harmful or inappropriate uses.” This disclosure is essential for deployers who need this information to assess whether the AI system’s design aligns with ethical and legal standards.

Additionally, developers are obligated to provide further documentation that goes beyond general statements. Section 6-1-1702(2)(b) mandates that developers disclose high-level summaries of the data types used to train the AI system, any limitations that could impact the system’s reliability, and the risks of algorithmic discrimination that might arise from the system’s intended uses. This documentation must also include the purpose of the AI system, the anticipated benefits, and any other essential information required for deployers to meet their compliance obligations. Moreover, Section 6-1-1702(3) requires developers to make this documentation accessible to deployers in the form of model cards, dataset cards, or similar artifacts that clarify the impact and potential risks of the AI system.
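
To visualize what this documentation might look like in practice, the sketch below models a developer disclosure as a structured, model-card-style record. The field names and example values are hypothetical; the CAIA prescribes the substance of the disclosure under Section 6-1-1702(2), not any particular format.

```python
from dataclasses import dataclass

@dataclass
class DeveloperDisclosure:
    """Hypothetical model-card-style record covering the Section
    6-1-1702(2) disclosure items; all field names are illustrative."""
    system_name: str
    purpose: str                               # purpose of the AI system
    anticipated_benefits: str
    foreseeable_uses: list[str]                # Section 6-1-1702(2)(a)
    harmful_or_inappropriate_uses: list[str]   # Section 6-1-1702(2)(a)
    training_data_summary: str                 # high-level data types only
    known_limitations: list[str]               # factors affecting reliability
    discrimination_risks: list[str]            # foreseeable discrimination risks

# Hypothetical example for an imagined tenant-screening system
card = DeveloperDisclosure(
    system_name="TenantScore (hypothetical)",
    purpose="Score rental applications for landlords",
    anticipated_benefits="Faster, more consistent application review",
    foreseeable_uses=["tenant screening by property managers"],
    harmful_or_inappropriate_uses=["employment or lending decisions"],
    training_data_summary="Historical application outcomes; credit data",
    known_limitations=["sparse data for applicants with thin credit files"],
    discrimination_risks=["proxy discrimination via location features"],
)
```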

Deployers

Deployers—entities utilizing AI systems to engage with Colorado consumers—must also fulfill rigorous requirements under the CAIA. One of the core requirements for deployers, as detailed in Section 6-1-1703(2), is the implementation of a comprehensive AI risk management program. This program must be consistent with recognized frameworks, such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework or similar standards at a national or international level. Deployers are also instructed to consider factors such as the “size and complexity” of the deploying organization and the “nature and scope” of the AI systems being used when designing their risk management programs. By setting these standards, the CAIA requires deployers to adopt a flexible yet systematic approach to mitigate known or foreseeable risks of algorithmic discrimination.
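
Because the statute points to the NIST AI Risk Management Framework as an acceptable baseline, one way to picture a deployer’s program is as a mapping from the framework’s four core functions (Govern, Map, Measure, Manage) to concrete activities, scaled to the organization’s size and the system’s scope. The sketch below is a hypothetical skeleton, not a compliance template.

```python
# Hypothetical skeleton of a deployer risk-management program keyed to
# the four NIST AI RMF core functions; all activities are illustrative.
risk_management_program = {
    "govern": [
        "assign an accountable owner for each high-risk system",
        "adopt written AI-use and escalation policies",
    ],
    "map": [
        "inventory deployed high-risk systems and affected consumers",
        "reconcile intended uses against the developer's disclosures",
    ],
    "measure": [
        "test outputs for disparate impact across protected classes",
        "track error and human-override rates in production",
    ],
    "manage": [
        "remediate or suspend systems showing discriminatory effects",
        "feed incidents back into the next impact assessment",
    ],
}

def coverage_gaps(program: dict[str, list[str]]) -> list[str]:
    """Flag NIST AI RMF core functions with no documented activities."""
    return [fn for fn in ("govern", "map", "measure", "manage")
            if not program.get(fn)]

assert coverage_gaps(risk_management_program) == []
```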

The CAIA also imposes requirements on deployers to conduct initial impact assessments before deploying high-risk AI systems and mandates that these assessments be updated annually. Section 6-1-1703(3) specifies that impact assessments must include an analysis of the AI system’s intended uses, potential discrimination risks, transparency measures, and the data categories processed by the AI system. Additionally, deployers are required to update impact assessments within 90 days of any “intentional and substantial modification” to a high-risk AI system. This obligation to maintain an up-to-date record of the system’s impacts ensures that deployers remain vigilant in monitoring for potential biases or discriminatory effects as the AI system evolves.
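
The timing rules reduce to straightforward date arithmetic: an assessment must be refreshed at least annually, and a substantial modification can pull the deadline earlier, to 90 days after the change. A minimal sketch of that schedule check, assuming hypothetical function and variable names, might look like this:

```python
from datetime import date, timedelta

ANNUAL_REVIEW = timedelta(days=365)       # annual-update requirement
MODIFICATION_WINDOW = timedelta(days=90)  # 90 days after substantial modification

def next_assessment_due(last_assessment: date,
                        last_substantial_modification: date | None = None) -> date:
    """Earliest deadline implied by the annual-update and 90-day rules."""
    due = last_assessment + ANNUAL_REVIEW
    if (last_substantial_modification is not None
            and last_substantial_modification > last_assessment):
        # A post-assessment modification can move the deadline forward
        due = min(due, last_substantial_modification + MODIFICATION_WINDOW)
    return due

# Assessed on the law's effective date, then substantially modified mid-June
print(next_assessment_due(date(2026, 2, 1), date(2026, 6, 15)))  # 2026-09-13
```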

The CAIA goes further to safeguard consumers’ rights by imposing consumer-facing obligations on deployers. If a high-risk AI system makes an adverse decision affecting a consumer, Section 6-1-1703(3)(b)(VII) mandates that deployers must provide the consumer with an explanation of the decision and an opportunity to appeal. Furthermore, consumers must be given the option to correct any inaccurate personal data used in making the decision, enhancing transparency and accountability for AI-driven decision-making. This aligns with the CAIA’s broader objective of empowering consumers to understand and, if necessary, challenge consequential decisions made by high-risk AI systems.
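
One way to think about these consumer-facing duties is as a fixed notice payload that must accompany any adverse decision: the explanation, the appeal route, and the data-correction route all have to be present. The sketch below is a hypothetical shape for such a notice; the CAIA mandates the substance, not this structure.

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionNotice:
    """Hypothetical notice covering the consumer rights triggered by an
    adverse consequential decision; all field names are illustrative."""
    decision_summary: str         # plain-language explanation of the outcome
    principal_reasons: list[str]  # factors that drove the decision
    data_used: list[str]          # personal data categories processed
    correction_instructions: str  # how to correct inaccurate personal data
    appeal_instructions: str      # how to appeal the decision

notice = AdverseDecisionNotice(
    decision_summary="Your rental application was declined.",
    principal_reasons=["debt-to-income ratio above the model's threshold"],
    data_used=["credit report", "stated income"],
    correction_instructions="Submit corrected income documents via the portal.",
    appeal_instructions="Request review by a human decision-maker.",
)
```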

Consumer Rights and Protections

Consumer protection is a central pillar of the CAIA. The law mandates that consumers be clearly informed whenever they interact with a high-risk AI system, especially where the system is used to make consequential decisions. Deployers must notify consumers that AI is being used and explain the nature of the decision being made. Section 6-1-1703(3)(b)(VI)-(VII) underscores this transparency obligation by requiring deployers to give consumers meaningful information about how their data is used, the system’s role in the decision-making process, and, crucially, any opportunities to contest the decision and correct the underlying data. This consumer-first approach ensures that individuals retain visibility into, and recourse against, decisions that significantly affect their lives and livelihoods.

Penalties and Enforcement

The Colorado Attorney General holds exclusive authority to enforce the CAIA. Violations of the CAIA are treated as unfair or deceptive trade practices under Colorado law, allowing the Attorney General to seek remedies, which may include corrective actions and fines. While the CAIA does not establish a private right of action, Section 6-1-1706 grants the Attorney General the power to initiate enforcement proceedings when developers or deployers fail to meet the CAIA’s requirements. This enforcement model holds developers and deployers accountable while reflecting the state’s preference for encouraging compliance over imposing punishment.

Exemptions and Limitations

The CAIA includes exemptions for certain entities and circumstances, reflecting a nuanced approach to regulatory burden. For instance, businesses with fewer than 50 employees are generally exempt from many of the CAIA’s requirements, provided they do not train high-risk AI systems on their own data and instead rely on systems as intended by the developer. Section 6-1-1701(12) also outlines exemptions for financial institutions, such as banks or insurers, which are already governed by AI-related federal or state regulations that meet or exceed the CAIA’s requirements. These exemptions help balance the CAIA’s consumer protection goals with a recognition of the regulatory frameworks already imposed on highly regulated industries.
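
The small-business carve-out can be read as a conjunction of conditions, as in the hypothetical check below: fewer than 50 employees, no training of the system on the deployer’s own data, and use of the system as the developer intended. This is a simplified reading for illustration; the statutory text controls.

```python
def small_deployer_exempt(employee_count: int,
                          trains_on_own_data: bool,
                          uses_as_developer_intended: bool) -> bool:
    """Simplified, illustrative reading of the small-deployer exemption."""
    return (employee_count < 50
            and not trains_on_own_data
            and uses_as_developer_intended)

# A 12-person firm using an off-the-shelf system as intended
assert small_deployer_exempt(12, trains_on_own_data=False,
                             uses_as_developer_intended=True)
# A 200-person firm does not qualify
assert not small_deployer_exempt(200, trains_on_own_data=False,
                                 uses_as_developer_intended=True)
```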

Comparing the CAIA and the EU AI Act

While the CAIA and the EUAIA share similar objectives of consumer protection and transparency, each law adopts a distinct approach to regulatory oversight, reflecting regional legal philosophies and enforcement priorities.

Both the CAIA and EUAIA regulate high-risk AI systems, but the EUAIA employs a more structured, tiered classification of AI risk. While the CAIA broadly defines high-risk AI systems in relation to consequential decisions, the EUAIA categorizes AI systems across prohibited, high-risk, limited-risk, and minimal-risk tiers, each with distinct regulatory obligations. For example, the EUAIA places a heavy emphasis on conformity assessments and post-market surveillance for high-risk AI systems, whereas the CAIA’s requirements focus more on upfront disclosures and annual impact assessments (Section 6-1-1703).

Another key distinction is in penalties and enforcement. The EUAIA imposes significant fines, including penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations, underscoring the EU’s firm stance on AI regulation. By contrast, the CAIA does not outline a specific penalty framework for non-compliance but instead leverages Colorado’s existing consumer protection laws to enforce the act’s provisions through the Attorney General’s office. This difference reflects Colorado’s comparatively flexible, state-level enforcement model.

These comparisons highlight that while the CAIA provides a robust framework for AI regulation, its emphasis on compliance flexibility contrasts with the EUAIA’s more prescriptive and punitive approach.

Conclusion

The Colorado Artificial Intelligence Act represents a pioneering step in the U.S. regulatory landscape, offering comprehensive guidance for managing high-risk AI systems. By addressing the risks associated with consequential decisions, the CAIA sets a framework that may serve as a model for other states. Developers and deployers must navigate a complex landscape of compliance, balancing the CAIA’s requirements with the demands of federal and international frameworks.

Continue reading about the EUAIA or contact us.