Everything you need to know about the EU AI Act
This is educational material, not legal advice; no attorney-client relationship is created by this article. If you have legal questions, contact and engage an attorney.
We should begin this discussion with an important clarification:
The European Union AI Act was not in fact conceived in response to the new era of so-called “generative AI” apps that began on November 30, 2022, when ChatGPT launched for public use. On the contrary, the first iteration of the EU AI Act (“EUAIA”) was drafted in April 2021 to regulate artificial intelligence systems generally, not “generative AI” systems like ChatGPT specifically.
Indeed, it was only in subsequent revisions that the EUAIA incorporated text accommodating generative AI following the launch of ChatGPT and its peers. The final version we study here thus had to compensate for what was a decidedly unforeseen turn in the evolution of artificial intelligence systems. Simply put, nobody had anticipated generative AI specifically, with its almost mysteriously complex large language models (“LLMs”; learn more here), at least not any time soon, and so the drafters of the EUAIA had to adapt quickly.
The final result, published on July 12, 2024 in the Official Journal of the European Union, is a landmark regulatory framework designed to ensure AI technologies are developed, deployed, and used responsibly. While critics may well fault it for a grip as tight as that of the EU’s GDPR (General Data Protection Regulation), its intent is unquestionably sound, and indeed necessary to ensure safety in our fast-changing digital world, where feats once confined to science fiction are now very real.
This article attempts to unpack and summarize the nuances of the EUAIA, as derived directly from the EUAIA’s full text and official summary, including key definitions, risk classifications, compliance requirements, penalties, and the implications for AI providers and deployers. A “mind map” is available below for those inclined to dive deeper.
Key Definitions and Scope
The EUAIA defines an AI system as a machine-based system that operates with some degree of autonomy and infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions; systems used exclusively for military or defense purposes are excluded. The EUAIA applies to providers placing AI systems on the EU market, as well as to providers and deployers in third countries whenever the system’s output is used in the EU. It also carves out exceptions for free and open-source AI systems, unless they are otherwise prohibited or classified as high-risk AI systems, as defined below.
Risk Tiers for AI Systems
The framework of the EUAIA is premised on a categorization of AI systems into four risk tiers: prohibited (unacceptable risk), high-risk, limited-risk, and minimal-risk. General-purpose AI models, which cut across these tiers, carry their own separate set of obligations.
Overall, the EUAIA mandates transparency for AI systems that interact with humans or perform emotion recognition or biometric categorization; high-risk AI systems, meanwhile, must carry clear labels and instructions for use.
Providers of general-purpose AI models are required to disclose detailed technical documentation, including information about training data, limitations, and capabilities.
The EUAIA also includes specific provisions for deep fakes, defined as AI-generated or manipulated content that falsely appears authentic. Providers and deployers of such systems must disclose that the content is artificially generated or manipulated, with certain exceptions for authorized law enforcement use or evidently artistic works.
To foster innovation, the EUAIA includes provisions for AI regulatory sandboxes, allowing for supervised testing of novel AI applications. It also provides support measures for SMEs and startups to help them navigate compliance requirements.
We shall now unpack each of these tiers, from highest to lowest risk.
Prohibited AI Practices
The EUAIA bans certain AI practices outright due to their potential for harm. These include:
Manipulative AI that distorts human behavior.
Social scoring systems that may lead to discriminatory outcomes.
Real-time remote biometric identification by law enforcement, except in narrowly defined cases of substantial public interest, such as searching for victims of serious crimes or preventing an imminent terrorist threat.
Biometric categorization systems inferring sensitive attributes, with specific exceptions for law enforcement use.
High-Risk AI Systems
High-risk AI systems are those with significant potential to impact health, safety, or fundamental rights. These systems must adhere to stringent requirements, including continuous risk assessment, high-quality data governance, comprehensive technical documentation (including labeling requirements), robust quality management systems, and detailed record-keeping. The EUAIA outlines specific high-risk categories in Annex III (a first-pass triage sketch follows the list), including:
Biometric identification and categorization.
Critical infrastructure management.
Education and vocational training.
Employment, workers’ management, and access to self-employment.
Essential private and public services.
Law enforcement.
Migration, asylum, and border control management.
Administration of justice and democratic processes.
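To make the tiering concrete, below is a minimal, purely illustrative triage sketch in Python. The category keys paraphrase the lists in this article, the triage function is a hypothetical first pass, and none of it substitutes for an actual legal assessment.

```python
# Purely illustrative first-pass mapping of a use case to the EUAIA risk tiers
# described in this article. Category keys paraphrase the Act's lists; a real
# classification requires legal review of the specific system and context.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Practices banned outright (see "Prohibited AI Practices" above).
PROHIBITED_PRACTICES = {
    "manipulative_behavioral_distortion",
    "social_scoring",
}

# High-risk categories paraphrasing Annex III (see the list above).
ANNEX_III_HIGH_RISK = {
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational_training",
    "employment_worker_management",
    "essential_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_democratic_processes",
}

# Systems subject mainly to transparency duties (see "Limited-Risk AI Systems" below).
TRANSPARENCY_ONLY = {"chatbot", "deep_fake_generation"}

def triage(use_case: str) -> RiskTier:
    """Hypothetical first-pass tiering; defaults to minimal risk, as most systems do."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if use_case in ANNEX_III_HIGH_RISK:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment_worker_management"))  # RiskTier.HIGH
```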
General-Purpose AI Systems
These are AI models designed to perform a wide range of tasks. A general-purpose AI (“GPAI”) model is presumed to present systemic risk when the cumulative compute used for its training exceeds 10^25 FLOPs (floating-point operations, a measure of total training compute, not operations per second; for more information, see here). GPAI providers have specific obligations under the EUAIA (a back-of-the-envelope compute check follows the list), including:
Providing technical documentation and instructions for use.
Complying with EU copyright law.
Publishing a summary of training data content.
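For a sense of scale, training compute for dense transformer models is often approximated as about 6 FLOPs per parameter per training token. That rule of thumb, and the model size below, are illustrative assumptions, not part of the Act.

```python
# Back-of-the-envelope check against the EUAIA's 10^25 FLOPs presumption of
# systemic risk. The ~6 * parameters * tokens approximation for dense
# transformer training compute is a common heuristic, not part of the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # ~6.30e+24
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))  # False
```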
Free and open-source GPAI models are exempt from some requirements unless they present systemic risk.
The EUAIA also places specific obligations on providers of general-purpose AI models regarding copyright compliance and transparency about training data:
Providers must implement a policy to comply with EU copyright law.
The EUAIA recognizes a text and data mining exception, allowing reproductions and extractions of lawfully accessible works for AI training purposes; rightsholders, however, can opt out, requiring AI providers to obtain authorization before mining their works (a sketch of honoring such an opt-out appears at the end of this section).
Providers must publish a sufficiently detailed summary of content used for training, facilitating copyright enforcement.
These obligations apply to all GPAI models on the EU market, regardless of where the training occurred.
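The Act does not prescribe how a rights reservation must be expressed technically; in practice, machine-readable signals such as robots.txt directives are one common convention for it. Below is a minimal crawler-side sketch, assuming a hypothetical user agent named ExampleTDMBot and a rightsholder who expresses the opt-out through robots.txt rules.

```python
# Minimal sketch of honoring a machine-readable opt-out before fetching a page
# for text and data mining. robots.txt is one common convention, not a
# mechanism mandated by the EUAIA; the "ExampleTDMBot" agent is hypothetical.
from urllib import robotparser
from urllib.parse import urlsplit, urlunsplit

def may_mine(url: str, user_agent: str = "ExampleTDMBot") -> bool:
    """Return True only if the site's robots.txt permits this agent to fetch the URL."""
    parts = urlsplit(url)
    robots_url = urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetches and parses the site's robots.txt over the network
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    # Skip any page the rightsholder has reserved from this agent.
    print(may_mine("https://example.com/articles/some-work.html"))
```

A production pipeline would also need to handle fetch failures, caching, and other reservation signals; the point here is only the check-before-mining pattern.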
Limited-Risk AI Systems
Limited-risk systems are those that are not high-risk yet still require transparency and accountability. Somewhat surprisingly, deep fakes are included in this risk category, despite their profound potential to disseminate false or misleading information by impersonating people without their consent. Providers must ensure users are aware they are interacting with an AI system and maintain basic documentation.
Minimal-Risk AI Systems
This category encompasses the majority of AI systems currently on the market. These systems:
Are not subject to specific obligations under the EUAIA due to their low risk profile.
Are encouraged to follow voluntary codes of conduct to ensure ethical and responsible AI development and use.
Benefit from a light-touch approach that allows continued innovation while promoting best practices in AI governance.
Compliance and Penalties
Compliance with the EU AI Act involves a multi-faceted approach:
Conformity Assessments: High-risk AI systems must undergo pre-market evaluations.
Post-Market Monitoring: Continuous surveillance to ensure ongoing compliance.
Market Surveillance Authorities: National bodies responsible for enforcement.
EU Database: A centralized repository for high-risk AI systems.
Notified Bodies: Independent entities conducting conformity assessments.
Penalties for non-compliance are broken down as follows:
Prohibited uses can incur fines of up to €35 million or 7% of global annual turnover, whichever is higher, and/or a prohibition on use.
High-risk compliance failures can attract fines of up to €15 million or 3% of global annual turnover, whichever is higher, and/or mandated full access to system data and source code for inspection.
Supplying misleading information can result in fines of up to €7.5 million or 1% of global annual turnover, whichever is higher, and/or proactive market surveillance.
For SMEs and startups, the fines are capped at the same maximum percentages or amounts, but whichever is lower applies. The sketch below makes these cap mechanics concrete.
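A minimal Python sketch of the “whichever is higher/lower” mechanics; the tier caps come from the figures above, while the function and example turnover are purely illustrative.

```python
# Illustrative calculation of the maximum fine under the EUAIA's tiered caps.
# Each tier pairs a fixed cap in euros with a share of global annual turnover;
# the higher bound applies in general, the lower one for SMEs and startups.
FINE_CAPS = {
    "prohibited_use": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float, sme: bool = False) -> float:
    """Upper bound of the fine for a given tier and global annual turnover."""
    fixed_cap, turnover_share = FINE_CAPS[tier]
    turnover_cap = turnover_share * global_turnover_eur
    return min(fixed_cap, turnover_cap) if sme else max(fixed_cap, turnover_cap)

# Hypothetical company with EUR 2 billion in global annual turnover:
print(f"{max_fine('prohibited_use', 2e9):,.0f}")            # 140,000,000 (7% > EUR 35M)
print(f"{max_fine('prohibited_use', 2e9, sme=True):,.0f}")  # 35,000,000
```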
Deployers’ Obligations
Deployers of AI systems have specific obligations under the EUAIA. They must implement a risk assessment system, adopt transparency principles, and design responsible AI governance frameworks. Additionally, deployers of high-risk AI systems in public services must conduct fundamental rights impact assessments to ensure their systems do not negatively impact rights.
Data Protection and Privacy
Alignment with the General Data Protection Regulation (GDPR) is a cornerstone of the EUAIA, ensuring that AI systems comply with stringent data protection laws. Specific rules govern the processing of biometric data, imposing strict conditions to safeguard privacy.
Governance and International Cooperation
The governance structure established by the EUAIA includes the European Artificial Intelligence Board, which coordinates regulatory practices across Member States, and the AI Office, which oversees AI regulations and offers guidance. A scientific panel of independent experts provides technical and ethical advice. Internationally, the Act promotes global cooperation to ensure that AI systems developed outside the EU comply with EU standards. Mutual recognition agreements facilitate conformity assessments with other jurisdictions, ensuring a global standard for AI safety and compliance.
Implementation Timeline
The EUAIA is being phased in over several years, with key dates including:
August 1, 2024: The EUAIA officially goes into effect.
February 2, 2025: Prohibited uses become enforceable.
May 2, 2025: Codes of practice for general-purpose AI models are made available. (This will be interesting to see.)
August 2, 2025: Compliance deadline for general-purpose AI providers.
February 2, 2026: The Commission provides guidelines and practical examples. (Also this.)
August 2, 2026: Full applicability of the EUAIA, with operational infrastructure.
August 2, 2027: Extended compliance for certain high-risk AI systems.
Conclusion
Like it or not, the EU AI Act is an impressively comprehensive regulatory framework designed to manage the rapid evolution of artificial intelligence, including and especially generative AI with its controversial copyright-infringement allegations over LLM training data. By establishing robust compliance mechanisms, promoting transparency, and fostering innovation, the EUAIA aims to create a balanced ecosystem where AI can thrive while safeguarding fundamental rights and the public interest, even if it may at times seem stifling. For AI developers, providers, and deployers, understanding and adhering to the EU AI Act is crucial for navigating the new regulatory landscape effectively.
For further questions, feel free to reach out.