Introduction
In April 2021, the European Commission introduced the Artificial Intelligence Act (AI Act), a pioneering legislative proposal aimed at creating a comprehensive regulatory framework for artificial intelligence (AI) within the European Union. This proposal, officially titled COM(2021) 206 final, represents a cornerstone of the EU’s broader strategy to lead globally in AI while ensuring that the development and deployment of AI technologies adhere to the Union’s core values, including respect for fundamental rights and the rule of law. As the AI Act approaches final adoption, recent developments in 2023 and 2024 have further shaped the landscape, introducing new challenges for the incoming European Parliament and the new European Commission to be formed after the June 2024 elections.
The Core of the AI Act: A Risk-Based Approach
At the heart of the AI Act lies a sophisticated risk-based classification system, a concept rooted in existing EU product safety legislation, such as Regulation (EU) 2019/1020 on market surveillance and compliance of products. This system categorizes AI applications into four levels of risk: unacceptable, high, limited, and minimal. This stratification is intended to apply proportionate regulatory requirements based on the potential impact of each AI system, ensuring that the most intrusive technologies are subject to the strictest controls.
Unacceptable risk AI systems are entirely prohibited under the AI Act, reflecting the EU’s commitment to protecting citizens from the most harmful uses of AI. This category includes AI systems that contravene the EU’s Charter of Fundamental Rights, such as those used for social scoring (Article 5). The prohibition echoes the principles established in Article 22 of the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679), which restricts automated decision-making that produces legal or similarly significant effects on individuals.
High-risk AI systems are subject to stringent requirements under the AI Act. These include AI applications in critical sectors such as healthcare, law enforcement, and employment, where the consequences of failure or misuse can be profound. The high-risk classification draws on the precedent set by the Medical Device Regulation (Regulation (EU) 2017/745) and the General Product Safety Directive (Directive 2001/95/EC). High-risk systems must undergo rigorous conformity assessments (Article 43), and their providers must meet the obligations set out in Articles 16-29, aligning with the principles established in these earlier regulations. Additionally, they must comply with requirements for data governance, record-keeping, transparency, and human oversight.
For limited-risk AI systems, the AI Act mandates transparency measures (Article 52), such as informing users that they are interacting with an AI system. This requirement is in line with the transparency obligations established under Directive (EU) 2018/958 on proportionality tests before the adoption of new regulation of professions, which emphasizes the need for clear information to the public.
Minimal-risk AI systems, which include many consumer-facing applications, are largely exempt from the AI Act’s more onerous requirements, reflecting a risk-based approach similar to that found in Regulation (EU) 2018/858 on the approval and market surveillance of motor vehicles, where low-risk products face less stringent oversight.
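To make the four-tier system concrete, the following Python sketch models the classification and its regulatory consequences. The tier mapping and example use cases are illustrative assumptions for this article only; the Act itself classifies systems by their intended purpose and context (notably via its annexes), not by a simple sector lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers in the AI Act's classification system."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # conformity assessment and ongoing obligations
    LIMITED = "limited"            # transparency duties (Article 52)
    MINIMAL = "minimal"            # largely outside the Act's onerous requirements

# Hypothetical mapping from use cases to tiers, loosely following the
# examples discussed above; real classification turns on intended purpose.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of the regulatory consequence of each tier."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
        RiskTier.HIGH: "Conformity assessment, data governance, record-keeping, human oversight.",
        RiskTier.LIMITED: "Transparency: users must be told they are interacting with AI.",
        RiskTier.MINIMAL: "No additional obligations beyond existing law.",
    }[tier]

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

The point of the sketch is the proportionality principle itself: the regulatory consequence is a function of the tier, so the heaviest duties attach only at the top of the scale.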
Recent Developments (2023-2024)
As the AI Act has moved through the legislative process, several significant developments have emerged:
- Revised Provisions for General-Purpose AI Systems: In December 2023, the European Parliament and Council reached a provisional agreement introducing dedicated obligations for general-purpose AI models, with stricter requirements for models deemed to pose systemic risk. This amendment aims to address concerns about the broad applicability and potential risks of these systems, which can be adapted for a variety of uses, including high-risk ones. The change has been controversial, with industry stakeholders warning of increased compliance costs and a potential stifling of innovation.
- Enhanced Role for National Supervisory Authorities: Another major development has been the strengthening of the role of national supervisory authorities. In early 2024, the Council proposed amendments that would grant these authorities greater autonomy in enforcing the AI Act, including the power to conduct on-site inspections without prior notice. This move is intended to ensure more effective enforcement but has raised concerns about the consistency of application across different member states.
- New Requirements for AI in Employment and Education: The Parliament has also pushed for stricter regulations on the use of AI in employment and education, where the potential for bias and discrimination is particularly high. These sectors are now subject to additional transparency and fairness requirements, which will be enforced through periodic audits and impact assessments.
- Ongoing Debate on AI and Fundamental Rights: Throughout 2023 and 2024, there has been ongoing debate within the EU institutions about the balance between AI innovation and the protection of fundamental rights. Several MEPs have called for even stricter regulations on AI systems that could infringe on privacy or exacerbate inequality, while others argue that overly restrictive rules could hinder the EU’s competitiveness in the global AI market.
Governance and the Role of National Authorities
The AI Act outlines a robust governance framework, entrusting national supervisory authorities with the responsibility to enforce the regulation. This approach mirrors the decentralized enforcement model of the GDPR, where Data Protection Authorities (DPAs) are responsible for oversight in their respective member states. Under the AI Act, these authorities are empowered to conduct audits, investigate breaches, and impose fines of up to 6% of a company’s total worldwide annual turnover or EUR 30 million, whichever is higher, for the most serious violations (Article 71), a provision inspired by the enforcement powers granted under Regulation (EU) 2019/1020.
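As a worked example of the fine ceiling in Article 71 of the proposal (EUR 30 million or 6% of total worldwide annual turnover, whichever is higher), the short sketch below computes the maximum exposure for a hypothetical company; the turnover figure is invented for illustration.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling on administrative fines for the most serious violations
    under Article 71 of the proposal: EUR 30 million or 6% of total
    worldwide annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# A hypothetical company with EUR 2 billion turnover faces a ceiling of
# EUR 120 million, since 6% of turnover exceeds the EUR 30 million floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000
```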
At the EU level, the European Artificial Intelligence Board (EAIB), akin to the European Data Protection Board established under the GDPR, will coordinate actions across member states, ensuring consistency in the application of the AI Act and facilitating cooperation between national authorities (Article 56).
The Protection of Fundamental Rights
The AI Act places significant emphasis on the protection of fundamental rights, aligning with the EU’s commitment under the Charter of Fundamental Rights of the European Union. High-risk AI systems, in particular, must demonstrate compliance with these rights through rigorous conformity assessments (Article 43). This requirement reflects the principles of non-discrimination and privacy protection as outlined in Directive 2000/43/EC implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, and Directive 95/46/EC (now repealed and replaced by the GDPR), which focused on the protection of individuals with regard to the processing of personal data.
By embedding these protections into the AI Act, the EU seeks to prevent AI-driven discrimination and privacy infringements, addressing concerns that have been highlighted in the context of the GDPR and various rulings by the Court of Justice of the European Union (CJEU), such as the landmark decision in Case C-311/18 (Schrems II), which underscored the importance of safeguarding fundamental rights in the context of data transfers.
Transparency and Accountability: Cornerstones of the AI Act
Transparency and accountability are central pillars of the AI Act, drawing on the EU’s broader regulatory ethos as seen in the Open Data Directive (Directive (EU) 2019/1024), which promotes transparency in public sector data. Under the AI Act, high-risk AI systems must meet stringent transparency requirements, including the documentation of their algorithms and decision-making processes (Articles 11-13). This ensures that AI systems can be audited and held accountable for their actions, particularly in sectors where their decisions can have significant legal or social consequences.
This focus on transparency is also reflected in the requirement for human oversight (Article 14), which echoes the GDPR’s emphasis on the right not to be subject to a decision based solely on automated processing (Article 22 of the GDPR). However, the technical complexity of many AI systems, particularly those involving deep learning, presents significant challenges to achieving meaningful transparency, raising concerns about the practical enforceability of these provisions.
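To illustrate what the documentation, record-keeping, and human-oversight duties (Articles 11-14) could look like in practice, the sketch below models a hypothetical decision-log entry for a high-risk system. The record structure and field names are this article's assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionRecord:
    """A hypothetical audit-log entry of the kind Articles 11-13 contemplate:
    enough detail that a supervisory authority can reconstruct how a
    high-risk system reached an individual decision."""
    system_id: str
    timestamp: datetime
    input_summary: str          # what the system was asked to decide
    output: str                 # the automated recommendation
    model_version: str          # ties the decision to documented algorithms
    reviewed_by_human: bool     # Article 14: human oversight
    override_reason: str | None = None  # recorded when a human overrules the system

log: list[DecisionRecord] = []
log.append(DecisionRecord(
    system_id="loan-screening-v3",   # invented identifier for illustration
    timestamp=datetime.now(),
    input_summary="credit application #4821",
    output="refer for manual review",
    model_version="3.2.1",
    reviewed_by_human=True,
))
```

The design choice worth noting is that the record links each output to a specific model version and a human reviewer, which is what makes after-the-fact auditing of an opaque system tractable at all.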
Challenges for Corporations: Navigating the Regulatory Landscape
For corporations, the AI Act introduces substantial compliance obligations, particularly for those developing or deploying high-risk AI systems. These requirements echo the compliance challenges that companies have faced under the GDPR, where extensive documentation, impact assessments, and ongoing monitoring are necessary to ensure legal compliance. The AI Act’s emphasis on conformity assessments (Article 43) and post-market monitoring (Article 61) imposes significant administrative and financial burdens, which could be particularly challenging for small and medium-sized enterprises (SMEs).
Furthermore, the stringent requirements may stifle innovation within the EU, particularly in high-risk areas where the regulatory burden is heaviest. This concern is compounded by the potential competitive disadvantage that EU-based companies may face compared to counterparts in jurisdictions with less restrictive AI regulations. The risk of regulatory arbitrage, where companies may relocate to more permissive environments, is a significant concern, echoing fears raised during the implementation of the GDPR.
Challenges for Individuals: Ensuring Rights in a Complex AI Landscape
While the AI Act is designed to protect individual rights, it also presents challenges in ensuring that these protections are effectively realized. The complexity of AI systems, coupled with the opacity of many algorithms, makes it difficult for individuals to understand how their rights might be impacted. This issue is particularly pertinent in the context of Directive (EU) 2016/680, which governs the protection of personal data in the law enforcement context, where automated decision-making can have profound consequences for individuals’ rights.
Moreover, the enforcement mechanisms provided by the AI Act, although robust, may not be easily accessible to all individuals. The redress mechanisms established under the GDPR, which the AI Act’s enforcement provisions resemble, have been criticized for their complexity and the difficulties individuals face in pursuing legal remedies. Ensuring that the AI Act’s protections are meaningful will require not only strong enforcement but also public education and awareness initiatives.