EU Parliament Approves Groundbreaking AI Regulation
BRUSSELS — In a historic move, the European Parliament has given the green light to the world’s first comprehensive framework aimed at mitigating the risks associated with artificial intelligence (AI).
The rapid expansion of the AI sector has led to significant financial gains, but has also raised concerns about issues such as bias, privacy violations, and even existential threats to humanity.
The AI Act establishes a system of risk classification for AI products, with varying levels of scrutiny based on the potential risks involved.
According to the creators of the law, its primary objective is to make AI technology more “human-centric.”
“The AI Act marks the beginning of a new era in AI governance, setting the stage for a regulatory framework centered around technology,” stated MEP Dragos Tudorache.
This groundbreaking legislation also positions the EU as a global leader in addressing the challenges posed by AI.
China has already implemented a series of AI regulations, and US President Joe Biden issued an executive order in October 2023 requiring AI developers to share safety-test results with the government, but the EU has taken a more comprehensive approach.
“The adoption of the AI Act signifies the dawn of a new era in AI regulation, with far-reaching implications,” said Enza Iannopollo, principal analyst at Forrester.
“The EU AI Act sets a precedent as the first and only set of mandatory requirements to mitigate AI risks,” she added.
She also highlighted that this legislation positions the EU as the de facto global standard for trustworthy AI, leaving other regions, including the UK, playing catch-up.
Although the UK hosted an AI safety summit in November 2023, it currently does not have plans to enact legislation similar to the AI Act.
The core principle of the law is to regulate AI based on its potential to harm society, with stricter rules applied to higher-risk applications.
AI systems judged to present a “clear risk to fundamental rights,” including certain uses of biometric data, will be prohibited.
High-risk AI applications, including those in critical sectors like infrastructure, education, healthcare, law enforcement, border security, and elections, will be subject to stringent requirements.
On the other hand, low-risk services such as spam filters will face lighter regulation, as the EU anticipates that most services will fall into this category.
The AI Act also introduces measures to address risks associated with generative AI tools and chatbots, such as OpenAI’s ChatGPT.
Producers of certain general-purpose AI systems will be required to be transparent about the data used to train their models and comply with EU copyright laws.
Before the law can take effect, it must undergo further procedures, including review by lawyer-linguists for accuracy and translation, as well as formal approval by the Council of the EU, which comprises representatives of the member states.
Despite these steps, businesses are already strategizing on how to comply with the new regulations.
Kirsten Rulf, a partner at Boston Consulting Group and former advisor to the German government, revealed that over 300 companies have sought guidance on scaling their AI technologies and navigating the legal landscape.
“Businesses are eager for legal clarity and are actively seeking ways to leverage AI while adhering to the new regulations,” she told the BBC.