More than two years after it was first proposed, policymakers in Brussels were still debating the core contents of the EU's landmark AI regulations just hours before reaching a deal.
When European lawmakers reached a provisional deal on landmark artificial intelligence rules last week, they had reason to celebrate.
The EU AI Act reached a long-awaited climax on Friday following not only two years of broad discussion but a three-day “marathon” debate between the European Commission, the European Parliament, and EU member states to iron out disagreements. All-nighters were pulled. Bins overflowed with the remnants of coffee, energy drinks, and sugary snacks. It was the kind of environment you’d expect from students cramming for finals, not lawmakers working on legislation that could set a blueprint for global AI regulation. The chaos was largely thanks to two contentious issues that threatened to derail the entire negotiation: facial recognition and powerful “foundation” models.
When the AI Act was first proposed in April 2021, it was intended to combat “new risks or negative consequences for individuals or the society” that artificial intelligence could cause. The act focused on tools already being deployed in fields like policing, job recruitment, and education. But while the bill’s overall intent didn’t change, AI technology did — and rapidly. The proposed rules were ill-equipped to handle general-purpose systems broadly dubbed foundation models, like the tech underlying OpenAI’s explosively popular ChatGPT, which launched in November 2022.
Much of the last-minute delay stemmed from policymakers scrambling to ensure these new AI technologies — as well as yet-undeveloped future ones — fell under the legislation’s scope. Rather than simply regulating every area they might appear in (a list including cars, toys, medical devices, and far more), the act used a tier system that ranked AI applications based on risk. “High risk” AI systems that could impact safety or fundamental rights were subjected to the most onerous regulatory restrictions. Moreover, General Purpose AI Systems (GPAI) like OpenAI’s GPT models faced additional regulations. The stakes of that designation were high — and accordingly, debate over it was fierce.
“At one point, it looked like tensions over how to regulate GPAI could derail the entire negotiation process,” says Daniel Leufer, senior policy analyst at Access Now, a digital human rights organization. “There was a huge push from France, Germany, and Italy to completely exclude these systems from any obligations under the AI Act.”
France, Germany, and Italy sought last-minute compromises for foundation AI models
Those countries, three of Europe’s largest economies, began stonewalling negotiations in November over concerns that tough restrictions could stifle innovation and harm startups developing foundation models in their jurisdictions. That stance put them at odds with other EU lawmakers who sought tight regulations on how such models can be developed and used. This last-minute wrench thrown into the AI Act negotiations contributed to the delay in reaching an agreement, but it wasn’t the only sticking point.
In fact, a sizable portion of the actual legislation seemingly remained unsettled even days before the provisional deal was made. At a meeting of European communications and transport ministers on December 5th, German Digital Minister Volker Wissing said that “the AI regulation as a whole is not quite mature yet.”
GPAI systems faced requirements like disclosing training data, energy consumption, and security incidents, as well as being subjected to additional risk assessments. Unsurprisingly, OpenAI (a company known for refusing to disclose details about its work), Google, and Microsoft all lobbied the EU to water down the harsher regulations. Those attempts seemingly paid off. While lawmakers had previously considered categorizing all GPAIs as “high risk,” the agreement reached last week instead subjects them to a two-tier system that gives companies some wiggle room to avoid the AI Act’s harshest restrictions. This, too, likely contributed to the last-minute delays being hashed out in Brussels last week.
“In the end, we got some very minimal transparency obligations for GPAI systems, with some additional requirements for so-called ‘high-impact’ GPAI systems that pose a ‘systemic risk’,” says Leufer — but there’s still a “long battle ahead to ensure that the oversight and enforcement of such measures works properly.”
There’s one much tougher category, too: systems with an “unacceptable” risk level, which the AI Act effectively bans outright. And in negotiations down to the final hours, member states were still sparring over whether this should include some of their most controversial high-tech surveillance tools.
An outright ban on facial recognition AI systems was fiercely contested
The European Parliament initially greenlit a total ban on biometric systems for mass public surveillance in July. That included creating facial recognition databases by indiscriminately scraping data from social media or CCTV footage; predictive policing systems based on location and past behavior; and biometric categorization based on sensitive characteristics like ethnicity, religion, race, gender, citizenship, and political affiliation. It also banned both real-time and retroactive remote biometric identification, with the only exception being to allow law enforcement to use delayed recognition systems to prosecute “serious crimes” following judicial approval. The European Commission and EU member states contested it and won concessions — to some critics’ consternation.
The draft approved on Friday includes exceptions that permit limited use of automated facial recognition, such as cases where identification occurs after a significant delay. It may also be approved for specific law enforcement use cases involving national security threats, though only under certain (currently unspecified) conditions. That has likely appeased bloc members like France, which has pushed to use AI-assisted surveillance for purposes like counterterrorism and securing the upcoming 2024 Olympics in Paris, but human rights organizations like Amnesty International have been more critical of the decision.
“It is disappointing to see the European Parliament succumb to member states’ pressure to step back from its original position,” said Mher Hakobyan, advocacy adviser on AI regulation at Amnesty International. “While proponents argue that the draft allows only limited use of facial recognition and subject to safeguards, Amnesty’s research in New York City, Occupied Palestinian Territories, Hyderabad, and elsewhere demonstrates that no safeguards can prevent the human rights harms that facial recognition inflicts, which is why an outright ban is needed.”
To make things even more complicated, we can’t yet assess the specific compromises that were made, because the full approved text of the AI Act won’t be available for several weeks. In fact, a finalized text probably doesn’t exist within the EU at all yet. Compromises in these negotiations are often agreed on principles rather than exact wording, says Michael Veale, an associate professor in digital rights and regulation at the UCL Faculty of Laws, which means it could take some time for lawmakers to refine the legal language.
Also, because only a provisional agreement was reached, the final legislation is still subject to change. There’s no official timeline, but policy experts appear fairly unanimous in their estimates: the AI Act is expected to become law by mid-2024 following its publication in the EU’s official journal, with its provisions coming into force gradually over the following two years.
That gives policymakers some time to work out exactly how these rules will be enforced. AI companies can also use that time to ensure their products and services will be compliant with the rules when provisions come into effect. Ultimately, that means we might not see everything within the AI Act regulated until mid-2026. In AI development years, that’s a long time — so by then, we may have a whole new set of issues to deal with.