LONDON – The generative AI boom has sent governments around the world scrambling to regulate the emerging technology, but it has also raised the risk of upending a European Union push to adopt the world’s first comprehensive rules on artificial intelligence.
The 27-nation bloc’s AI law has been hailed as a pioneering set of rules. But with time running out, it’s uncertain whether the EU’s three branches of government can reach agreement on Wednesday in what officials hope will be a final round of closed-door talks.
Europe’s years-long effort to draw up AI guardrails has been stalled by the recent emergence of generative AI systems such as OpenAI’s ChatGPT, which have wowed the world with their ability to produce human-like work but raised fears about the risks they pose.
These concerns have driven the US, UK, China and global coalitions such as the Group of 7 major democracies into the race to regulate the fast-developing technology, although they’re still playing catch-up with Europe.
In addition to regulating generative AI, EU negotiators must resolve a long list of other thorny issues, such as whether to impose a complete ban on police use of facial recognition systems, which have raised privacy concerns.
The chances of a political agreement between EU lawmakers, member state representatives and executive commissioners “are pretty high, partly because all the negotiators want a political win” on a flagship piece of legislation, said Kris Shrishak, a senior fellow specialising in AI governance at the Irish Council for Civil Liberties.
“But the issues on the table are significant and critical, so we can’t rule out the possibility of no deal,” he said.
About 85% of the technical wording of the bill has already been agreed, Carme Artigas, Spain's secretary of state for digitalisation and AI, whose country holds the rotating EU presidency, told a press briefing in Brussels on Tuesday.
If an agreement isn’t reached in the latest round of talks, which began Wednesday afternoon and are expected to run late into the night, negotiators will be forced to pick it up again next year. That increases the chances that the legislation will be delayed until after EU-wide elections in June – or take a different direction when new leaders take office.
One of the main sticking points is foundation models, the advanced systems that underpin general-purpose AI services such as OpenAI’s ChatGPT and Google’s Bard chatbot.
These systems, also known as large language models, are trained on vast troves of written works and images scraped from the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and performs tasks according to predetermined rules.
The AI Act was conceived as product safety legislation, modelled on similar EU regulations for cosmetics, cars and toys. It would classify AI applications according to four levels of risk – from minimal or no risk for video games and spam filters, to unacceptable risk for social scoring systems that judge people based on their behaviour.
The new wave of general-purpose AI systems released since the legislation was first drafted in 2021 has prompted European lawmakers to strengthen the proposal to cover foundation models.
Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or the creation of biological weapons. They act as basic structures for software developers building AI-powered services, so “if these models are rotten, whatever is built on top of them will be rotten too – and the builders won’t be able to fix it,” according to Avaaz, a non-profit advocacy group.
France, Germany and Italy have resisted updating the legislation, calling instead for self-regulation – a change of heart seen as an attempt to help home-grown generative AI players, such as French startup Mistral AI and Germany’s Aleph Alpha, compete with big US tech companies like OpenAI.
Brando Benifei, an Italian member of the European Parliament who is co-leading the body’s negotiating efforts, was optimistic about resolving differences with member states.
There’s been “some movement” on foundation models, although there are “more problems to find an agreement” on facial recognition systems, he said.