
Europe’s AI Act is a ‘decent start’ but is it workable?

Watered-down legislation still puts Europe ahead on regulating AI but enforcement could be held back by technological gaps

The general use of facial recognition to try to identify individuals in real-time and the creation of facial recognition databases by scraping internet images are among AI uses banned in EU legislation. Photograph: Smith Collection/Gado/Getty Images

Last week, the European Parliament approved the Artificial Intelligence (AI) Act, a groundbreaking piece of EU regulation that will place significant controls and responsibilities on this fast-changing sector, bolstered by the threat of large fines.

The legislation is less than it should have been. Three months after a highly contested draft agreement was approved by the European Council and parliamentary negotiators, some rigorous protections were stripped out amid intense industry and special interest lobbying.

The result nonetheless shows grit and resolution. Much has remained in the final text to rile companies that had hoped for a continuation of that no-regulation farce known as industry self-regulation. Now, as with the EU’s other landmark legislation around technology and data, privacy and human rights – notably the General Data Protection Regulation (GDPR), and the Digital Services and Digital Markets Acts (DSA and DMA) – Europe is yet again far in advance of anywhere else in developing rules for this promising yet threatening technology.


The AI industry certainly won’t have been popping champagne corks after last week’s vote. The Act remains risk-based, meaning that the greater the potential risks and impacts from a type of AI, the greater the responsibility to manage that risk and comply with stricter controls and safeguards. Companies did not get the option they would have preferred: a low bar of generally applied, light-touch regulation.


High-risk AIs, including those used in education, critical infrastructure, employment, services like banking and healthcare, and some used by law enforcement, will have extra requirements for risk assessment and mitigation, transparency, accuracy and oversight. Citizens will have the right to submit complaints and get explanations about decisions made by AIs.

There will also be controls imposed on general-purpose AIs and the large language models they are based on – a category that includes some of the AIs most in the public eye, such as OpenAI’s Microsoft-backed ChatGPT and Google’s Gemini. They will have to comply with EU copyright laws and summarise the materials used to train them.


Both of these areas have been contentious. Content creators have taken legal cases over the inclusion of their work in AI training materials, and rights activists and researchers have repeatedly noted the hidden (and not so hidden) sex, gender and race biases of large language models based on the unfiltered content miasma of the entire internet.

This is an intriguing regulatory stick, as the vast majority of AI companies have been deeply reluctant to reveal the content sources for their models.

Deceptive “deep fake” images and video and other manipulated media must be clearly labelled as such and detectable, with techniques such as digital watermarks. Some AI uses are banned outright, too, such as biometric categorisation systems based on sensitive characteristics, the general use of facial recognition that attempts to identify individuals in real-time, and the creation of facial recognition databases by scraping internet images (tough luck, Clearview).

Still, law enforcement did receive large concessions. Some biometric data and facial recognition can be used, with certain restrictions, to fight “serious crime” – a category that has always been subject to mission creep, expanding to include ever-fuzzier definitions of “serious”.

And civil rights groups have noted that vulnerable groups, such as immigrants, can be intensively surveilled and, unlike under the GDPR, are not given the same personal or redress rights as EU citizens, even though they are inside the EU. Also, no defined controls were placed on AIs developed by EU companies and exported for uses not allowed here.

However, few adequate technological solutions currently exist to implement and enforce many of the Act’s stipulations, which presents a serious quandary.

While only some elements of the Act will be put in place in 2024, with most coming online in 2025 or later, technologists and lawyers have questioned how the Act will be interpreted in practical terms and enforced within that time frame. We don’t even have basics such as workable watermarks.


Then there’s the inexplicably ignored fact that no one understands how AIs work. This poses extraordinary challenges and runs counter to other tech and software regulation, including the DSA’s transparency requirements for algorithms. How do you regulate what cannot (yet) be adequately defined or understood? The world has had years to fathom how less-complex social media algorithms work and to hold companies to account for the damaging fallout they cause, yet we’ve still got mostly nowhere (the DSA may prove more effective).

In a must-read opinion piece in the MIT Technology Review, security experts Nathan Sanders and Bruce Schneier outline just how closely AI’s big problems and threats duplicate those we have seen, yet failed to tackle, with social media. Both industries often interlock treacherously with advertising, are structured on exhaustive surveillance, drive dangerous virality, lock users into platforms and services, and give rise to monopolies.

These are known problems, still unfixed. As they forcefully argue, we know what’s coming with AI and therefore cannot afford to repeat the same mistakes.

The AI Act, while a decent regulatory start, doesn’t address these deep structural issues. More is needed. Meanwhile, some AIs should be restricted until the Act’s enforceability matches its intent.