By Martin Coulter
LONDON (Reuters) – The world’s biggest technology companies have launched a final push to persuade the European Union to take a light-touch approach to regulating artificial intelligence as they seek to avoid the risk of billions of dollars in fines.
EU lawmakers approved the AI Act in May, the world’s first comprehensive set of rules governing the technology, after months of intense negotiations between different political groups.
But until the codes of practice accompanying the law are finalised, it remains unclear how strictly the rules around “general purpose” AI (GPAI) systems, such as OpenAI’s ChatGPT, will be enforced and how many copyright infringement lawsuits and multi-billion dollar fines the companies could face.
The EU invited companies, academics and others to contribute to the code of practice and received nearly 1,000 applications, an unusually high number, according to a source familiar with the matter who spoke on condition of anonymity because they were not authorized to speak publicly.
The AI code of practice will not be legally binding when it comes into force late next year, but it will provide companies with a checklist they can use to demonstrate compliance. A company that claims to be complying with the law while ignoring the code could face a legal challenge.
“The code of practice is crucial. If we implement it properly, we can continue to innovate,” said Boniface de Champris, senior policy officer at trade organisation CCIA Europe, whose members include Amazon, Google and Meta.
“If it’s too narrow or too specific, it will become very difficult,” he added.
DATA EXTRACTION
Companies such as Stability AI and OpenAI have had to consider whether using best-selling books or stock photography to train their models without the creators’ permission constitutes copyright infringement.
Under the AI law, companies will be required to provide “detailed summaries” of the data used to train their models. In theory, a content creator who discovers that their work was used to train an AI model could seek compensation, although that issue is currently being considered by the courts.
Some business leaders said the required summaries should contain few details to protect trade secrets, while others said copyright holders have a right to know whether their content has been used without permission.
OpenAI, which has been criticized for refusing to answer questions about the data used to train its models, has also asked to join the working groups, according to a person familiar with the matter, who declined to be named.
Google has also filed an application, a spokesman told Reuters. Amazon said it hopes to “provide expertise and ensure the success of the code of practice.”
Maximilian Gahntz, head of AI policy at the Mozilla Foundation, the nonprofit behind the Firefox web browser, said he was concerned that companies were “doing everything they could to avoid transparency.”
“The AI Act represents the best chance to shine a light on this crucial aspect and to clear up at least part of the black box,” he said.
BIG BUSINESS AND PRIORITIES
Some business players have criticised the EU for prioritising technology regulation over innovation, and those drafting the code of practice will be looking to find a compromise.
Last week, former European Central Bank President Mario Draghi said the bloc needed better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States.
Thierry Breton, a staunch defender of European regulation and critic of non-compliant technology companies, quit his post as European commissioner for the internal market this week after a clash with Ursula von der Leyen, president of the European Commission, the bloc’s executive.
Amid growing protectionism within the EU, local tech companies are hoping that exceptions will be introduced into the AI law to benefit emerging European companies.
“We have insisted that these obligations must be manageable and, if possible, tailored to startups,” said Maxime Ricard, policy director at Allied for Startups, a network of trade organizations representing small technology companies.
Once the code is published in the first half of next year, tech companies will have until August 2025 before their compliance with it begins to be assessed.
Nonprofits including Access Now, the Future of Life Institute and Mozilla have also asked to contribute to the code.
Gahntz said: “As we enter the phase where many of the AI law’s obligations are being fleshed out in greater detail, we must be careful not to allow large AI players to water down important transparency mandates.”
(Reporting by Martin Coulter; Editing by Matt Scuffham and Barbara Lewis)