The three main Western jurisdictions developing artificial intelligence technologies have signed the first legally binding international treaty on the use of AI, as companies fear a patchwork of national regulations could hamper innovation.
The United States, the EU and the UK signed the Council of Europe’s convention on artificial intelligence on Thursday, which emphasises human rights and democratic values in its approach to regulating public and private systems. Other countries also signed the pact on Thursday.
The convention was developed over two years by more than 50 countries, including Canada, Israel, Japan and Australia. It requires signatories to be accountable for any harmful and discriminatory outcomes of AI systems. It also demands that the outputs of these systems respect rights to equality and privacy, and that victims of AI-related rights violations have legal recourse.
“With innovation as rapid as AI, it’s really important that we take this first step on a global scale,” said Peter Kyle, the UK’s secretary of state for science, innovation and technology. “This is the first agreement with real global reach, and it brings together a very disparate set of nations.”
“The fact that we are expecting such a diverse group of nations to sign this treaty shows that we are truly capable, as a global community, of meeting the challenges posed by AI,” he added.
Although the treaty is billed as “legally binding”, critics point out that it provides for no sanctions such as fines. Compliance is primarily measured through monitoring, a relatively weak form of enforcement.
Hanne Juncher, the Council of Europe’s director in charge of the negotiations, said: “This confirms that [the convention] goes beyond Europe, and that these signatories were extremely invested in the negotiations and… satisfied with the outcome.”
A senior Biden administration official told the Financial Times that the US was “committed to ensuring that AI technologies support respect for human rights and democratic values” and saw “key added value of the Council of Europe in this area.”
The treaty comes as governments are drafting a series of new regulations, commitments and agreements to govern evolving AI software. These include the European AI Act, the G7 agreement reached last October and the Bletchley Declaration, signed in November by 28 countries, including the US and China.
While the US Congress has yet to pass a comprehensive framework for regulating AI, lawmakers in California, where many AI start-ups are based, did so last week. The bill, which has divided opinion in the industry, awaits the signature of the state’s governor.
The EU regulation, which came into force last month, is the first major regional law, but Kyle stressed that it remains divisive among companies creating AI software.
“Companies like Meta, for example, are refusing to launch their latest Llama product in the EU because of this, so it’s really useful to have a baseline that goes beyond just individual territories,” he said.
Although the EU’s AI Act has been seen as an attempt to set a precedent for other countries, the signing of the new treaty illustrates a more co-ordinated international approach, rather than a reliance on the so-called Brussels effect.
Věra Jourová, European Commission Vice-President for Values and Transparency, said: “I am very pleased to see that so many international partners are ready to sign the AI Convention. The new framework sets out important milestones for the design, development and use of AI applications, which should provide confidence and assurance that AI innovations respect our values of protecting and promoting human rights, democracy and the rule of law.”
“This was the basic principle of… the EU AI Act, and it now serves as a model around the world,” she added.
Additional reporting by James Politi in Washington