1. Introduction

The Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law was concluded by the Council of Europe (CoE) Committee on Artificial Intelligence on March 24, 2024, finally landing a decisive blow in the form of a provisional agreement on the text of a treaty on artificial intelligence and human rights (Treaty).

This Treaty is the first of its kind and aims to establish basic rules governing AI that safeguard human rights, democratic values and the rule of law among nations. As a CoE treaty, it is open for ratification by countries worldwide. It is worth noting that in this epic battle, the CoE members stand in one corner of the global arena, while in the opposite corner stand the observers, representing various nations such as the US, the UK, Canada and Japan, eyeing the proceedings, ready to pounce with their influence. Although lacking voting rights, their mere presence sends shockwaves through the negotiating ring, influencing the very essence of the Treaty.

While there originally seemed to be a consensus on the need for a robust framework setting out common general principles and rules based on shared values and harnessing the benefits of AI, as the bout raged on and the negotiations progressed, countries began to advocate limitations, particularly on scope, that served their own interests and consequently diluted the strength of the final text. But this battle is not just about the text of the Treaty; it is a clash of ideologies, a struggle for dominance over the future of AI. Who will emerge victorious? Below are the most relevant takeaways.

2. EU vs. US: The Clash of the Titans

2.1 Scope of Application of the Treaty

In this arena of international negotiations, the European Commission (EC) stepped into the ring as the body negotiating on behalf of the EU member countries, aiming to cover both the public and private sectors by default.

But across the ring, the American negotiators demanded an opt-in model, under which private companies would fall within scope only if their government so chooses.

European countries wanted US support – after all, the US is home to the sector’s leading AI companies – but feared that including language allowing the US to exempt American private companies from any new obligations imposed by the Treaty would fundamentally weaken the initiative.

Consequently, the EC proposed a time-limited opt-out from certain Treaty obligations for private entities, but that provision did not make the cut in the final text. The US opt-in option was the one that emerged victorious.

2.2 Harmonization With Other Rules – the EU AI Act

While both the Treaty and the EU AI Act are designed to protect human rights, the EU AI Act’s way of approaching the matter leans towards harmonized EU market rules along traditional product safety principles. This means a complete alignment between the two is unlikely.

However, in a daring move to align with the EU AI Act, the EC threw down the gauntlet, demanding that national security, defense, law enforcement and research and development activities be exempted from the scope.

Moreover, it aimed for the Treaty to allow the parties to derogate from the transparency-related provisions where necessary for the prevention, detection and prosecution of crimes, and to ban specific applications deemed to pose an unacceptable risk, such as social scoring.

The EU emerged victorious in that round, with the final text limiting the scope in relation to national security, defense and research and development activities.

2.3 Treaty’s Global Ambitions

The intention of the Treaty was to achieve a global agreement, so securing more signatories was the preference of countries like Germany, France, Spain, Czechia, Estonia, Ireland, Hungary and Romania, as opposed to an ambitious text gaining less international support. The EC therefore had to show more flexibility to accommodate the constraints of all parties. Yet, with the Treaty limited to public bodies and the exempted activities excluded from its application, there seems to be very little left of the original purpose.

3. Results Unveiled

So, the battleground was set, with the fate of the Treaty hanging in the balance. Did it emerge as a robust defender of human rights and democracy, or was it rendered impotent by the weight of its own exemptions?

The Treaty’s final scope includes AI systems that have the potential to interfere with human rights, democracy and the rule of law, but only when deployed by public actors and private ones acting on their behalf. In addition, every participating country shall address the risks and impacts arising from private actors not working for the public sector in accordance with the objective and purpose of the Treaty. This means that, when signing or depositing the convention for ratification, each country will have to declare whether it intends to apply the obligations to private actors or take other appropriate measures – this declaration can be amended at any time. Therefore, the US opt-in approach seems to have won this round, with CoE countries prioritizing broader participation over a wider scope.

Furthermore, the scope is also limited in relation to national security, defense and research and development activities. With the opt-in mechanism for private companies and the exemptions agreed, the real impact of the Treaty might be hard to see – it reads more like a declaration than a binding treaty.

An additional aspect to consider is the way some provisions have been drafted, such as the one under which the parties shall “seek to ensure that AI systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes.” This wording suggests a recommendation rather than an obligation.

Finally, while there is a need for an international agreement establishing basic obligations to respect human dignity, the rule of law and democratic principles when AI is used, it appears that the applicability and enforceability of the Treaty will be reduced, which could hinder the parties’ ability to act against unfair uses of AI systems. This could have negative consequences for both the parties and society.

Neither side emerged unscathed from this fierce battle, and no party clearly won after the final round.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only and is not intended to constitute or be relied upon as legal advice.