
First Global Treaty on Artificial Intelligence and Human Rights Has Been Presented

AI Friend or Foe? - September 7, 2024

The Council of Europe has taken a momentous step by approving the first legally binding international treaty on artificial intelligence (AI). The treaty is intended to ensure that the development of this technology is consistent with democratic principles and human rights. At a conference in Vilnius, states including the US, the UK and the European Union were among the first to sign it.

Such an agreement is needed to regulate a rapidly evolving field like AI, which offers enormous opportunities for society but also poses significant risks to human rights, privacy and civil liberties. With the Framework Convention, the Council of Europe seeks to balance technological innovation against the protection of fundamental rights.

The treaty covers the entire life cycle of AI systems, from design, development and implementation through to end use. It requires that AI systems respect human rights law and democratic principles, and it aims to prevent unethical and irresponsible uses of AI, such as unrestricted facial recognition, which compromise people’s privacy and security.

The treaty builds on the European Union’s existing work with the European Artificial Intelligence Act, which came into force in August 2024. The EU legislation establishes a legal framework for the safe and responsible use of AI based on a risk-based approach, categorising AI applications according to their level of risk to human rights, with high-risk technologies subject to rigorous scrutiny.
Ursula von der Leyen, President of the European Commission, welcomed the signing of the Framework Convention, stating that the European AI Act is becoming a global reference. This underlines the EU’s leading role in regulating artificial intelligence and its ability to shape international technology governance.

Treaty objectives

The treaty is based on a number of key pillars that will ensure that AI systems operate in accordance with the law and human rights. The main objectives are as follows:

1. The protection of human rights is non-negotiable. This pillar is the foundation of the convention, which requires privacy, transparency and accountability in the use of AI. It is meant to prevent technologies such as facial recognition or mass surveillance from being overused or abused, and to ensure that individual rights are not compromised.
2. Responsible innovation is key. The Convention is not about stifling innovation but about promoting the development of ethical artificial intelligence. AI is an extraordinary tool that can help address challenges such as climate change or public health crises, but it must be developed and used responsibly.
3. International collaboration is essential. The treaty also aims to establish a framework for cooperation between states, including both Council of Europe Member States and non-members. The invitation to non-European countries such as the USA and Israel to participate in the treaty is a clear indication of the necessity for global coordination on this issue.
4. Access to justice is a fundamental right. The treaty guarantees effective remedies for those whose rights are violated by AI systems. Individuals must be able to rely on independent and accessible judicial mechanisms to enforce their rights.
It is now imperative that the states that have signed the convention ratify it without delay. The treaty will only enter into force once it has been ratified by at least five signatories, at least three of which must be Council of Europe Member States. This process will take time, as each state must transpose the convention into its own legal system.

Furthermore, there are some challenges related to the uniform application of the treaty in different countries, which must be addressed. It is essential that the standards set out in the Convention are effectively integrated into national legal systems, as there is considerable variation in national laws on artificial intelligence.

Artificial intelligence is undoubtedly one of the most disruptive technologies of the 21st century. While it offers enormous opportunities, especially in the areas of healthcare, transport and education, it also carries significant risks. AI technologies, such as facial recognition, are being used to monitor and surveil people in ways that challenge fundamental principles of privacy and individual freedoms.

The Council of Europe Framework Convention addresses these challenges head-on by establishing a regulatory framework that seeks to balance the protection of human rights with the promotion of innovation. It is crucial that AI-based surveillance technologies are properly regulated to prevent them from being used repressively and undermining civil liberties.

The Council of Europe Framework Convention on Artificial Intelligence is a landmark agreement that will shape the future of AI. By signing this treaty, Europe is once again leading the way in promoting responsible and ethical AI, setting an example for the rest of the world. However, challenges remain related to its implementation and ratification. It is crucial that all signatory states work together to ensure that artificial intelligence is used safely and responsibly.