New AI Law Comes into Force

AI Friend or Foe? - August 12, 2024

The European Union has recently taken significant steps by adopting legislation on artificial intelligence, recognising it as one of the most profound challenges of our time. Artificial Intelligence has the potential to revolutionise our way of life, offering unprecedented benefits that can significantly enhance human activities in various fields, both in our professional and private lives. Automating tasks, advancing healthcare and optimising industries are just a few examples of how AI can make a positive contribution to society. However, the risks associated with this technology are equally immense, and the potential for its misuse cannot be underestimated.

The power of AI, particularly its ability to replicate faces and voices with alarming accuracy, creates an environment where the lines between reality and fabrication become dangerously blurred. The creation of deepfake videos and sophisticated misinformation campaigns can be used as powerful tools by those with malicious intent, leading to widespread confusion, mistrust and the erosion of democratic values. This is not just a hypothetical scenario; we have already seen instances where such technologies have been used to undermine public trust in institutions, manipulate elections and incite social unrest. The threat goes beyond individual harm, as these tools can be used by authoritarian regimes to create sophisticated propaganda, destabilising entire regions and challenging the very foundations of the Western democratic order.

In this increasingly complex context, artificial intelligence could be weaponised by those seeking to discredit Western policies and values. The ability to rapidly disseminate false narratives could exacerbate existing tensions and make it harder for societies to distinguish truth from fiction. Moreover, the pervasive influence of AI in everyday life raises serious ethical concerns about the preservation of human dignity and the protection of fundamental rights. As AI systems become more integrated into our social and political fabric, there is a real risk that the intrinsic value of human beings – our individuality, creativity and capacity for empathy – could be overshadowed by an over-reliance on technology.

This over-reliance on artificial intelligence could lead to a profound dehumanisation of society, where people are reduced to mere data points in a vast, impersonal system. The richness of the human experience, characterised by diversity and personal identity, risks being replaced by a homogenised, robotic world where individuality is sacrificed in favour of efficiency and control. Such a scenario would not only diminish the social and political nature of human beings, but could also lead to a society where the concept of personhood is fundamentally altered, stripping away the very elements that make us uniquely human.

The European Union’s legislation on artificial intelligence therefore represents a crucial effort to balance the potential benefits of this technology with the need to guard against its many dangers. It is a recognition that while AI can drive progress, it must be carefully regulated to ensure that it serves, rather than undermines, humanity. The challenge now is to implement these regulations effectively, creating a framework that fosters innovation while protecting the core values that define our societies.

It is no coincidence that many Heads of State and Government immediately moved, not towards a total rejection of artificial intelligence, but towards its comprehensive regulation: building around the constantly developing technology a framework, an enclosure of rights and duties, that limits the operation of digital systems so as to respect the inviolable dignity of the person. At the last G7, held in Puglia, southern Italy, from 13 to 15 June, artificial intelligence was one of the hottest topics of the summit, at the strong urging of the host, Giorgia Meloni. Pope Francis also spoke on the subject: it was the first time a pontiff had joined the working sessions of the Group of Seven. “Artificial intelligence”, said the Holy Father, “is a fascinating and enormous tool”, underlining the two sides of the same coin that artificial intelligence reveals, depending on how it is used. At the end of the meeting a historic result was achieved. As the final communiqué of the summit states, the aim of the G7 is “to deepen our cooperation to harness the benefits and manage the risks of artificial intelligence. We will launch an action plan on the use of artificial intelligence in the world of work and develop a brand to support the implementation of the International Code of Conduct for organisations developing advanced artificial intelligence systems”.

The European legislation on artificial intelligence follows precisely the direction agreed at the G7 summit: to reap the benefits in the areas where AI can be used, while avoiding the risks and likely degenerations by developing a code of conduct that keeps the technology’s impact within the framework of human rights and respect for the dignity of each citizen. The new legislation came into force on 1 August 2024: technically it is called the European Artificial Intelligence Act (or AI Act). The law was proposed by the European Commission in April 2021 and approved by the European Parliament and the European Council in December 2023. It provides all Member States with a single plan to follow, based on a classification of AI systems by their level of risk. The lowest risk entails no obligations under the legislation: this includes, for example, spam filters and video games, although in these cases individual companies may voluntarily adopt additional codes of conduct. Next comes a risk related to transparency: here, chatbots will have to inform users that they are interacting with a machine, and content created by artificial intelligence will have to carry indications that it is not human. High risk, by contrast, covers systems such as AI-based medical software, which will be subject to very strict and rigorous procedures, including user information and human oversight. High-risk systems include technologies used in: critical infrastructure such as transport; education and training, which affect the development and education of individuals; work, employment and the management of workers; essential public and private services, such as obtaining credit; product safety; the enforcement of laws that may affect people’s fundamental rights; the management of security and immigration policies; and the administration of justice and democratic processes.
Very strict obligations are placed on digital operators before systems can be placed on the market, such as risk mitigation systems, logging of activities to ensure traceability of results, detailed documentation providing all necessary information about the system and its purpose, clear and adequate information to the distributor, and appropriate human control measures to minimise risk.

Finally, the unacceptable risk concerns artificial intelligence systems that enable “social scoring” by governments and companies, which is detrimental to people’s fundamental rights. While the first class of risk goes unregulated, the second is already particularly significant for the spread of fake news: artificial intelligence content created without the required labelling can generate disinformation on a large scale. With the last two levels of risk, the data involved becomes more personal and sensitive: this explains the strongly binding nature of the regulation. “The EU”, says the European Commission’s website, “aims to be a world leader in safe artificial intelligence. By developing a robust legal framework based on human rights and fundamental values, the EU can develop an AI ecosystem that benefits everyone. This means better healthcare, safer and cleaner transport and improved public services for citizens. It will bring innovative products and services, especially in the fields of energy, security and healthcare, as well as higher productivity and more efficient production for businesses, while governments can benefit from cheaper and more sustainable services such as transport, energy and waste management.” In parallel, the European Commission has also initiated the drafting of a code of conduct for providers of general-purpose artificial intelligence (GPAI), which will impose additional obligations on those developing such systems. The GPAI code, which complements the AI Act, will address areas such as transparency, copyright and risk management. It is due to be finalised and come into force next year.
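The four-tier risk pyramid described above can be sketched in code, purely as an illustration: the tier names follow the Act’s risk categories as summarised in this article, but the example systems and the `obligations` helper are hypothetical groupings for clarity, not any official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters, video games: no obligations
    TRANSPARENCY = "transparency"  # e.g. chatbots: must disclose machine interaction
    HIGH = "high"                  # e.g. AI-based medical software: strict procedures
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: prohibited outright

# Illustrative mapping of the examples named in the article to their tiers.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "video game": RiskTier.MINIMAL,
    "chatbot": RiskTier.TRANSPARENCY,
    "AI-based medical software": RiskTier.HIGH,
    "government social scoring": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Summarise the obligations the article associates with each tier."""
    return {
        RiskTier.MINIMAL: "none (voluntary codes of conduct possible)",
        RiskTier.TRANSPARENCY: "users must be told they are interacting with AI; "
                               "AI-generated content must be labelled",
        RiskTier.HIGH: "risk mitigation, activity logging, documentation, "
                       "user information, human oversight",
        RiskTier.UNACCEPTABLE: "prohibited",
    }[tier]

print(obligations(EXAMPLES["chatbot"]))
```

The point of the pyramid is that obligations scale with danger: the lighter tiers carry disclosure duties at most, while the heaviest tier is simply banned.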