The Council of the European Union recently approved the AI Act, a milestone in the regulation of artificial intelligence (AI). The legislation positions the European Union as a global leader in the regulation of emerging technologies. The AI Act aims to create a clear and consistent regulatory framework for the use of AI while ensuring the protection of the fundamental rights of European citizens.
Managing the opportunities and risks of artificial intelligence at EU level
The AI Act is driven by the need to manage the opportunities and risks of artificial intelligence. The European Commission has stressed the importance of a balanced approach that promotes technological innovation while safeguarding fundamental EU values. The regulation applies to all entities, public and private, that develop or use AI systems in the European market, regardless of their geographical location.
One of the most important aspects of the AI Act is its precise definition of an AI system. According to the text, an AI system is a machine-based system that can operate with varying degrees of autonomy and adapt based on the input it receives. Such systems can generate outputs such as predictions, content, recommendations or decisions that influence physical or virtual environments.
Risk classification
The Regulation adopts a risk-based approach and classifies AI systems into four categories: minimal, limited, high and unacceptable risk. Each category carries different responsibilities and requirements for developers and users. Unacceptable-risk systems, such as those used for social scoring or behavioural manipulation, are banned outright. High-risk systems must meet strict compliance requirements before they can be placed on the market.
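To make the tiering concrete, here is a minimal sketch in Python of how the four categories and their headline obligations might be modelled. The tier names come from the Regulation, but the one-line obligation summaries and the helper function are simplified illustrations, not official tooling.

```python
from enum import Enum

class RiskTier(Enum):
    # The four risk tiers named in the AI Act.
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical one-line summaries of each tier's obligations; the
# authoritative requirements are spelled out in the Regulation itself.
OBLIGATIONS = {
    RiskTier.MINIMAL: "no mandatory requirements; voluntary codes of conduct",
    RiskTier.LIMITED: "transparency duties, e.g. disclosing AI-generated content",
    RiskTier.HIGH: "strict conformity assessment before market placement",
    RiskTier.UNACCEPTABLE: "banned from the EU market",
}

def market_entry_allowed(tier: RiskTier) -> bool:
    # Only unacceptable-risk systems are prohibited outright.
    return tier is not RiskTier.UNACCEPTABLE

for tier in RiskTier:
    status = "allowed" if market_entry_allowed(tier) else "banned"
    print(f"{tier.value:>12}: {status} - {OBLIGATIONS[tier]}")
```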
Exemptions for defence, national security and other uses
Not all AI systems fall within the scope of the Regulation. There are exemptions for systems used exclusively for military, defence or national security purposes, as well as for those used in scientific research or, with some exceptions, released as free and open-source software. In addition, the AI Act does not apply to purely personal, non-professional use by private individuals, mirroring the household exemption of the GDPR.
Implications for businesses: risk assessment obligations and compliance requirements
The AI Act imposes significant obligations on companies that develop or use AI systems. They must carefully assess the risks associated with their systems and ensure that those systems meet the compliance requirements set out in the Act. Penalties for the most serious infringements can reach €35 million or 7% of the company's global annual turnover, whichever is greater.
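Because the cap is defined as "whichever is greater", the effective ceiling depends on the company's size. The figures below come from the Act; the helper function itself is a purely illustrative sketch of the arithmetic.

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious infringements: EUR 35 million
    or 7% of worldwide annual turnover, whichever is greater."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 2 billion in global turnover, the 7% figure
# (EUR 140 million) exceeds the fixed EUR 35 million floor.
print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```

In practice, the fixed €35 million floor dominates for smaller firms, while for large multinationals the turnover-based figure sets the ceiling.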
Timetable for compliance
Businesses and public administrations will have a staggered timeline to comply with the new rules. Prohibited systems must be phased out within six months of the Regulation's entry into force, the general governance rules will apply within twelve months, and full application of the Regulation is expected within two years.
The adoption of the AI Act has been welcomed by many European experts and politicians. The overwhelming vote in the European Parliament, with 523 votes in favour, 46 against and 49 abstentions, demonstrates the broad consensus on the need to regulate AI. However, some critics fear that overly rigid regulation could stifle innovation and make European companies less competitive than those in other regions of the world.
The AI Act is a decisive step towards a balanced and forward-looking regulation of artificial intelligence in Europe. It not only protects the rights of European citizens, but also sets a global precedent for the management of emerging technologies. Businesses will face significant challenges in adapting, but the EU is providing a clear framework to ensure that AI is developed and used safely and ethically.
The AI Act will have a profound impact on the development and use of artificial intelligence in the coming years, confirming Europe's role as a leader in technology regulation.