
Artificial intelligence (AI) has woven itself into everyday life: navigation systems, spam filters and weather forecasts, to name but a few examples. As AI capabilities expand, ever larger volumes of data are being collected and human behaviour is increasingly monitored; all of this poses challenges for privacy and data protection.
As part of its Digital Agenda, the European Union intends to regulate artificial intelligence to ensure better conditions for the development and use of this innovative technology.
Status quo
Currently, the use of AI in the EU is governed by existing regulations covering areas such as data protection, fundamental rights and safety.
The General Data Protection Regulation (GDPR) is one of the main pieces of legislation that applies to the use of artificial intelligence in the EU. The GDPR, designed to be technology-agnostic, governs the collection, processing and retention of personal data, including data used in AI systems. It establishes key principles such as informed consent, data minimisation, purpose limitation, data security and individuals' rights over their personal data. Whenever personal data is processed by an AI system, these principles must be respected.
In addition to the GDPR, sector-specific regulations may apply to the use of AI in particular fields. For example, the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR) set out requirements for medical devices that incorporate AI. These regulations require appropriate conformity assessments and certifications to ensure the safety and performance of such devices.
In addition, the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) has published its "Ethics Guidelines for Trustworthy AI", which provide an ethical and trust framework for the use of AI, focusing on principles such as transparency, accountability, technical robustness, and privacy and data governance.
First-ever legal framework on AI
It is important to note that these existing regulations, although applied to AI use cases, were not specifically designed for artificial intelligence. On 21 April 2021, the European Commission therefore presented its proposal for an Artificial Intelligence Regulation (the AI Act), with the aim of ensuring that AI systems placed on the market in the EU are safe and respect existing laws on fundamental rights and values. Following various responses to the proposal, the Council of the European Union endorsed a compromise version of the AI Act on 6 December 2022. On 14 June 2023, the European Parliament adopted its own negotiating position on the text, with a final vote on the law expected by the end of 2023.
The AI Act proposes a tiered regulatory framework based on the risk associated with each use of AI. The highest tier is reserved for uses of AI considered to pose an "unacceptable risk" to society, such as the scraping of images from social media and other websites to build facial recognition databases, predictive policing, and emotion recognition in government, educational and workplace contexts. These uses are prohibited outright.
Uses of AI considered "high-risk", such as uses in aviation, vehicles, medical devices and eight other specifically listed categories, are permitted but subject to regulation proportionate to their level of risk. Providers will have to register their AI systems in an EU-wide database and will be subject to numerous requirements concerning risk management, transparency, human oversight and cybersecurity, among others.
Uses of AI considered "limited risk", such as systems that interact with humans (e.g. chatbots) and AI systems capable of producing "deepfake" content, will be subject to a limited set of transparency obligations. Uses of AI that do not fall into any of the above categories are considered "low or minimal risk" and are not subject to additional requirements.
Once adopted, the AI Act will have a significant impact on entities that use artificial intelligence in their operations. Like the GDPR, it will apply extraterritorially: it covers providers placing AI systems on the market or putting them into service in the European Union, whether those providers are established in the EU or in a third country.
Next steps in the legislative procedure
With the European Parliament's adoption of its position on 14 June 2023, the AI Act entered the "trilogue" phase of the EU legislative procedure. Trilogues are informal negotiations between the European Commission, the European Parliament and the Council of the European Union, aimed at reconciling the differences between the versions of the text adopted by each institution and agreeing on the final text of the legislation.
Once the parties have finalised the legislative text, it is recorded as a provisional agreement and submitted to the European Parliament and the Council for formal adoption by each institution. After formal adoption, the legislation is published with its implementation dates and other key information. The duration of the trilogue process varies considerably depending on the complexity of the legislation.
Other notable initiatives
In addition to the European legal framework for AI, the European Commission has proposed a civil liability framework to adapt liability rules to the digital age and AI, as well as a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive).
The Council of Europe is currently negotiating a convention on AI, human rights, democracy and the rule of law, which is expected to be concluded by the end of 2023.
Separately, in a press release, the G7 nations committed to international discussions on AI governance, highlighting the role of global bodies such as the Global Partnership on AI (GPAI) and the Organisation for Economic Co-operation and Development (OECD). The G7 will work with the GPAI and the OECD to carry forward the Hiroshima AI Process, with discussions on generative AI to be held by the end of 2023.