The Artificial Intelligence Act (AI Act), which entered into force on 1 August 2024, introduces a risk-based classification of artificial intelligence (AI) systems, ranging from ‘minimal’ to ‘unacceptable’ risk. Systems classified as posing an ‘unacceptable risk’, listed in Article 5 of the AI Act, are strictly prohibited as they are considered contrary to the values of the European Union. This prohibition has applied since 2 February 2025.
This means that such systems may not be deployed in the European Union since that date, and their deployment could give rise to actions before the courts and tribunals, for example in the event of damage caused to a third party. However, the CNPD would like to recall that most of these unacceptable-risk AI systems would also be problematic under several principles of the GDPR, in particular the principle of transparency. Therefore, irrespective of the role of the CNPD under the national provisions relating to artificial intelligence (and in particular Law 8476), such systems could give rise to complaints or be the subject of investigations on its part if they were deployed on Luxembourg territory.
The following shall be prohibited:
- subliminal techniques (Article 5(1)(a));
- exploiting vulnerabilities (Article 5(1)(b));
- social scoring, under certain conditions, i.e. where it leads to detrimental or unfavourable treatment in specific situations (Article 5(1)(c));
- predictive policing, i.e. assessing the risk that a natural person will commit a criminal offence, in particular where this is done solely by the AI system without human assessment (Article 5(1)(d));
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage (Article 5(1)(e));
- the use of AI systems for emotion recognition in the workplace or in educational institutions (Article 5(1)(f));
- the biometric categorisation of natural persons in order to deduce or infer sensitive characteristics such as their race, political opinions or sexual orientation (Article 5(1)(g));
- the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, unless such use is strictly necessary for the targeted search for specific victims, the prevention of serious threats or the investigation of particularly serious criminal offences (Article 5(1)(h)). The AI Act sets out detailed additional conditions for this last case.
A case-by-case analysis is necessary, as certain AI practices are prohibited only in specific contexts (e.g. the use of AI systems for emotion recognition is considered to pose an unacceptable risk only in the workplace or in educational institutions).
On 4 February 2025, the European Commission published its Guidelines on Prohibited AI Practices, which precisely define such practices, and to which the CNPD refers for further details.
Below, we illustrate some of the practices prohibited by the AI Act listed above, drawing the connection with the principles already applicable under the GDPR:
- exploiting vulnerabilities: this covers AI systems that target people’s vulnerabilities (due to their age, a disability, etc.) in order to materially distort their behaviour in a way that causes or is likely to cause significant harm. For example, an application designed to encourage children to make in-app purchases by exploiting their lack of discernment would be prohibited. Article 8 of the GDPR, moreover, already lays down conditions for children’s consent with regard to information society services. Similarly, an AI system detecting a person’s depressive state in order to sell them a personalised holiday would be problematic. In the latter case, since data concerning health are involved, the processing is in any event prohibited under Article 9 of the GDPR, as none of the conditions in paragraph 2 of that article would apply to such a situation.
- the placing on the market, the putting into service for that specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. For example, a system that collects, or is developed by collecting, as many faces as possible available on the internet in order to compare them with the faces of persons captured by a video surveillance system for identification purposes would be prohibited under the AI Act. Such a practice also involves the processing of biometric data for the purpose of uniquely identifying a natural person, which is prohibited under Article 9 of the GDPR and in respect of which, again, none of the conditions set out in paragraph 2 of that article would apply.
- the placing on the market, the putting into service for that specific purpose, or the use of AI systems to infer the emotions of a natural person in the workplace or in educational institutions, except where the AI system is intended to be put in place or placed on the market for medical or safety reasons. An example would be an AI system that analyses employees’ facial expressions, tone of voice or gestures in order to measure their engagement, satisfaction or stress, with the aim of improving well-being or productivity at work. This would amount to a measure of generalised surveillance of employees, which would also raise problems under several principles of the GDPR, in particular lawfulness, transparency and data minimisation.
Since 2 February 2025, another obligation of the AI Act has applied to companies using artificial intelligence: Article 4 of the AI Act requires that all persons involved in the operation and use of AI systems have a sufficient level of AI literacy. This obligation will be detailed in a forthcoming article to be published by the CNPD on its website.