How do organizations ensure control over AI?

Information article

Regulation (EU) 2024/1689, the Artificial Intelligence Act (‘AI Act’), imposes a series of obligations on providers (developers or designers) and deployers (users, both private and public bodies) to prevent risks to citizens related to the use of artificial intelligence (‘AI’), in particular with regard to the protection of personal data.

This contribution focuses on Article 4 of the AI Act,[1] which requires that all persons involved in the operation and use of AI systems (‘AI systems’) have a sufficient command of AI (also referred to as ‘AI literacy’). It concentrates in particular on the typical case of an employer that allows the use of an AI system in a professional context.

The AI literacy obligation has been in force since 2 February 2025, as has the ban on systems classified as presenting an ‘unacceptable risk’ (see our previous article ‘AI systems prohibited under the AI Act: Impact on the protection of personal data’, Article 5 of the AI Act). It is a challenge for companies and organizations, which must now ensure this level of competence among their staff and other persons who may be involved.

What does this new AI literacy obligation mean?

Where a company or body allows the use of proprietary or third-party AI systems, or even a corporate AI platform, it is now obliged to ensure that users have the ‘necessary skills, knowledge and understanding’ for responsible use. The text therefore requires all the persons concerned to be equipped with the know-how needed to deploy and use the system in question (see, in particular, Article 3(56) and recitals 20, 91 and 165 of the AI Act).

This literacy requirement applies to the use of all AI systems, regardless of the level of risk they pose (i.e. high-risk or not), including so-called “generative” AI such as ChatGPT. However, in practice, its scope is of greater importance for actors linked to high-risk AI systems because of the dangers and possible harm those systems could cause. As stated in the AI Literacy FAQ published by the European Commission on 7 May 2025, “if the [AI systems] of the organisation are at high risk, in accordance with Chapter III of the AI Act, additional measures could be relevant to ensure that employees know how to manage the given AI systems and avoid and/or mitigate their risks”.

In practice, the company/organisation must ensure that all its staff, whatever their status (employees, temporary staff, officials, trainees, apprentices, etc.), have a real command of AI as defined in Article 3(56) of the AI Act.[2] In addition to staff, Article 4 of the AI Act also applies to all ‘other persons’ dealing with AI systems on behalf of the provider and deployer. The FAQ specifies that this could be any person involved in the operation or use of these systems on behalf of providers/deployers, e.g. a subcontractor, a service provider or even customers. On the basis of the European Commission’s interpretation, an AI deployer could therefore legitimately request training from the AI provider of which it is itself a customer.

The definition of AI literacy is quite broad in that, according to the FAQ, everyone concerned must also be able to understand ‘the specific risks associated with AI (e.g. the possible hallucinations of ChatGPT)’ as well as the consequences or ‘potential harm it may cause’.

In their compliance process, companies/bodies should therefore take into account all the elements resulting from Article 4 of the AI Act, in other words:

“the knowledge of employees”,

“the context of the use of AI”,

“the persons or groups of persons on whom the AI systems are to be used”.

By ‘knowledge’, it is important to recall that Article 4 of the AI Act covers technical knowledge, experience, and education and training. However, the text does not provide more detail, so a certain level of flexibility is left to the companies/bodies using AI systems. In this context, the company/body could ask itself the following questions: Are employees already familiar with more advanced concepts than those related to basic functionalities? If so, ‘expert’ training could be offered; otherwise, basic knowledge of machine learning or algorithms would have to be considered. Do the employees come from a technical or scientific field of expertise? Depending on the profile, targeted additional training would be preferred. Have employees been with the organisation for some time, or are they new arrivals? In the latter case, specific training during onboarding could very well be envisaged.

By ‘context of use’, the company/body is advised to take into account the sector and the purpose for which the AI is used, combining this with the level of risk in order to adjust its training programme where necessary. The FAQ confirms this recommendation while specifying that the European AI Office does not impose ‘specific requirements’ and that a case-by-case analysis will therefore have to be carried out.

Finally, “the persons or groups of persons on whom the AI systems are to be used”, i.e. the persons affected by these systems, are to be identified (e.g. vulnerable persons, employees, public service users, patients, etc.). This analysis will also contribute to ‘awareness of the opportunities and risks of AI and the potential harm it may cause’ within the meaning of Article 3(56) of the AI Act, and will help the organisation adapt its AI literacy programme accordingly.

An in-depth evaluation of these three elements will ultimately make it possible to draw up a tailor-made set of rules and guidance with the establishment of an adequate and regular training plan according to the persons or groups concerned, their level and area of expertise. 

What are the current practices of companies in this area?

The AI literacy obligation should therefore in all cases be a central element of effective and accountable AI governance.

In order to guide companies/bodies in this area, a living directory called “Compilation AI Literacy Practices” (“the directory”), which will be updated regularly, was published on 4 February 2025 by the European Commission. The European AI Office[3] thus brought together some of the practices of the signatories of the AI Pact.[4] It is recalled that ‘the reproduction of practices collected in this living repository does not automatically confer a presumption of compliance with Article 4’ but ‘aims to encourage learning and exchange between providers and deployers of AI systems’. These practices can therefore be combined with the additional indications given in the above-mentioned FAQ, it being understood that both instruments are soft law and therefore not legally binding.

After a brief presentation of each organisation, the AI systems concerned, its sector of activity and its role as provider/deployer, the directory sets out the participants’ answers to several questions,[5] namely: How does the [AI literacy] practice take into account the technical knowledge, experience, education and training of the target group? How does this practice take into account the context in which the AI systems are used? What has been the impact of this practice so far and how does the organisation measure this impact? What challenges has this practice encountered and what challenges does the organisation still face? Does the organisation plan to change and/or improve this practice?

The CNPD would like to recall that, in addition to these questions, the persons or groups of persons on whom the AI systems are to be used should be taken into account. This aspect is indeed one of the components of Article 4 of the AI Act (see previous section), which should in no way be omitted by the actors concerned in their analysis.

On the basis of the feedback recorded in this directory, in the version consulted on 28 March 2025, the CNPD observes first of all that while some companies carry out an initial assessment of the different levels and needs of the persons concerned, others opt from the outset for a training programme tailored to the roles/responsibilities within the company as well as to the technical/non-technical profiles of the employees concerned. It can be noted that people in a management function, as well as CEOs, often receive specific training, unlike other functions. In several cases, individuals or departments acting as a point of contact (POC) for AI receive more advanced additional training. For example, non-technical staff may be offered the opportunity to strengthen their basic knowledge of new technologies, while human resources staff may be familiarised with potential biases related, in some cases, to a partly automated recruitment process.

In any case, simple or isolated instructions for staff might be insufficient. The FAQ nevertheless states that there is no single format for compliance with these requirements, which are again to be assessed in the light of the individual case.

A programme dedicated to AI literacy, however personalised, would risk missing its objectives without adequate monitoring. According to the directory, undertakings have chosen different criteria to measure the effectiveness of their methods. The most frequently used monitoring indicators include participant satisfaction and the total number of employees who completed the training. Some go further by choosing, for example, to measure the progress of the knowledge and skills of the employees who have taken the training, or the subsequent reduction in the time needed to complete AI projects. Although the AI Act does not expressly provide for this and the FAQ is silent on this point, the use of adequate measurement indicators is one means of assessing the effectiveness of the AI literacy programme. Such indicators will allow professionals to document, monitor and prove compliance with the obligations under this regulation.

Finally, it should be noted that the majority of responding organisations still report facing a number of challenges in their compliance process. Among the most frequently cited is the rapid evolution of AI technologies, which requires the programme to be continuously adapted. Next comes the difficulty of ensuring that the content is accessible to all employees, along with the difficulty of finding training modules adapted to the company’s context. The regulatory aspect is also problematic, both from the standpoint of monitoring the many constantly evolving requirements and of interpreting the AI Act, which some organisations consider far too complex.

What are the consequences of non-compliance with this obligation?

Given that Article 4 of the AI Act already applies, and given the very disparate levels of digital literacy in society, it is crucial that each body seriously examines its AI literacy and documents it appropriately in order to be able to justify its compliance. In this respect, the FAQ states that obtaining a specific certificate is not necessary; however, it recommends keeping ‘an internal register of training and/or other guidance initiatives’.

As a reminder, the AI Act provides in particular for administrative penalties, including fines of up to EUR 35 000 000 or 7% of the total worldwide turnover of a group of undertakings, whichever is higher. It should be noted that the sanctions related to the AI Act are applicable from 2 August 2025 (see Chapter XII and Article 113 of the AI Act).

Article 99(7)(g) of the AI Act further specifies[6] that all relevant circumstances of the situation shall be taken into account when deciding whether to impose an administrative fine. To the extent that AI literacy is likely to constitute such a relevant circumstance, the competent supervisory authorities would be entitled, for example, to verify whether a breach may have been facilitated by the fact that staff did not have sufficient AI skills, knowledge and understanding.

Lastly, Article 85 of the AI Act,[7] applicable from 2 August 2026, also provides for the possibility of lodging a complaint with a market surveillance authority (‘MSA’)[8] where a natural or legal person considers that the provisions of the AI Act, in particular Article 4, have been infringed.

Although the MSAs provided for in Article 70(1) of the AI Act had not yet been formally designated by the Member States at the date of publication of this article, it should be recalled that the provisions of the AI Act are directly applicable and can already be invoked, subject to their respective dates of application, before the competent national courts.

Apart from the legal consequences of non-compliance, it is crucial for any company or organisation to protect its reputation by using AI in the safest and most ethical way possible. Responsible use also enhances the trust of customers, partners and investors, and opens up new opportunities for the development of innovative products and services that meet ethical and social standards.

 

[1] ‘Providers and deployers of AI systems shall take measures to ensure, to the extent possible, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.’

[2]  '(56) 'AI literacy' means the skills, knowledge and understanding that enable providers, deployers and data subjects, taking into account their respective rights and obligations under this Regulation, to deploy AI systems in an informed manner, as well as to become aware of the opportunities and risks of AI and the potential harm it may cause'.

[3] The European Artificial Intelligence (AI) Office, established within the European Commission, is the centre of expertise on AI for the entire European Union. It plays a central role in the implementation of the AI Act, in particular with regard to general-purpose AI systems. Its main mission is to foster the development and use of trustworthy AI, while protecting against the potential risks associated with these technologies. Source: https://digital-strategy.ec.europa.eu/fr/policies/ai-office

[4] The AI Pact encourages and supports organisations to plan the implementation of AI Act measures. It is built around two pillars, namely 1) Gathering and exchanging with the AI Pact network and 2) Facilitating and communicating business commitments. Source: https://digital-strategy.ec.europa.eu/fr/policies/ai-pact

[5] The questions reproduced here are an unofficial machine translation. Only the original English version is authentic.

[6] ‘7. When deciding whether to impose an administrative fine and to fix its amount in each individual case, all the relevant circumstances of the specific situation shall be taken into account and, where appropriate, the following shall be taken into account: ... (g) the degree of responsibility of the operator, taking into account the technical and organisational measures it has implemented.’

[7] ‘Without prejudice to other administrative or judicial remedies, any natural or legal person having reasons to consider that the provisions of this Regulation have been infringed may lodge a complaint with the market surveillance authority concerned. In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purposes of market surveillance activities and shall be dealt with in accordance with specific procedures established for that purpose by market surveillance authorities.’

[8] According to the content of draft law No 8476 at the time of drafting this article, the role of MSA in Luxembourg would be attributed to the CNPD.
