AI Act: Coming into effect of new obligations on 2 August 2025

I) A directly applicable regulation with gradual implementation

Regulation (EU) 2024/1689 of the European Parliament and of the Council (the 'AI Act') entered into force one year ago, on 1 August 2024. Its main objective is to harmonise the rules applicable to AI across the European Union and to ensure the free movement of AI systems within the internal market. The text frames the development, deployment and use of AI in the Union, imposing on operators requirements and obligations proportionate to the risks associated with the intended purpose of each system. It aims to ensure that AI systems placed on the market safeguard the health, safety and fundamental rights of individuals.

The provisions of the AI Act are being implemented gradually in all Member States. Certain obligations have already applied since 2 February 2025, in particular:

  • the ban on certain AI practices deemed unacceptable (e.g. social scoring or subliminal cognitive manipulation),[1]
  • the AI literacy requirement imposed on the operators concerned.[2]

The CNPD has already published articles on these obligations applicable since 2 February 2025, and will continue to communicate in the coming months on the new obligations stemming from the AI Act as well as on the interplay between the AI Act and the General Data Protection Regulation ("GDPR").

On 2 August 2025, new obligations became applicable, in particular:

  • the transparency obligation for generative or interactive AI systems (for example, users must be explicitly informed when they interact with an AI system),
  • requirements for conformity assessment bodies,
  • the criteria for the assessment of general-purpose AI models and their qualification according to the level of risk,
  • the possibility for providers of general-purpose AI models to adhere to a voluntary code of practice (Chapter V of the Regulation).

II) A bill implementing the AI Act in Luxembourg

At national level, Bill 8476 provides for, inter alia:

  • the designation of the competent authorities,
  • measures to support AI innovation,
  • the facilitation of cooperation between stakeholders, and
  • the definition of sanctions and redress mechanisms.

Rather than creating new dedicated authorities, the bill extends the mandates of existing institutions. This approach ensures continuity, avoids overlaps and leverages existing expertise, which is particularly important given the contextual nature of AI technologies.

To ensure effective and consistent implementation of the AI Act, Bill 8476 provides for a significant expansion of the CNPD's tasks. These new roles build on the CNPD's existing data protection expertise and extend its mandate to cover key aspects of AI governance.

The roles proposed for the CNPD in the bill include the following:

  • Notified body (Article 6): the CNPD would be designated as a notified body to assess the conformity of high-risk AI systems deployed by law enforcement authorities and by asylum and immigration authorities;
  • Default market surveillance authority (Article 7(1)): the CNPD would act as the default authority for the control and enforcement of the AI Act;
  • Establishment of a regulatory sandbox (Article 12(1)): the CNPD would oversee the operation of the regulatory sandbox, providing a controlled environment for AI experimentation;
  • Single point of contact (Article 13): the CNPD would serve as the central point of contact for AI-related regulatory issues at national level;
  • Fundamental rights authority: the CNPD would be empowered to monitor and address fundamental rights risks related to the use of AI systems, alongside ALIA and ITM.

 

[1] Article 5 of Regulation (EU) 2024/1689 of the European Parliament and of the Council (prohibited AI practices).

[2] Article 4 of Regulation (EU) 2024/1689 of the European Parliament and of the Council (AI literacy).
