The accelerating pace of technological advances in artificial intelligence (AI) is profoundly disrupting both the digital and the physical world; every aspect of working and everyday life is affected, including the processing of personal data. Sometimes a valuable ally, sometimes a formidable adversary, AI presents both key opportunities and serious challenges for the protection of personal data.

AI at the service of controllers
With its impressive capacity for data analysis and processing, AI can help strengthen the protection of personal data by detecting and mitigating potential security breaches and cyber threats. AI algorithms can process large volumes of data much faster, and often more accurately, than human operators. Drawing on threat intelligence from millions of studies, blogs and press articles, AI technologies can identify threats in near real time and with a high degree of accuracy. This efficiency enables a rapid response to data breaches and improves overall data management.
Artificial intelligence is also playing an increasingly important role in predicting data breaches. Through advanced data analysis, models can identify indicators of a potential security breach in a computer system. Organizations can take preventive measures to strengthen their defenses and protect their systems from cyberattacks.
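As a deliberately simple illustration of the kind of indicator-based detection described above, the sketch below flags an anomalous spike in failed login attempts using a z-score threshold. The data and the threshold are purely illustrative assumptions; real breach-prediction models are far more sophisticated, but the underlying idea of comparing new observations against a learned baseline is the same.

```python
import numpy as np

# Hypothetical daily counts of failed login attempts (synthetic baseline data)
baseline = np.array([12, 9, 11, 10, 13, 8, 12, 11, 10, 9], dtype=float)
today = 47.0  # today's count, to be checked against the baseline

mu, sigma = baseline.mean(), baseline.std()
z = (today - mu) / sigma   # how many standard deviations above normal
suspicious = z > 3.0       # a common, simple alerting threshold
```

Here the spike sits far outside the baseline's normal range, so the check would trigger an alert and allow defenders to investigate before a potential breach escalates.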
As for the data processing itself, AI can enhance the privacy of data subjects. Through differential privacy, a set of mathematical techniques, an AI system can perform "big data" analyses without allowing the re-identification of individuals. Federated learning is another AI tool that limits the concentration of personal data on a single server, thereby reducing the risk of re-identification. The model is trained on decentralised devices, each learning from its own local data. Unlike the vast majority of algorithms, which rely on a central server containing all the data, federated learning trains on each participant's local data and exchanges only model updates, never the data itself. Local data thus remains private, as it is never collected or stored on a remote server. These methods make it possible to analyse data from several sources without exposing that data to the other participants.
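The federated scheme described above can be sketched in a few lines. This is a minimal federated-averaging (FedAvg-style) toy, assuming a simple linear model and two hypothetical clients with synthetic data; the key point is that only weight vectors cross the client/server boundary, never the raw datasets.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass: gradient descent on its own data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server step: collect each client's updated weights and average them.
    Only the weight vectors travel; the raw (X, y) data never leaves a client."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two hypothetical clients, each holding its own private dataset
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):          # communication rounds
    w = federated_average(w, clients)
```

After a few communication rounds the shared model recovers the underlying relationship, even though the server never saw any participant's data, only their locally computed updates.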
Finally, artificial intelligence can enable controllers to set up automated processes to verify and facilitate their organization's compliance with data protection regulations, such as the General Data Protection Regulation (GDPR). In particular, AI can automate data anonymisation, pseudonymisation and consent management processes, thereby reducing the risk of human error and ensuring compliance with legal requirements.
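One building block of such automated compliance pipelines, pseudonymisation, can be illustrated with a keyed hash. The sketch below replaces a direct identifier with an HMAC-SHA-256 pseudonym; the field names and key handling are illustrative assumptions, and in practice the key would be stored separately and managed securely, as the GDPR's definition of pseudonymisation requires.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).
    Without the key, the pseudonym cannot be reversed or re-linked;
    with the key, the controller can reproduce it consistently."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"example-key-kept-separately"   # illustrative; store and rotate securely
records = [
    {"email": "alice@example.com", "age": 34},
    {"email": "bob@example.com", "age": 29},
]

# Strip the direct identifier, keeping a stable pseudonym for linkage
pseudonymised = [
    {"id": pseudonymise(r["email"], key), "age": r["age"]} for r in records
]
```

Because the keyed hash is deterministic, records belonging to the same person can still be linked across datasets for analysis, while the original identifier is no longer present in the processed data.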
The other side of the coin
While artificial intelligence has undeniable benefits, for example in cybersecurity, its implementation is not without risks. Like any new technology, systems using AI remain prone to failures and attacks, and may have as-yet-unforeseen impacts on individuals and society.
The development of smart algorithms requires a significant amount of data, a large part of which is often personal data. The use of such data is unavoidable, since it is what allows algorithms to progress, evolve and learn. Mismanagement or abuse of this data – through security errors, unauthorised access or data leaks – is likely to result in a breach of the privacy of data subjects.
Added to this is the lack of transparency of many smart technologies. Some AI models, especially those based on deep learning, can be difficult to understand and explain. This creates a transparency gap in how automated decisions are described and explained, and can make it difficult for individuals to understand how their data is used and processed. A lack of explainability can also make it difficult for individuals harmed by automated decisions to argue their case, in some cases allowing the controller to evade its data protection responsibilities.
From a legal and ethical point of view, AI applications that process personal data also raise dilemmas. Determining the appropriate legal bases for the processing of personal data, obtaining explicit consent and processing sensitive data pose challenges for organisations. In addition, ethical considerations regarding decision-making and potential privacy breaches need to be carefully addressed.
At present, artificial intelligence is far from infallible and can even introduce new vulnerabilities into the security of an IT system. As AI algorithms become ever more widely and systematically embedded in daily life, it is essential to put in place adequate safeguards and regulations to manage the risks to the protection of personal data, without limiting the possibilities offered by these new technologies. The European Union is actively working on regulations and guidelines to address these challenges and protect the rights of individuals in the digital age.