Artificial intelligence (AI) is seen as a key technology of the future. It is already present in everyday life, in virtual assistants, social networks, surveillance cameras, routing applications, streaming platforms and more. This raises the question: what exactly is meant by ‘AI’?
As described on the European Parliament website, AI refers to the ability of a machine to reproduce human behaviours such as thinking, planning and creativity. It allows technical systems to perceive their environment, process what they perceive, solve problems and act to achieve a given goal. Such systems receive data (already prepared or collected via sensors, such as a camera), analyse it and react. They are able to adapt their behaviour by analysing the effects of their previous actions, working more or less independently.
Strong AI and weak AI
In the field of artificial intelligence, a distinction is usually made between strong and weak AI.
Strong AI
Strong AI, also known as artificial general intelligence, is a system of algorithms capable of learning, planning and solving problems for which it has never been trained, like a human being. According to experts, a strong artificial intelligence could develop autonomous awareness, sensitivity and will, modelled on those of human beings.
This type of AI does not yet exist, and a large majority of experts believe we are still a long way from achieving it, not least because of the technological advances that remain to be made.
Weak AI
Weak AI, sometimes called narrow AI or specialised AI, operates in a limited context: it simulates human intelligence for a narrowly defined problem, such as transcribing human speech, organising the content of a website, or analysing legal texts and voluminous contractual documents.
Weak AI is often focused on performing a single task extremely efficiently. While these machines may seem intelligent, they are subject to far more constraints and limitations than the most basic human intelligence.
Examples of weak AI include:
- Siri and other smart assistants
- Self-driving cars
- Search engines
- Conversational robots ("chatbots")
- Spam filters for emails
- Netflix's recommendations
Machine learning and deep learning
Although AI is a technology with multiple approaches, advances in machine learning and deep learning, in particular, are creating a paradigm shift in virtually every economic sector and in society.
Machine Learning
Machine learning (ML) is a subfield of AI characterised by the use of algorithms that allow software applications to predict outcomes more accurately without being explicitly programmed to do so. The basic principle of machine learning is to develop algorithms that take input data and predict an output using statistical analysis, updating themselves as new data becomes available.
Recommendation engines are among the common use cases of machine learning, as well as fraud detection, spam filters and malware detection.
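A minimal sketch of this principle, assuming scikit-learn and NumPy are available; the toy "spam" features, labels and example input are invented purely for illustration:
```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy dataset: two numeric features per email (e.g. link count, caps ratio);
# label 1 = spam, 0 = legitimate. Entirely synthetic.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = SGDClassifier(random_state=0)
model.partial_fit(X, y, classes=[0, 1])  # initial statistical fit

# Later, new labelled data arrives: the model is updated incrementally
# rather than being reprogrammed by hand.
X_new = rng.normal(size=(50, 2))
y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
model.partial_fit(X_new, y_new)

print(model.predict([[2.0, 1.0]]))  # predicted label for an unseen input
```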

Deep Learning
Deep learning involves algorithms capable of mimicking the actions of the human brain through artificial neural networks. These networks are made up of tens or even hundreds of "layers" of neurons, each receiving and interpreting the information from the previous layer. This layered structure allows the machine to go "deep" in its learning, making connections and weighting the data to obtain the best results.
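The layered structure can be made concrete with a short sketch, here using PyTorch as an assumed framework; all layer sizes are illustrative only:
```python
import torch
import torch.nn as nn

# Each layer receives the previous layer's output and passes its own
# interpretation one level "deeper".
model = nn.Sequential(
    nn.Linear(10, 64),  # input layer: 10 raw features
    nn.ReLU(),
    nn.Linear(64, 64),  # hidden layer: reinterprets the previous layer
    nn.ReLU(),
    nn.Linear(64, 2),   # output layer: e.g. scores for two classes
)

x = torch.randn(1, 10)  # one example with 10 features
print(model(x))         # the data has passed through every layer in turn
```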
Supervised and unsupervised learning
Supervised learning
The majority of machine learning uses supervised learning. This approach uses labelled datasets to train classification or prediction algorithms. The algorithm is fed the labelled training data and iteratively adjusts how it weighs the different characteristics of the data until the model fits the desired outcome.
The process is called supervised because the labelled dataset acts as a teacher overseeing the learning: the correct answers have been defined in advance, the algorithm makes iterative predictions on the training data, and the teacher corrects it. Learning stops when the algorithm reaches an acceptable performance level set beforehand.
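The loop just described can be sketched as follows, again assuming scikit-learn; the dataset and the "acceptable performance level" are invented for illustration:
```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X.sum(axis=1) > 0).astype(int)  # the "correct answers" (the teacher)

model = SGDClassifier(random_state=1)
target_accuracy = 0.95               # acceptable level, set in advance

for epoch in range(100):
    model.partial_fit(X, y, classes=[0, 1])         # one corrective pass
    if accuracy_score(y, model.predict(X)) >= target_accuracy:
        print(f"stopped after {epoch + 1} passes")  # teacher is satisfied
        break
```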
Unsupervised learning
In the case of unsupervised learning, artificial intelligence examines and aggregates unlabelled datasets. It models the underlying structure or distribution of data to learn more about it.
This is called unsupervised learning because, unlike the supervised learning above, there are no correct answers and no teacher. Algorithms are left to their own devices to discover and represent the interesting structure in the data (Microsoft, “Supervised and unsupervised learning: what differences?”, 29 May 2020). The only human help needed is the validation of the output variables.
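As a hedged illustration, k-means clustering is one classic unsupervised algorithm: it receives only unlabelled points and discovers groupings on its own. The synthetic data below stand in for a real dataset:
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two unlabelled "blobs" of points; no correct answers are provided.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=2).fit(data)
print(kmeans.labels_[:10])  # groupings discovered without any teacher
```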

Sources of error
Artificial intelligence is not infallible. As a technology designed by human beings, it has several weaknesses inherent in its design and operation.
The risk of bias is one concern. Algorithms give the illusion of impartiality, but they are written by people and trained on data collected or generated by an organisation. The biases and gaps present in the code and the datasets are necessarily reproduced in the behaviour of the intelligent system. There are well-known examples of companies whose hiring algorithms, trained on historical data, reproduced the gender biases reflected in that data.
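A toy sketch of this mechanism, with entirely synthetic data: if historical hiring decisions were skewed against one group at equal skill, a model trained on those decisions reproduces the skew.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
skill = rng.normal(size=n)          # genuinely relevant feature
group = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Biased historical decisions: group 1 was hired less often at equal skill.
hired = (skill - 0.8 * group + rng.normal(scale=0.3, size=n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Equal skill, different group: the model inherits the historical bias.
print(model.predict_proba([[1.0, 0.0], [1.0, 1.0]])[:, 1])
```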
Added to this is a lack of contextual understanding. Current AIs perform very well on specific tasks, but they struggle to understand the broader context. They often lack general knowledge and are limited when it comes to common sense.
Similarly, AIs are devoid of conscience or moral judgement. They make decisions based solely on statistical criteria, without taking ethical or moral considerations into account. For example, an autonomous vehicle could be programmed, in the event of an imminent collision, to minimise injuries to its occupants without taking other road users into account.
From an operational point of view, AI requires significant data resources. Over time, the data encountered in real use may drift away from the data on which the AI was trained, which can lead to unpredictable or undesirable results. This is why AI models need to be continuously fed with fresh data and retrained to maintain their performance.
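One simple, hedged way to detect such drift is to compare the distribution of live inputs with the training data, for instance with a two-sample Kolmogorov-Smirnov test from SciPy; the data and threshold below are illustrative:
```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
training_feature = rng.normal(loc=0.0, size=1000)  # data the model was trained on
live_feature = rng.normal(loc=0.5, size=1000)      # shifted real-world usage data

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:  # illustrative significance threshold
    print("distribution drift detected: consider retraining the model")
```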
Finally, AI systems can be vulnerable to malicious attacks. Malicious actors can manipulate input data to mislead or deceive an AI. For example, researchers have managed to fool image recognition systems by adding perturbations to images that are imperceptible to humans.
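A minimal sketch of this idea using the well-known fast gradient sign method (FGSM); the tiny linear model and random input below are placeholders standing in for a real image classifier:
```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 2))     # stand-in for an image classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # stand-in for an input image
label = torch.tensor([0])                   # the currently correct prediction

loss = loss_fn(model(x), label)
loss.backward()                             # gradients of the loss w.r.t. the input

epsilon = 0.05                              # perturbation kept imperceptibly small
x_adv = x + epsilon * x.grad.sign()         # nudge each input in the worst direction

# The original and perturbed inputs look almost identical, yet the
# prediction may flip.
print(model(x).argmax().item(), model(x_adv).argmax().item())
```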
It is therefore important to keep these vulnerabilities and risks in mind when using artificial intelligence, without hindering its adoption, since it can be an undeniable source of progress for the benefit of people and society.