
Prohibited practices of artificial intelligence under the AI Act

Artificial intelligence ("AI") seems to be an endlessly discussed topic these days. Not long after the European Data Protection Board commented on the processing of personal data in the context of AI models, the European Commission (the "Commission") came out with another document, the draft guidelines on prohibited AI practices under the AI Act.[1]

Generally on Prohibited Practices
We are already familiar with prohibited AI practices from Regulation (EU) 2024/1689 (the "AI Act"), which divides AI systems into four risk categories, namely:

  • minimal or no risk,
  • limited risk,
  • high-risk, and
  • unacceptable risk.

AI systems that fall into the unacceptable risk category under Article 5 of the AI Act are considered prohibited practices. The prohibition of these practices came into force on 2 February 2025.

Guidelines
As a direct follow-up to the prohibition of these practices, on 4 February 2025, just two days later, the Commission published the Guidelines on Prohibited Artificial Intelligence Practices laid down in Regulation (EU) 2024/1689 (AI Act); the Commission has approved the Guidelines, but they have not yet been formally adopted and therefore remain at the draft stage (the "Guidelines").

It is important to note that, in general, Commission guidelines are not legally binding; they serve mainly to further explain and provide context in a particular area of EU law.

In this case, the Guidelines elaborate in detail on Article 5 of the AI Act, in particular for the benefit of the supervisory authorities of EU Member States and of entities subject to obligations under the AI Act. Thus, although the Guidelines are not binding, they aim in principle to promote a consistent interpretation and application of the AI Act across the EU.

What do the Guidelines specifically cover?
The Guidelines are a rather extensive document, and it would not be possible to cover all of their almost 140 pages here, so we will focus on at least some of the relevant examples. In short, the Guidelines address the individual AI systems that pose unacceptable risks and are therefore prohibited under Article 5 of the AI Act. These prohibitions are intended to protect fundamental rights, including human dignity, privacy and non-discrimination. For each of the prohibited practices, the Guidelines provide key insights and practical examples, which we elaborate on in this article.

Subliminal techniques, purposefully manipulative or deceptive techniques (Article 5(1)(a) of the AI Act)
This category covers AI systems that use subliminal, purposefully manipulative or deceptive techniques with the objective or effect of materially distorting a person's behaviour by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not otherwise have taken, in a manner that causes or is likely to cause that person or another person significant harm. In short, it concerns AI used to influence a person subliminally or manipulatively.

The following AI systems can be considered examples of such practices:

  • An AI system that displays rapidly flashing text in a video – text that is technically visible but flashes too fast for the conscious mind to register, yet can subliminally influence the attitudes or behaviour of the person watching the video – or, similarly, an AI system that embeds imperceptible images into other visual content.
  • An AI algorithm on social media that covertly uses psychological tricks to extend the time a user spends on the platform, potentially causing addiction or psychological problems.
  • An AI chatbot that impersonates a friend or relative using synthetic voice mimicry, thereby facilitating fraud and causing significant harm.

For these types of AI use, the Guidelines offer, for example, the following key insights:

  • Human intent to deceive a person through an AI system is not required; it is sufficient that the AI system is capable of deploying such subliminal, manipulative or deceptive techniques.
  • Explicit, visible labelling of "deep fakes" greatly mitigates the risk of harmful interference by such AI systems.
  • The interpretation of "material distortion of behaviour" draws on EU consumer protection law, i.e. the AI practice has the effect of leading a person to take a decision that they would not otherwise have taken.

Exploitation of user vulnerabilities (Article 5(1)(b) of the AI Act)
Put simply, AI systems that specifically target and exploit vulnerabilities arising from a person's age, cognitive or physical disability, or social or economic disadvantage are prohibited.

In practice, this may include:

  • An AI system used to target the elderly with misleading personalised offers or scams – e.g. to influence them to buy “expensive medical treatments” or “deceptive investment schemes”.
  • An AI system within an app that targets young users and uses particularly addictive patterns of operation in order to create user dependency on the app.
  • An AI platform that personalises offers by systematically offering disadvantageous loans at high interest rates to people in financial need.

For these types of AI, we can consider, for example, the following key insights:

  • The Guidelines define "vulnerability" broadly, covering cognitive, emotional, physical and other forms of susceptibility.
  • The susceptibility must, however, result from one of the enumerated characteristics: age, disability or socio-economic status.

Social scoring (Article 5(1)(c) of the AI Act)
Social scoring is a category of AI that is no longer confined to an episode of the sci-fi series Black Mirror.

These are real AI systems that rate or classify people based on their social behaviour or personal characteristics and that subsequently lead to unfair treatment in unrelated contexts (for example, being denied entry to a social establishment because of a low social media follower count).

Thus, for example, the following AI systems are prohibited:

  • An AI system in which a health insurer would rate clients based on their diet and fitness habits and provide less favourable terms to those who do not fit the “ideal” profile.
  • A tax authority using an AI predictive tool (which takes into account both income and unrelated data, such as social habits) on all taxpayers' returns in order to select those for closer scrutiny.

For these types of AI, we can consider, for example, the following key findings:

  • Profiling of individuals under EU data protection law, if carried out using AI systems, may fall under this prohibition.
  • The social score does not have to be generated by AI alone, but AI must play a significant role in generating it.
  • This prohibition applies even if the social score is generated by an organisation other than the one that subsequently uses it.

Crime prediction based on profiling (Article 5(1)(d) of the AI Act)
This category is fairly straightforward: it prohibits AI systems that predict the likelihood of a person committing a crime based on personality traits or profiling, that use discriminatory patterns to identify potential offenders, or that automatically label individuals as at risk without objective and verifiable evidence.

In practice, this may include:

  • A police AI system that identifies certain ethnic groups as more likely to commit crime based on historical data.

For these types of AI, we can consider, for example, the following key insights:

  • Although the focus is mainly on law enforcement, the prohibition also applies to private actors, particularly when they are acting on behalf of law enforcement or assessing or predicting the risk of a person committing a crime for compliance purposes (e.g., anti-money laundering).

Other examples
Given the scope of the regulation and the number of examples of prohibited AI practices in the remaining categories, we provide only a few practical examples.

Article 5 of the AI Act further regulates:

  1. Untargeted scraping of facial images to create facial recognition databases (Article 5(1)(e) of the AI Act)
  2. Emotion recognition in the workplace or educational institutions (Article 5(1)(f) of the AI Act)
  3. Biometric categorisation to determine or infer membership of sensitive groups (Article 5(1)(g) of the AI Act)
  4. Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (Article 5(1)(h) of the AI Act)

Notable examples of prohibited practices under these categories include:

  • A company assessing the emotional state of employees using cameras and adjusting their workload based on their expressions. What would not be prohibited in this category, however, is an employer using an AI device in the workplace to detect anxiety based on measured stress levels when employees operate dangerous machinery or work with hazardous chemicals (due to the health and safety exemption).
  • An AI system automatically categorising people by facial features and assigning them probable religious beliefs.
  • An AI system based on biometric categorisation that claims to be able to determine a person's race based only on their voice.

Conclusion
Given the dynamic development of AI and of the tools that are now being used in everyday practice, it is highly likely that these interpretations of the categories of prohibited AI practices and of their constituent elements will only become more relevant.

The Commission’s Guidelines provide crucial clarifications on prohibited AI practices, with the aim of protecting fundamental rights while promoting innovation.

The use of prohibited AI practices can lead to significant penalties, which will apply from 2 August 2025 and include fines of up to EUR 35 million or 7% of a company's total worldwide annual turnover, whichever is higher.
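For illustration only, the interplay of the two caps can be sketched as a simple calculation (a minimal sketch assuming the "whichever is higher" rule in Article 99(3) of the AI Act; the turnover figures used below are purely hypothetical):

    # Minimal sketch of the penalty cap for prohibited practices under the AI Act:
    # up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07

    def maximum_fine(worldwide_annual_turnover_eur: float) -> float:
        """Upper limit of the fine for a given worldwide annual turnover."""
        return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

    print(maximum_fine(1_000_000_000))  # cap of EUR 70 million (7% exceeds EUR 35 million)
    print(maximum_fine(100_000_000))    # cap of EUR 35 million (the fixed cap is higher)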

In the meantime, a transitional period of sorts applies, and these specific sanctions cannot yet be imposed, given the interim absence of enforcement authorities. Companies offering or using AI in the EU should pay close attention to the AI Act and these Guidelines and review their AI systems rigorously to address any deficiencies and potential prohibited practices during this transitional period.


[1] The Guidelines are available here: https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act

 

Mgr. Tereza Pechová, junior lawyer – pechova@plegal.cz

Mgr. Jakub Málek, managing partner – malek@plegal.cz

 

13. 2. 2025

 

www.peytonlegal.en

 

 
