As of 2 February 2025, the first wave of regulatory obligations for artificial intelligence (AI) set out in Regulation (EU) 2024/1689 (AI Act) has become applicable. Although the AI Act entered into force on 1 August 2024, its implementation is gradual, and 2 February 2025 marks the first milestone of directly applicable obligations.
First wave and prohibited practices
The provisions of the AI Act that are now effective concern the definition of an AI system, AI literacy, and a limited number of prohibited AI practices that pose unacceptable risks in the EU.[1]
We have already covered prohibited practices and recommend reading our article on Prohibited AI Practices under the AI Act. To briefly reiterate, the prohibition applies to AI practices that pose unacceptable risks. The prohibited practices are listed in Article 5 of the AI Act and described in more detail in the Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). The prohibition of these practices aims to protect fundamental rights, including human dignity, privacy and non-discrimination. Given their high potential for harm, it was logical to place their prohibition at the very start of the AI Act’s implementation calendar.
In addition to the prohibited AI practices, however, the AI system definition and AI literacy provisions have also been effective since 2 February 2025. Why are these topics scheduled first in the implementation calendar, alongside the prohibited practices?
AI system definition and AI literacy
These parts of the AI Act serve primarily to support a uniform interpretation of the regulation. The AI Act emphasises the development of AI literacy to help people understand how AI works, recognise its risks and use it safely. For these reasons too, a proper understanding of the definitions, in particular of what does and does not fall under the notion of an AI system, is crucial.
AI system definition
The legal definition of an AI system can be found in Article 3(1) of the AI Act, which defines an AI system as ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.
Building on this brief statutory definition, on 6 February 2025 the European Commission issued the Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act) (the “Guidelines”)[2], which explain the practical application of the legal concept of an AI system as enshrined in the AI Act. In the European Commission’s view, the Guidelines should make it easier for obliged entities to determine whether their software system constitutes an AI system and thus facilitate the proper application of the rules set out in the AI Act.
The Guidelines add that an AI system may comprise both hardware and software components and must be able to operate with varying degrees of autonomy, i.e. with some independence from human involvement. Adaptiveness, the ability to change behaviour after deployment, is also an important feature of some AI systems, but it is not a necessary condition for a system to fall into the AI category. According to the Guidelines, every AI system operates with certain objectives, which can be explicit (clearly defined) or implicit (resulting from the system’s behaviour and its interaction with the environment). A key aspect is the ability of an AI system to infer outputs from the input it receives, which distinguishes it from conventional software that operates only on static, human-defined rules.
An essential part of the Guidelines is the key they provide for distinguishing, from a legal perspective, between an AI system and traditional software. An AI system is distinguished primarily by its ability to learn, analyse patterns and autonomously adapt its outputs. Conversely, systems performing only basic data operations, traditional optimisation algorithms or simple prediction models are not considered AI systems under the definition in the AI Act.
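For illustration only, the contrast can be pictured in code. The Guidelines themselves contain no such example, and the sketch below is a conceptual simplification, not a legal test; the lending scenario, function names and figures are entirely hypothetical. The first function applies rules fixed in advance by a programmer (traditional software), while the second derives its decision rule from example data, i.e. it infers how to generate its output from the input it receives.

```python
# Illustrative contrast between rule-based software and a system that
# derives its decision rule from data. All names and numbers are hypothetical.

def rule_based_check(income: float, debt: float) -> bool:
    """Traditional software: the output follows rules fixed by the programmer."""
    return income > 30_000 and debt / income < 0.4


def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """A deliberately trivial 'learning' step: derive an income threshold
    from labelled examples instead of hard-coding it."""
    approved = [income for income, ok in examples if ok]
    rejected = [income for income, ok in examples if not ok]
    # Place the threshold halfway between the two groups seen in the data.
    return (min(approved) + max(rejected)) / 2


def learned_check(income: float, threshold: float) -> bool:
    """The decision rule was derived from data, not written by hand."""
    return income > threshold


if __name__ == "__main__":
    print(rule_based_check(income=40_000, debt=10_000))   # rule fixed in code

    training_data = [(20_000, False), (25_000, False), (45_000, True), (60_000, True)]
    t = learn_threshold(training_data)
    print(f"learned threshold: {t}")                       # derived from examples
    print(learned_check(income=40_000, threshold=t))       # data-driven output
```

Whether any real-world system falls within the AI Act’s definition must, of course, be assessed against Article 3(1) and the Guidelines in light of the system’s actual technical characteristics, not against a simplified sketch such as this.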
The European regulation of AI, as enshrined in the AI Act, therefore only applies to systems meeting the definition in Article 3(1) of the AI Act. Importantly, the Guidelines are not legally binding but serve as an important interpretative aid in the application of the AI Act and the obligations set out therein.
AI Literacy
Article 4 of the AI Act requires all providers and deployers of AI systems to ensure that their employees and other persons in a similar position have a sufficient level of knowledge and understanding of AI, including the opportunities and risks that AI presents. This requirement applies to all companies using AI, even where the AI systems they use qualify as low-risk.
In practice, this means in particular implementing internal policies and guidelines for the governance and use of AI systems and providing sufficient training to employees and other relevant persons in these areas.
In parallel, on 4 February 2025 the European Commission published a living repository of best practices to support learning and exchange in the field of AI literacy (the “Repository”).
The Repository was created on the basis of a survey of members of the AI Pact[3], the European Commission’s initiative to support organisations in preparing for the implementation of the measures arising from the AI Act by sharing experience, training and voluntary commitments to transparency and the responsible use of AI. The AI Pact brings together a network of stakeholders for knowledge exchange and enables companies to implement key measures proactively before the legislation becomes fully applicable.
In the Repository, the EU AI Office has gathered the ongoing practices of AI Pact members and compiled a list of recommended practices that can serve as examples of established AI literacy measures. The list of best practices is not exhaustive and will be regularly updated and expanded. The practices published so far are grouped by their level of implementation (fully implemented, partially implemented, planned) and listed alphabetically. Notably, the AI Pact members featured in this list of best practices include Booking.com and the French company Criteo.
However, merely reproducing the practices set out in this living repository does not create an automatic presumption of full compliance with Article 4 of the AI Act.
The aim of the Repository is mainly to promote education and the exchange of experience between providers and deployers of AI systems under the AI Act; it remains up to each obliged entity to bring its activities into line with the AI Act in light of its actual circumstances.
Conclusion
The next significant milestone is 2 August 2025, the latest date by which EU Member States must designate national authorities responsible for enforcing the AI Act. Rules on penalties, AI governance and confidentiality will also become applicable on that date.
EU Member States therefore have until 2 August 2025 to put in place national measures for enforcing the AI Act, and the first enforcement actions can be expected in connection with this deadline in the second half of 2025.
Obliged entities should use the first half of 2025 to put in place a robust AI governance strategy, to take the necessary steps to remedy any shortcomings in complying with the obligations set out in the AI Act beyond the prohibited practices, and to focus on properly training their employees and collaborators in AI.
[1] EUROPEAN COMMISSION. The first rules of the Artificial Intelligence Act are now applicable [online]. 3 February 2025 [cit. 2025-03-04]. Available from: https://digital-strategy.ec.europa.eu/en/news/first-rules-artificial-intelligence-act-are-now-applicable
[2] The original version of the Guidelines is available here: https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application
[3] EUROPEAN COMMISSION. AI Pact [online]. [cit. 2025-03-04]. Available from: https://digital-strategy.ec.europa.eu/en/policies/ai-pact
Mgr. Tereza Pechová, junior lawyer – pechova@plegal.cz
Mgr. Jakub Málek, managing partner – malek@plegal.cz
6. 3. 2025