On 21 April 2021, the European Commission presented a proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (the Artificial Intelligence Regulation) and amending certain legislative acts of the EU (hereinafter the “Proposal for an Artificial Intelligence Regulation”). The Proposal for an Artificial Intelligence Regulation follows up on the European Strategy for Artificial Intelligence of 2018, the Ethics Guidelines for Trustworthy Artificial Intelligence of 2019, and the White Paper on Artificial Intelligence of 2020.
The Proposal for an Artificial Intelligence Regulation contains a comprehensive set of rules for the governance of artificial intelligence technologies in the European Union, which includes in particular:
- rules for the development, placing on the market, and use of artificial intelligence systems,
- the prohibition of certain practices in the field of artificial intelligence,
- specific requirements to be met by high-risk artificial intelligence systems,
- transparency rules for artificial intelligence systems, and
- rules for market monitoring and supervision.
Definition of artificial intelligence
Artificial intelligence, or more precisely an artificial intelligence system, is defined by the Proposal for an Artificial Intelligence Regulation as “software that is developed with one or more of the techniques and approaches listed in Annex I [of the Regulation] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”[1] Artificial intelligence is a rapidly evolving group of technologies that can contribute to a wide range of economic and societal benefits across the entire spectrum of industries and social activities.
Scope of the Proposal for an Artificial Intelligence Regulation
The Proposal for an Artificial Intelligence Regulation will regulate any use of AI technologies in the European Union, i.e., it will target providers and other entities placing AI systems on the market in the European Union, regardless of whether they are established in the EU, as well as all users of AI systems in the European Union.
The aim is for the rules of the regulation to apply to entities and AI systems both inside and outside the European Union whenever those systems affect EU citizens in some way, so that, as with the GDPR, the rules become a legal standard that entities outside the European Union must also respect.
Risk-based approach
The Proposal for an Artificial Intelligence Regulation introduces a categorization of AI according to its potential risk: unacceptable risk, high risk, limited risk, and minimal risk, in order to ensure that AI systems are safe, transparent, ethical, impartial, and under human control. The individual AI categories are discussed in more detail in the following paragraphs of this article.
Unacceptable risk
All AI systems identified by the Proposal as posing an unacceptable risk are strictly prohibited. This category covers a limited set of particularly harmful uses of artificial intelligence that are contrary to the values of the European Union because they violate fundamental rights. These include AI systems or applications that manipulate human behaviour to circumvent users’ free will, systems that allow governments to carry out so-called social scoring, and systems for the remote biometric identification of persons in publicly accessible spaces for law enforcement purposes.
High risk
The high-risk category lists a limited number of AI systems that may adversely affect people’s safety or their fundamental rights. All remote biometric identification systems, for example, are considered high risk.
The list of high-risk AI systems contained in the Proposal for an Artificial Intelligence Regulation includes the following AI technologies used in:
- critical infrastructure (e.g., in transport) which could endanger the lives and health of citizens,
- education or vocational training, which may determine access to education and/or the further career of individuals (e.g., scoring of examinations),
- safety components of products (e.g., AI applications in robotic surgery),
- employment, personnel management, and access to self-employment (e.g., software for sorting CVs in recruitment procedures),
- essential private and public services (e.g., credit scoring that may prevent citizens from obtaining a loan),
- law enforcement, which may interfere with fundamental human rights (e.g., assessing the reliability of evidence),
- migration, asylum, and border control management (e.g., authentication of travel documents),
- the administration of justice and democratic processes (e.g., applying the law to a specific set of facts).
For these high-risk AI systems, the Proposal for an Artificial Intelligence Regulation introduces strict requirements that must be met before the systems can be placed on the market. These include, for example, requirements concerning:
- the quality of the data sets used,
- risk assessment and mitigation systems,
- recording activities to ensure traceability of results,
- detailed documentation containing all information about the system and its purpose,
- clear and adequate information for users, and
- appropriate measures to ensure human supervision.
Limited risk
According to the Proposal for an Artificial Intelligence Regulation, AI systems intended to interact with natural persons fall into the limited-risk category unless they are classified as high-risk systems. These systems are subject to specific transparency obligations, for example where there is a clear risk of manipulation (e.g., through the use of chatbots). Users should be aware that they are interacting with a machine so that they can make an informed decision about whether to continue.
Supervision and sanctions
Compliance with the obligations arising from the Proposal for an Artificial Intelligence Regulation will be monitored by the Member States through national authorities set up for this purpose. This activity will consist of deciding on sanctions, including administrative fines, and taking measures to ensure the proper and effective implementation of the Proposal for an Artificial Intelligence Regulation.
However, to harmonize national rules in the European Union, guidelines on procedures for setting administrative fines will be developed by the European Commission based on the EU Council recommendation.
Some thresholds for sanctions are also set directly by the Proposal for an Artificial Intelligence Regulation. The amount of sanctions imposed may be (an illustrative calculation follows the list below):
- up to EUR 30 million or, if the offender is a company, 6 % of its total worldwide annual turnover for the preceding financial year (whichever is higher), for breaches of the prohibited practices or non-compliance with the data requirements;
- up to EUR 20 million or 4 % of the total worldwide annual turnover for the preceding financial year, in the event of non-compliance of the AI system with any other requirements or obligations under the Proposal for an Artificial Intelligence Regulation;
- up to EUR 10 million or 2 % of the total worldwide annual turnover for the preceding financial year, in the event of incorrect, incomplete, or misleading information being supplied to notified bodies and competent national authorities in reply to their request.
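To illustrate how the “whichever is higher” rule works (using purely hypothetical figures): a company with a total worldwide annual turnover of EUR 600 million that engages in a prohibited practice could face a fine of up to EUR 36 million, because 6 % of its turnover (EUR 36 million) exceeds the fixed ceiling of EUR 30 million and the higher of the two amounts applies.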
European Artificial Intelligence Board and national supervision
The Proposal for an Artificial Intelligence Regulation also establishes a European Artificial Intelligence Board composed of representatives of the Member States (i.e., senior representatives of the relevant national supervisory authorities), the European Data Protection Supervisor, and the European Commission. The aim of the European AI Board should be to facilitate the implementation of the proposed rules and the development of AI standards. The European AI Board will issue recommendations and opinions to the European Commission on high-risk AI systems and other aspects important for the effective and uniform implementation of the new rules.
At the national level, Member States will have to designate one or more competent national authorities, including a national supervisory authority, to oversee the application and implementation of the Proposal for an Artificial Intelligence Regulation. This national authority will also represent the Member State on the European AI Board and will be able to turn to it for consultations.
Adoption and effectiveness
The Proposal for an Artificial Intelligence Regulation will be adopted only after completing the European Union’s legislative process, which is currently estimated to take around two years. Once the regulation enters into force, most of its provisions will become applicable only after a further two-year transitional period.
The new obligations for AI providers and users are therefore not expected to take effect before 2025.
Conclusion
The Proposal for an Artificial Intelligence Regulation comprehensively regulates the so far unregulated area of artificial intelligence, which is highly desirable given the constant progress and development of the technology sector.
Given that the proposed regulation still has to go through the legislative process of the European Union, its wording is likely not final and will be adapted in response to the comments of the European Parliament, the Council of the EU, and other stakeholders.
If you have any questions regarding the Proposal for an Artificial Intelligence Regulation or current legislation, we are at your disposal; do not hesitate to contact us.
Tereza Pšenčíková, LL.M., junior lawyer – psencikova@plegal.cz
Mgr. Jakub Málek, partner – malek@plegal.cz
Kateřina Roučková, legal assistant – rouckova@plegal.cz
21. 06. 2021
[1] https://eur-lex.europa.eu/legal-content/CS/TXT/HTML/?uri=CELEX:52021PC0206&from=EN