Artificial intelligence (AI) is a topic currently moving the world, and the Czech Republic is no exception. Thanks to rapid technological progress, changes arrive faster than legislators can react. The Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (the “AI Act”), the first piece of legislation from the EU institutions in this dynamically developing area, was adopted by the Council of the European Union on 21 May 2024.
AI Act
Although the EU’s initial intention was to adopt a soft-law solution, as outlined in 2020 with the publication of the White Paper on Artificial Intelligence, societal developments and the growing importance of AI in contemporary society required a more rigorous approach, culminating in the publication of the proposal of the AI Act, which first saw the light of day in April 2021.
There have been significant developments since the initial proposal was published, most notably the release of generative systems such as ChatGPT to the general public. These developments had to be reflected in the proposed regulation, and so on 11 May 2023 the Committee on the Internal Market and the Committee on Civil Liberties, Justice and Home Affairs adopted amendments to the original text.
Further developments did not take long and on 13 March 2024 the European Parliament finally approved the proposed regulation as amended. The legislative process culminated on 21 May 2024, when the final text was approved by the Member States. The AI Act will enter into force on the twentieth day following its publication in the Official Journal of the European Union and will be fully applicable 24 months after its entry into force.
In general
As this is the first ever legal framework for artificial intelligence, considerable care must be taken in its interpretation. The AI Act aims first and foremost to ensure the safety of AI systems introduced on the EU market and to enhance legal certainty in order to encourage the necessary investment and innovation in AI. It also aims to improve the governance and effective enforcement of existing legislation governing fundamental rights and safety requirements.
Although the EU is currently considered a pioneer in the legal regulation of AI, it is far from being a major power in AI-related development. The United States in particular, along with China, leads in this field, while the EU invests considerably less in research. We will probably soon see what effect the new regulation has on the volume of investment in this area.
Definition of artificial intelligence
The primary contribution of the AI Act is to establish a legal definition of the term artificial intelligence, or AI system. Until now, the term has been widely used but remained a rather vague concept with no legal basis; this lack of a legal definition posed a potential problem, particularly in the area of liability.
The current definition is based on the definition already used by the Organisation for Economic Co-operation and Development (OECD). The wording of this definition is relatively technology-neutral so that the definition can be used for future AI systems.
In short, AI is defined as autonomously operating software that is capable of generating content, predictions, recommendations, or decisions that affect the environment in which the software is deployed.
What does the AI Act bring?
Classification of systems according to the level of risk
Although most AI systems pose only minimal risk, each system must be classified by risk level and the obligations corresponding to that classification must be observed.
The Regulation works with a total of four categories of risk level:
- unacceptable,
- high,
- limited and
- low or minimal.
Artificial intelligence systems with unacceptable levels of risk are now banned as they are contrary to the values of the European Union.
Explicitly, the AI Act prohibits the marketing or operation of systems that clearly threaten the life, health, safety or rights of individuals and thus create unacceptable risk. These include, for example, AI systems that cognitively manipulate human behaviour or exploit subliminal techniques against vulnerable groups (e.g. voice-controlled toys that promote violence in children), are used by public authorities to allocate social credit, or are used for emotion recognition or real-time remote biometric identification.
However, the AI Act also allows for an exception to the prohibition on the use of AI for biometric identification for the purpose of prosecuting serious crimes (e.g. terrorism), where, subject to court approval, it can be used to retrospectively biometrically identify offenders.
For systems exhibiting a high risk to the life, health and rights of individuals, the AI Act introduces two distinct regimes depending on the function performed, but also on the specific purpose of the AI system.
In general, high-risk systems fall into two categories:
- systems used as part of products regulated by European regulations to ensure their safety (e.g. automobiles, toys); and
- systems used in eight specific areas, including education, critical infrastructure, employment, law enforcement and border protection.
More stringent requirements are to be placed on providers when using high-risk AI systems, including, but not limited to:
- the establishment, application, documentation and maintenance of a risk management system, subject to regular updates, on the basis of which the necessary measures will be taken;
- preparation of technical documentation and its continuous updating;
- the transparency of the AI system used, achieved, inter alia, by providing concise, complete and factually correct instructions containing the identity and contact details of the provider and the features and capabilities of the system, including its intended purpose, level of reliability and cyber security; and
- the need for human supervision aimed at preventing or minimising risks.
A further obligation for providers of high-risk AI systems is mandatory registration before the actual placing on the market or putting into service in a publicly accessible database managed by the Commission.
The remaining two risk categories of artificial intelligence systems are not subject to fundamental requirements.
In the case of AI systems that recognise emotions, manipulate image, sound or video content (deepfakes) or chatbots, users must be made aware that they are using an AI system and be given the choice to continue using it.
The residual category of low or minimal risk systems is not limited by any specific requirements and will be able to be placed on the EU market almost without restriction. However, the AI Act foresees the development of codes of conduct that will encourage voluntary compliance with the requirements for high-risk AI systems and possibly add some additional sub-requirements in the context of environmental sustainability.
The category of so-called foundation models or general-purpose AI models
During the drafting of the final text, the category of general-purpose AI models was added in response to technological developments linked to the success of ChatGPT.
The basic characteristics of foundation models are that they are trained on large amounts of data, that they can process and “understand” diverse information about which they then provide a general output, and that they are adaptable to a wide range of different tasks.
Given the need for regulation that ensures safety and credibility while promoting innovation and overall competitiveness, new requirements for providers will also be introduced. Providers will now have to produce detailed technical documentation.
As for the specific type of foundation model known as a generative AI system, which is designed to create content such as text, images, video and audio, the range of requirements placed on providers expands further. In order to maintain transparency, providers are obliged to indicate that the content has been generated by a generative AI system and to implement sufficient safeguards to ensure that the content is generated in accordance with the law.
Undoubtedly the most discussed requirement was the obligation to publish a summary of the copyrighted data that the generative system uses for its development and training, in order to provide the most relevant information to users. This obligation is aimed at preventing copyright infringement. Generative systems often produce content very similar to works of art, and it is not possible to prove whether the system generated the content independently or whether it was largely inspired by another work and thus infringed existing copyright.
Regulatory sandbox
The AI Act sets up a so-called regulatory sandbox: a controlled environment that allows an AI system to be tested in real-world conditions before it is released to the market, and any shortcomings to be addressed or corrected.
Under certain conditions, providers may process personal data in the test environment without fully meeting the conditions under Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR).
New institutions
Another novelty is the establishment of the European Artificial Intelligence Board, composed of representatives of the Member States and the Commission, whose agenda is primarily to facilitate the consistent implementation of the AI Act and harmonisation across the Union. At national level, Member States are required to designate one or more supervisory authorities to oversee the correct implementation of the AI Act.
Advantages/disadvantages and impact on daily life
Artificial Intelligence is an increasingly used technology and is affecting most areas of everyday life. As more and more people invest in AI development, we will see more and more AI systems. For some time now, we have been dealing with them on a daily basis, for example in the form of chatbots that advise customers on some websites. Artificial intelligence has also found its use in personalising advertisements, medicine, the automotive industry and many other areas.
According to the published OECD report on the impact of artificial intelligence on the labour market of its member states[1], approximately 27% of all jobs fall into the category of highly vulnerable to automation, with a higher percentage of 35% in the Czech Republic.
Among the most at-risk jobs are those requiring high qualifications and years of experience, i.e. people working in medicine, finance, law, engineering and business. Paradoxically, research shows that people working in these fields use AI the most and rate its benefits very positively. The benefits of AI to the job market are then linked not only to cost reductions, but also to increased productivity and employee satisfaction, due to the reduction of tedious and potentially life-threatening tasks. Beyond the labour market benefits, AI offers new opportunities in public transport, energy, education and the green economy.
On the other hand, there are of course also negatives associated with artificial intelligence. As AI systems constantly evolve and learn, they can adopt undesirable patterns of behaviour and thus risk reproducing biased conclusions tied to persistent socially undesirable phenomena – gender discrimination or discrimination against people with disabilities, ethnic groups or other minorities. In addition, the use of AI systems will mean a gradual reduction or elimination of jobs, interference with the privacy of individuals, which must be minimised, and distortion of competition in situations where only some competitors accumulate relevant information.
Conclusion
Artificial intelligence is on the rise and will increasingly intrude into the lives of the general public. It is therefore crucial to adapt to developments and reap the benefits that AI brings, rather than succumb to its negatives. The legal regulation and legal implications of the use of AI technologies in business and other activities is a crucial issue.
Nowadays, the use of artificial intelligence systems leads to situations that are not addressed by the law, leaving providers, users and others in a legal vacuum that needs to be filled. Although the regulation of AI is still in its early days, the approved text of the AI Act is a first step that gives a hint of the direction the EU will take in the regulation of AI.
We at PEYTON legal will monitor developments in the field of AI regulation, in particular the implementation of the AI Act at the national level.
[1] OECD, OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market. OECD Publishing [online]. 11 July 2023 [cited 28 July 2023]. Available from oecd-ilibrary.org.
Mgr. Jakub Málek, managing partner – malek@plegal.cz
Mgr. Kateřina Vyšínová, junior lawyer – vysinova@plegal.cz
13. 6. 2024