Policy brief on the state of play of the Artificial Intelligence Act
The proposal for an Artificial Intelligence Act was put forward by the European Commission in 2021. The file now awaits the European Parliament and the Council of the EU finalising their amendments within their own institutions before carrying the file over to the interinstitutional negotiations (trilogue). This policy brief recaps the background and main elements of the Commission’s proposal, the Council’s and Parliament’s current positions, and the current status of the file.
Background to the issue
In April 2021, the European Commission published a proposal for a Regulation on Artificial Intelligence (AI Act), the first-ever horizontal legal framework on AI, which addresses the risks posed by AI. The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. The proposal is part of a wider AI package, which also includes measures to facilitate EU funding for research and innovation in AI.
The proposal establishes a technology-neutral definition of AI systems in EU law and addresses the risks associated with specific uses of AI, following a risk-based approach that sorts AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk.
- AI posing unacceptable risks, such as real-time remote biometric identification for law enforcement purposes, would be prohibited;
- High-risk AI systems, for example, CV-sorting software for recruitment procedures, would be authorised but subject to requirements and obligations to gain access to the EU market;
- Limited risk AI systems, such as deepfakes or chatbots, would be subject to light transparency obligations, such as making users aware that they are talking to a bot;
- Minimal risk AI, such as AI-enabled video games and any other system not included in the categories above, would be permitted without restrictions.
The proposal aims to guarantee the safety and fundamental rights of people and businesses with regard to AI, to ensure that Europeans can trust the AI they use, and to strengthen the EU’s ability to compete globally. The broader package also aims to boost the uptake of, investment in, and innovation in AI across the EU.
Key elements: Commission’s proposal and position
The Commission’s proposal for a Regulation on Artificial Intelligence (AI Act) includes:
- Harmonised rules: Establishing a single set of rules across the EU to prevent fragmentation, enforced through conformity declarations and a mandatory European conformity (CE) marking.
- Regulatory sandboxes: Ensuring legal certainty to encourage AI innovation and investment by creating AI regulatory sandboxes, controlled environments where new AI systems are tested before being put on the market.
- Implementation rules: Designating national competent authorities to enforce the rules and to keep an EU database of high-risk AI practices and systems up to date.
- A risk-based approach differentiating four types of AI systems according to the level of risk they pose:
  - Unacceptable-risk AI systems: harmful uses of AI that contravene EU values will be banned. Prohibited are, for example, manipulation of human behaviour through subliminal techniques, social scoring, and real-time remote biometric identification for law enforcement purposes.
  - High-risk AI systems: the focus of the regulation. High-risk AI is defined as either a product falling under EU product safety legislation, such as toys and medical devices, or a stand-alone high-risk AI system in an area listed in Annex III of the proposal, which includes, for example, management of critical infrastructure; AI systems intended for employment and recruitment; access to public services; and law enforcement. AI systems that are safety components of products would also fall into this category.
    Obligations and requirements: Providers of high-risk AI systems must conform to stringent quality standards and comply with disclosure, control and monitoring requirements. For example, they must operate a quality management system, draw up the necessary technical documentation and ensure that the AI system undergoes the relevant conformity assessment procedure before being placed on the market.
  - Limited-risk AI systems: permitted but subject to information or transparency obligations. Some systems can deceive people into thinking they are dealing with a human, for example chatbots and deepfakes. In these cases, the proposal imposes transparency requirements to ensure that the affected person is aware of being exposed to an AI application.
  - Minimal- or no-risk AI systems: permitted without restrictions. All other AI systems can be developed and used in the EU without legal obligations additional to existing legislation.
State of play in the Council of the EU
In the Council, negotiations between member states started under the Slovenian presidency, which began by organising a virtual conference on the regulation of AI, ethics and fundamental rights on 20 July 2021. The subsequent French presidency continued the work by circulating compromise texts of the proposed AI Act, the final one dated 15 June 2022. In June 2022, EU ministers took note of a progress report presented by the French presidency; work in the Council has since continued under the Czech presidency.
- Definition of AI: The Council is pushing for a narrow definition of “artificial intelligence” that limits the term to machine learning, thus excluding simpler systems from the scope of the Act. Although this is not the Council’s final position, the trend suggests that member states would like to reduce the Act’s scope. For instance, the Slovenian presidency introduced an article clarifying that general purpose AI systems are excluded from the scope of the regulation.
- High-risk AI systems: The Czech presidency also confirmed that AI systems used in high-risk scenarios whose output is “purely accessory” to the relevant decision will not be considered high-risk themselves. Moreover, the provision classifying systems that take decisions without human review as high-risk has been deleted, because “not all AI systems that are automated are necessarily high-risk, and because such a provision could be prone to circumvention by putting a human in the middle”. The Slovenian text also proposed that the list of high-risk uses be updated every two years, rather than every year as initially proposed.
- National security: The Commission’s draft already excludes AI systems used for military purposes from the regulation; the Council would additionally exclude AI systems used for national security purposes, even where those systems would otherwise be prohibited under the AI Act.
- Fundamental rights: The Council’s texts also contain amendments to ensure that the AI Act implements safeguards for fundamental rights. For example, the Slovenian text widens the prohibition on social scoring to include private actors.
- Open questions: It remains to be seen whether the Council will require fundamental rights impact assessments of AI systems or introduce rights and redress mechanisms for people affected by AI systems. On the enforcement and governance of the AI Act, the Council has not yet taken a strong stance. There are indications that member states object to the role the Commission assigns itself in the Act; however, the Slovenian text took only minimal steps to address this.
State of play in the Parliament
In the European Parliament (EP), the AI Act is managed jointly by the Committee on Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE), but MEPs in the Committee on Legal Affairs (JURI) have exclusive competence over several articles.
- IMCO/ LIBE Committee report: Members of the European Parliament (MEPs) from the IMCO and LIBE Committees published their first draft report on 20 April 2022. The draft included some relevant amendments, such as an obligation on providers to inform affected people that they are subject to a high-risk AI system and that users affected can complain to authorities and seek a remedy.
The Committees are still working out several key issues. Co-rapporteurs Brando Benifei (Italy, S&D) and Dragos Tudorache (Romania, Renew) circulated several compromise amendments for deliberation ahead of the technical meeting on the AI Act held on 30 August.
- The compromise amendments reportedly pertained to obligations for providers of high-risk AI systems, obliging them: to maintain logs automatically generated by their AI systems for at least six months and to immediately remove their system from operation if they believe it may be classified as prohibited.
- They would also oblige users of high-risk AI systems to cooperate with national authorities in order to demonstrate that the systems they use comply with the regulation.
- On the Commission’s proposal for AI regulatory sandboxes, there is disagreement within the Committees over the degree of compliance and oversight AI systems should be subject to in the sandboxes; left-leaning MEPs are pushing for more stringent rules, while the centre-right favours leniency.
- Moreover, while the Commission’s draft provides that the AI Act does not apply to AI systems used for military purposes, some MEPs, in line with the Council, would extend this exclusion to national security grounds, even for systems that would otherwise be prohibited under the AI Act.
- JURI Committee opinion: In parallel, on 12 September, the JURI Committee voted on its opinion with amendments to the AI Act. All the compromise amendments on articles within the committee’s exclusive competence were passed.
For example, an amendment would allow AI-generated so-called deepfakes to be deployed in creative or satirical cases if their use is flagged. Others established that the level of human oversight on high-risk AI systems should be proportionate to the risks and added details to complaint-and-remedy mechanisms for people affected by AI decisions.
Given JURI’s exclusive competence over those articles, the corresponding amendments will be carried into the Parliament’s final position, while the others are likely to exert a strong influence.
The Parliament and the Council are currently finalising their positions on the Commission’s draft within their own institutions; the three bodies will then enter trilogue negotiations. The vote on the joint IMCO–LIBE report is scheduled for October 2022, followed by a plenary vote in November 2022. According to co-rapporteur Tudorache, however, the timeline may slip: he has suggested that Parliament may not finalise its position on the AI Act until January 2023. The Council, for its part, aims for an agreement by December 2022.
Once both institutions have reached internal consensus, negotiations among the Commission, Parliament and Council will begin, most likely in January 2023, and could conclude with a final agreement in mid-2023. The earliest the regulation could become applicable to private companies is mid-2025.