
The AI Act: The New Global Standard for AI Regulation?

Disclaimer: The views expressed are those of the individual author. All rights are reserved to the original authors of the materials consulted, which are identified in the footnotes below.


By Maiya Dario


Artificial intelligence (AI) gives machines the ability to make decisions independently from available data, and it has revolutionised multiple aspects of our lives. Personally, we turn to Siri and Alexa for assistance, and are targeted by creepily accurate tailored advertisements. In the field of technology, AI is being used to develop self-driving cars and autonomous surgical robots [1]. However, AI comes with unprecedented risks. For instance, deep fakes, fake media in which AI replaces one person’s likeness with another’s, can look strikingly realistic [2]. This can deceive the public and thereby derail political campaigns (e.g. Jordan Peele voicing Obama in the video where Obama calls Trump a “complete dipshit”) [3]. In response, the European Commission proposed the first legal framework on AI, the AI Act, which aims to foster trustworthy AI by guaranteeing people’s fundamental rights while furthering AI development [4].



The General Framework

The Meaning of AI

Article 3 defines AI systems as “software that is developed with ... techniques and approaches” including machine learning and logic- and knowledge-based approaches, amongst others (Annex I), “and can” for certain “objectives, generate outputs such as … predictions … or decisions influencing the environments they interact with.” The breadth of this definition allows the Act to bring a wide range of AI technologies within its scope. Consequently, the Act’s impact is far-reaching, regulating almost every AI system from spam filters to dark-pattern AI: user interfaces designed to mislead people without their awareness [5].


The Risk-based Approach

The magnitude of the obligations depends on the level of risk that a particular AI system presents to fundamental rights such as privacy and security [6]:


Minimal Risk

AI systems that present minimal to no risk fall under this category; they constitute the majority of AI systems used in Europe. Examples include “AI-enabled video games” and spam filters. Though these aren’t subject to formal obligations, Article 69 proposes voluntary codes of conduct, which give effect to the principles, such as accountability, fairness and robustness, that ground the obligations imposed on higher-risk AI systems.

Limited Risk

AI systems that pose risks to transparency fall under this category. These include deep fakes, systems involving human interaction (e.g. chatbots acting as personal assistants) and emotion recognition or biometric categorisation systems (Title IV). Article 52 introduces a duty of disclosure, giving effect to the principle of transparency. Consequently, people have the right to know when they’re watching a deep fake, speaking to a chatbot or being subjected to an emotion recognition system.


High Risk

This category encompasses two kinds of AI systems (Chapter I, Title III). The first are AI systems embedded in products as safety components. These are already subject to sectoral legislation, so compliance with that legislation results in compliance with the Act. The second are stand-alone AI systems used in areas including employment, “management and operation of critical infrastructure,” and “biometric identification and categorisation of natural persons,” amongst others (Annex III). Before these systems can enter the EU market, Article 19 requires them to undergo conformity assessment and bear the European conformity (CE) marking. To achieve this, they must comply with the principles of “data and data governance,” “transparency for users,” “human oversight,” “accuracy, robustness and cybersecurity” and “traceability and auditability” (Chapter II). For instance, to satisfy the “accuracy, robustness and cybersecurity” principle, a system’s developers must provide users with metrics on the system’s accuracy, backup plans in the event of errors and solutions to cybersecurity threats, respectively [7].
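By way of illustration only, one way a provider might keep these user-facing disclosures together is as a simple structured record. The Act prescribes no such format or field names; everything below is a hypothetical sketch:

```python
from dataclasses import dataclass


@dataclass
class HighRiskSystemDossier:
    """Hypothetical record of the disclosures described above: accuracy
    metrics for users, a backup plan for errors, and cybersecurity
    measures. Illustrative only; not a structure defined by the Act."""
    system_name: str
    accuracy_metrics: dict          # e.g. headline accuracy on held-out tests
    backup_plan: str                # what happens when the system errs
    cybersecurity_measures: list    # mitigations for anticipated threats


dossier = HighRiskSystemDossier(
    system_name="cv-screening-assistant",         # hypothetical system
    accuracy_metrics={"test_accuracy": 0.94},
    backup_plan="Escalate low-confidence outputs to a human reviewer.",
    cybersecurity_measures=["input validation", "model access controls"],
)
print(dossier)
```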

Unacceptable Risk

This category encompasses AI systems that manipulate vulnerable groups or use subliminal techniques to alter a person’s behaviour, in both cases to the extent of causing psychological or physical harm, as well as those that provide social scoring or real-time biometric identification in public spaces for law enforcement (Title II). One example of a social-scoring AI system is China’s social credit system, which assesses each citizen’s trustworthiness and uses that assessment to determine whether they’re entitled to certain benefits, such as priority healthcare [8]. Article 5 prohibits these systems outright, with narrow exceptions for real-time biometric identification in public spaces for law enforcement.


A Major Concern

Critics, such as Mueller, have argued that the Act inhibits innovation [9]. This concern can be allayed, since the Act aims to balance the protection of people’s fundamental rights against the development of AI. This is evidenced by its risk-based approach: by ensuring that protective measures are proportionate to the risk a given AI technology presents, the Act strikes a balance between the two competing interests and avoids hampering innovation unnecessarily. Furthermore, the Act actively aims to promote innovation (1.1, explanatory memorandum). For instance, Title V provides “measures in support of innovation,” some of which encourage the establishment of AI regulatory sandboxes: controlled environments in which innovative technologies can be tested.


Implications

The Act applies to any AI system that enters the EU market and to any AI system whose output is used in the Union (Article 2). Consequently, its effects will be felt beyond Europe. It’s probable that this will set a global standard for the regulation of AI, just as the General Data Protection Regulation did for data protection.


The penalty for breaching the Act ranges from €20-30 million or 4-6% of a business’s global annual turnover, whichever is higher (Article 71).
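To make the “whichever is higher” rule concrete, here is a minimal sketch in Python using the top tier of the scale; the turnover figure is hypothetical and nothing beyond the Article 71 caps comes from the Act:

```python
def maximum_fine(global_turnover_eur: float,
                 flat_cap_eur: float = 30_000_000,       # top-tier flat cap
                 turnover_share: float = 0.06) -> float:  # top-tier 6%
    """Return the greater of the flat cap and the turnover-based cap."""
    return max(flat_cap_eur, turnover_share * global_turnover_eur)


# Hypothetical business with EUR 1 billion in global annual turnover:
print(f"EUR {maximum_fine(1_000_000_000):,.0f}")  # EUR 60,000,000, so 6% applies
```

Given that exposure, businesses need to take proactive measures to ensure compliance with the Act. Burt recommends three steps [10]: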

Firstly, impact assessments. Businesses should assess the risks their AI systems present and the measures in place to address them, then document their findings. Secondly, independent reviews. Businesses should seek third-party reviews of their systems’ risks and have those systems validated. Thirdly, continuous reviews. Once businesses deploy their systems, they should plan how these will be monitored on an ongoing basis (a minimal sketch of one such check follows below). These reviews are essential because AI systems evolve over time.
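As a loose illustration of what continuous monitoring might look like in practice, and noting that neither Burt nor the Act prescribes any particular tooling, the sketch below compares a deployed model’s live accuracy against its validation baseline and flags drift beyond a chosen tolerance. All names, figures and thresholds are hypothetical:

```python
def accuracy(predictions: list, labels: list) -> float:
    """Fraction of predictions matching the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


def needs_review(live_accuracy: float,
                 baseline_accuracy: float,
                 tolerance: float = 0.05) -> bool:
    """Flag the system for review if live accuracy drifts more than
    `tolerance` below the baseline measured during validation."""
    return live_accuracy < baseline_accuracy - tolerance


# Hypothetical figures: 94% accuracy at validation, 87% observed live.
if needs_review(live_accuracy=0.87, baseline_accuracy=0.94):
    print("Accuracy drift detected: schedule an independent review.")
```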


These recommendations are central to the AI Act. For instance, Annex IV requires businesses to provide a description of a high-risk system’s “foreseeable unintended outcomes and sources of risks” and a plan to address these. Conducting these reviews is therefore a sound first step towards compliance with the Act.


 

[1] Lanna Deamer, ‘The artificial intelligence revolution’ (electronicspecifier.com, 18 March 2019) <https://www.electronicspecifier.com/products/artificial-intelligence/the-artificial-intelligence-revolution> accessed June 16, 2021


[2] Ian Sample, ‘What are deepfakes - and how can you spot them?’ (theguardian.com, 13 January 2020) <https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them> accessed June 16, 2021


[3] BuzzFeedVideo, ‘You Won’t Believe What Obama Says In This Video! ;)’ (youtube.com, 2018) <https://www.youtube.com/watch?v=cQ54GDm1eL0&t=1s> accessed June 16, 2021


[4] Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS COM/2021/206 final; European Commission, ‘Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence’ (ec.europa.eu, 21 April 2021) <https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682> accessed June 16, 2021


[5] darkpatterns.org, ‘Dark Patterns’ (darkpatterns.org, n.d.) <https://www.darkpatterns.org> accessed June 17, 2021


[6] Eve Gaumond, ‘Artificial Intelligence Act: What Is the European Approach for AI?’ (lawfareblog.com, 4 June 2021) <https://www.lawfareblog.com/artificial-intelligence-act-what-european-approach-ai> accessed June 17, 2021


[7] Ibid.


[8] Amanda Lee, ‘What is China’s social credit system and why is it controversial?’ (scmp.com, 9 August 2020) <https://www.scmp.com/economy/china-economy/article/3096090/what-chinas-social-credit-system-and-why-it-controversial> accessed June 17, 2021


[9] Benjamin Mueller, ‘The Artificial Intelligence Act Is a Threat to Europe’s Digital Economy and Will Hamstring The EU’s Technology Sector In The Global Marketplace’ (datainnovation.org, 21 April 2021) <https://datainnovation.org/2021/04/the-artificial-intelligence-act-is-a-threat-to-europes-digital-economy-and-will-hamstring-the-eus-technology-sector-in-the-global-marketplace/> accessed June 17, 2021


[10] Andrew Burt, ‘New AI Regulations Are Coming. Is Your Organization Ready?’ (hbr.org, 30 April 2021) <https://hbr.org/2021/04/new-ai-regulations-are-coming-is-your-organization-ready> accessed June 17, 2021
