Effective regulation of emerging and disruptive technologies is among the chief legislative challenges of contemporary policy-makers, including (perhaps especially) those concerned with internationally binding regulation. The EU is particularly sensitive to such challenges. Its freshly unveiled Artificial Intelligence Act is an ambitious proposal to regulate AI technologies in a human-centric way. 

In April 2021, the European Commission unveiled its ambitious proposal for the Artificial Intelligence Act, which outlines European lawmakers’ vision of regulating AI-based technologies in a human-centric way. Building on the Commission’s 2020 White Paper on Artificial Intelligence, it sets out practical steps which the European institutions envision as part of a ‘trustworthy’ future for AI-based technologies. Like the preceding White Paper, it is explicitly informed by the human rights protections enshrined in the EU Charter as well as other international instruments. However, will that be enough to build confidence in such a controversial industry?

The Proposal – and its sweeping ambition

The draft Regulation will set new standards for the deployment of AI-based technologies within the EU single market. The crux of the proposal lies in the classification of some AI products as ‘high risk applications’, making them subject to stricter safety compliance requirements. AI products can fall within this category irrespective of the industry sector in which they are used. In other words, ‘high risk applications’ for the purposes of this Regulation are not tied to specific use or industries (as has been rumoured previously). 

A ‘high risk application’ of AI technology has been defined rather loosely thus far: it is to be determined by a two-step test, which first considers the technology’s intended purpose and then weighs any potential harm it may bring about, in terms of both its severity and its probability of occurrence. The proposal lists examples of possible harm to be considered, including disruptions to the health and safety of citizens or to the provision of basic goods and services, and adverse impacts on an individual’s fundamental rights. Such a (flexible) definition seems to be aimed primarily at certain use-cases where AI deployment has been especially controversial, such as recruitment programs, emergency service dispatch systems, bank credit assessments, or law enforcement-adjacent algorithms. Products which fall within these ‘high risk’ categories are to be subject to rigorous compliance requirements, including risk management and post-market surveillance analyses. The proposal also sets standards of accuracy and consistency of results, any breach of which has to be reported to the oversight authority within 15 days of detection; otherwise, substantial monetary sanctions or exclusion from the EU single market could ensue.

If the proposal is approved by both the European Parliament and the Council, it will most likely apply extraterritorially. Like the GDPR in the field of data protection, it will take effect in all situations or transactions where European consumers are involved. This makes the forthcoming Regulation even more influential, as it is likely to bind not only European companies but also international tech giants, under threat of exclusion from the European market in case of non-compliance.

The problem with artificial intelligence

AI is often used as a scapegoat for any uncomfortable aspect of the onslaught of the fourth industrial revolution. In reality, it is more common and mundane than the public tends to imagine. The often-reductive term ‘artificial intelligence’ refers to a diverse field of computer science concerned with developing models and algorithms that enable computers to complete tasks which formerly required human effort. Thanks to advances in AI, computers can now categorise pictures based on what they show, recognise songs or make recommendations while we shop online – all tasks that people have accepted in their daily lives.

At the same time, some uses of AI technology have raised concerns among human rights scholars and activists alike. Perhaps most critics have warned against the risk of coded bias associated with the deployment of facial recognition, CV scanning software or other AI-based technologies capable of significantly altering individuals’ lives. Studies have shown that women and people of colour are disproportionately more likely to be misidentified by commercially available facial recognition software, and even technological giants like Twitter could not prevent the racial biases of their algorithms. In US jurisdictions where facial recognition programs are already used to assist in the arrests of criminal suspects, reports of unjust arrests have increased and racial discrimination lawsuits have been filed by the ACLU. This disparity is likely what the EU attempts to minimise with its new proposal, placing similarly influential algorithms in the ‘high risk’ category and requiring high accuracy and consistency of results. Ironically, some commentators point out that the EU may be the source of its own misfortunes when it comes to biased algorithms: since AI models necessarily reflect the biases present in the datasets on which they were trained, the EU’s strict emphasis on data privacy makes it difficult for researchers to compile reliable datasets large enough to train unbiased programs.

Another potential issue concerns the lack of transparency. Commercially available programs using machine learning are often compared to ‘black boxes’: their results are clear, but explaining how precisely the computer arrived at them is nearly impossible. This is especially a problem in cases of life-altering decisions or AI technologies used to exercise the coercive power of states. Even when mistakes in these important algorithms are discovered, remedying them is costly and complicated, with potential adverse effects on efforts to protect human dignity and prevent breaches of rights.

Between human rights and innovation

With that being said, the proposed Regulation also aims to support innovation and AI development within EU Member States, laying down regulatory sandboxing schemes within which technological start-ups and developers will be able to build and test their technologies before bringing them to the market. AI and its further development carry great promise for the future of automation and digitisation, and the EU is keen not to fall behind other large economies in reaping its benefits.

Yet, with technological progress, more and more questions are being raised regarding the implications of deploying new technology. For example, a citizens’ committee is currently collecting signatures in support of the Reclaim Your Face initiative, which seeks to ban the use of AI-based facial recognition technology in public spaces out of concern for privacy, fundamental rights and potential discrimination.

One of the main criticisms of the proposed Act emerging from the technological community warns of a possible stifling of progress due to an undue regulatory burden placed on innovative technologies. Given a certain ambiguity about whether the Regulation is expected to apply horizontally or sectorally, some voice doubts about whether it will be nimble enough to address the nuances of all the diverse technologies and models covered by the blanket term of AI. Additionally, some commentators criticise the hypothetical nature of many of the examples upon which the regulation is built, arguing that it may in fact be too soon to impose such a far-reaching restriction on a phenomenon which may be understood as still being in its infancy. Arguments for evidence-based regulation stipulate that we need to understand the effects and implications of AI deployment in the European context before attempting to build a new regulatory framework around them. The overall reception of the proposal suggests that many questions remain, and debates are likely to be long and passionate.

Will the proposed Regulation be effective in inspiring public confidence in the safety and reliability of AI technologies? The impressions seem mixed thus far – a lot will depend on potential changes to the proposal throughout the legislative process and on the framework’s eventual implementation and enforcement. The EU’s ambition to lead the way in terms of tech regulation in a human-centric way is appreciated by some and questioned by others. Only time will tell whether this approach will prove effective in practice. 



Aspen Institute Central Europe. AI v Česku. Co s evropskou regulací? Youtube, 29 June 2021 (https://www.youtube.com/watch?v=hqOYQb2jJrM&t=3263s).

BBC. Twitter finds racial bias in image-cropping AI. 20 May 2021 (https://www.bbc.com/news/technology-57192898).

Desierto, Diane. Human Rights in the Era of Automation and Artificial Intelligence. EJIL: Talk!, 26 February 2020 (https://www.ejiltalk.org/human-rights-in-the-era-of-automation-and-artificial-intelligence/).

European Commission. White Paper on Artificial Intelligence – A European approach to excellence and trust. COM(2020) 65 final, 19 February 2020 (https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf).

European Commission. Proposal for a Regulation laying down harmonised rules on artificial intelligence – Artificial Intelligence Act. COM(2021) 206 final, 21 April 2021 (https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence).

Hill, Kashmir. Wrongfully Accused by Algorithm. New York Times, 24 June 2020 (https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html).

Lomas, Natasha. EU plan for risk-based AI rules to set fines as high as 4% of global turnover. TechCrunch, 14 April 2021 (https://techcrunch.com/2021/04/14/eu-plan-for-risk-based-ai-rules-to-set-fines-as-high-as-4-of-global-turnover-per-leaked-draft/).

Pěchouček, Michal. Umělá inteligence a život zítřka. Neurazitelny.cz (https://neurazitelny.cz/umela-inteligence-zivot-zitrka-michal-pechoucek/).

Ryngaert, Cedric and Taylor, Mistale. The GDPR As Global Data Protection Regulation? Symposium on the GDPR and International Law. AJIL Unbound, Vol. 114 (2020), pp. 5-9.


Numbers Projected on Face, author: Mati Mango, 21 November 2020, source: Pexels, CC0, edits: cropped.