With the EU AI Act now official, Europe is setting the stage for how AI should be developed and managed. This groundbreaking move introduces a way to categorize and handle AI risks, putting a stop to some practices that could be harmful or unfair. Now, everyone's watching to see if this act will have a global influence, much like the GDPR did.

On March 13, 2024, the European Parliament gave its final approval to the AI Act with an overwhelming majority (523 votes in favour, 46 against, 49 abstentions). This piece of legislation, in the works since April 2021 and finalised in December 2023 through a collaborative effort among the Commission, the Council, and the Parliament, is a world first.

It lays down uniform rules for the launch, use, and operation of AI across the EU, with risk management at its core. This is a big deal because it puts Europe ahead in tackling AI risks and crafts a legal framework that guides everyone involved in AI, from builders to users, clarifying what’s expected and required of them. It also aims to make things easier for businesses, especially smaller ones, with an eye on reliable and future-proof AI development.

This AI Act isn’t going solo; it’s part of a bigger plan that includes the AI Innovation Package and the Coordinated Plan on AI. The goal is to ensure that AI is developed in a way that’s safe and respects people’s and businesses’ rights, while also encouraging its uptake, investment, and innovation all over the EU.

The heart of these rules is championing trustworthy AI, not just in Europe but globally, making sure AI systems meet ethical standards, stay safe, and respect fundamental rights, especially as we deal with the challenges posed by powerful AI technologies. The rules recognise that while many AI systems are fairly harmless and could help tackle societal issues, there are still risks that need addressing.

Given how big of a deal this regulation is, especially with AI playing a major role in the tech future we’re heading towards, we’ve decided to take a closer look at the act, to really dig into what it means and its impact.


The European Parliament has approved the AI Act, a regulation on artificial intelligence that stands as a pioneering initiative on a global scale. This legislation sets out a comprehensive regulatory framework spanning the full spectrum of AI, from development to the use of artificial intelligence systems.
The AI Act delineates four levels of risk associated with AI systems. These range from systems deemed to pose an unacceptable risk, encompassing prohibited practices such as the employment of manipulative techniques or the compilation of facial recognition databases through indiscriminate scraping, to high-risk categories designated for critical sectors and subject to rigorous compliance obligations.
The AI Act embodies a distinctively European stance, contrasting with the United States’ predilection for self-regulation and China’s emphasis on state oversight, with the objective of not only protecting European citizens but also championing ethical and responsible AI on a global scale. This may precipitate a “Brussels Effect”, influencing international regulation of AI.

The European approach’s distinctiveness

Right from the start, European regulators have highlighted something special about the EU’s AI Act compared to laws elsewhere: it is the only all-encompassing piece of legislation tackling the whole AI development landscape.

Over in the US, the approach is more about letting the industry regulate itself and pushing innovation forward, with rules mostly taking the form of voluntary guidelines and industry self-regulation.

While the White House is pushing for something like an AI Bill of Rights, there are several more limited laws already in place. These either apply only in the state that passed them or deal with specific areas, and the spread of local AI laws has ended up creating a patchwork of state-level regulation.

Take the Equal Employment Opportunity Commission (EEOC) as an example. It has made clear that Title VII of the Civil Rights Act, which prohibits discrimination against job seekers and employees, applies whether the risk comes from people or from algorithms.

Then there’s New York City’s Local Law 144, which demands checks for bias in automated hiring processes. Meanwhile, places like California and New Jersey are busy cooking up their own specific laws that will start off focusing locally.

China, for its part, with its “Interim Measures for the Management of Generative Artificial Intelligence Services”, emphasises state control and economic dynamism: the Chinese government views artificial intelligence as a strategic tool for achieving economic and geopolitical objectives. It is an approach that raises considerable concerns regarding critical issues like privacy and civil liberties.

In China, companies need to get a license to offer generative AI services and must ensure their systems align with the country’s socialist values, steering clear of dissent.

The European Union, with the AI Act, is focused on keeping consumers safe, ensuring fairness, and maintaining security, aiming to become the world’s go-to for AI rules. As with the GDPR, the EU’s AI Act demands transparency, accountability, and a proper look at the risks of any AI use, especially those considered “high risk” for consumers.

The EU sees setting rules not as a roadblock but as a way to shield people from the possible downsides of AI. Some even see following these rules as a hidden competitive advantage, especially amid worries in Europe that tougher laws might slow its companies down compared to rivals with fewer hoops to jump through.

The AI Act and the Brussels Effect

In fact, European lawmakers are counting on kick-starting the same “Brussels Effect” we saw with the GDPR, this time with AI.

The Brussels Effect is basically the EU’s power to set the agenda for global markets by deciding on standards in areas like competition, environmental protection, food safety, and privacy. It’s about the market’s clout rather than the EU having to strong-arm these standards into place.

With the EU’s consumer market being massive and wealthy, backed by strong rule-making bodies, it’s hard for global companies to ignore the EU. Getting into this market means they have to play by the EU’s rules, which are usually pretty strict.
What’s really cool about the Brussels Effect is that companies tend to follow these rules elsewhere too, just to avoid the hassle and cost of juggling different rules in different places.

The basics of the AI Act

So, the European AI Act is about setting up a solid legal framework for AI, aiming for AI we can rely on. That means making sure AI systems respect our fundamental rights, stay safe, and remain ethical while dealing with the tricky parts of AI tech.

This set of rules covers the whole AI chain, from those bringing AI goods and services into the EU, no matter where they’re from, to users in the EU, and even the importers, distributors, and authorised representatives. And there’s more.

It even includes manufacturers who put AI in their products, as well as providers and users based outside the EU whose AI ends up being used here. But there are some things it doesn’t cover.

AI for military or national security purposes, AI made and used purely for scientific research, AI still in development before release, and AI components released as open source aren’t part of this. Also, if you’re just using AI for personal, non-work purposes, the act doesn’t apply.

As for when all this starts, the AI Act will enter into force twenty days after its publication in the EU’s Official Journal and will become generally applicable 24 months later.
But there are some exceptions: the prohibited practices become off-limits six months in, codes of practice arrive at nine months, rules for general-purpose AI, including its governance, at 12 months, and the obligations for high-risk AI systems at 36 months.
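Purely as an illustration, a minimal Python sketch can turn that schedule into concrete dates. The publication date below is a placeholder (the actual Official Journal date was not yet known when the act was approved), so every computed milestone is hypothetical.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day."""
    carry, month0 = divmod(d.month - 1 + months, 12)
    year, month = d.year + carry, month0 + 1
    days_in_month = [31, 29 if (year % 4 == 0 and year % 100 != 0) or year % 400 == 0 else 28,
                     31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

# Placeholder publication date -- hypothetical, used only to make the
# schedule concrete.
publication = date(2024, 7, 1)
entry_into_force = publication + timedelta(days=20)

# Months from entry into force until each set of rules applies,
# per the timeline described above.
schedule = [
    (6, "prohibited practices become off-limits"),
    (9, "codes of practice arrive"),
    (12, "general-purpose AI rules (incl. governance) apply"),
    (24, "regulation generally applicable"),
    (36, "high-risk AI system obligations apply"),
]

for months, label in schedule:
    print(f"{add_months(entry_into_force, months):%Y-%m-%d}  {label}")
```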

A European AI Office will be established to keep an eye on how the AI Act is being followed; it can hand out fines of up to €35 million or 7% of global turnover if needed.

A risk-based approach

As mentioned, the core of the AI Act is the adoption of a risk-based approach, categorising AI systems into four levels according to the sensitivity of the data involved and their specific use cases. Let’s examine them in detail; a small illustrative sketch follows the list.

Illustration of the risk pyramid, the founding principle of the AI Act [source: europa.eu]
  • Unacceptable-risk systems: systems that pose a significant threat to fundamental rights, democratic processes, and social values. Their use is prohibited, as they could compromise the integrity of critical infrastructure and cause serious incidents. This group includes social scoring systems; systems capable of manipulating people through subliminal techniques or exploiting the vulnerabilities of specific groups; and real-time biometric identification systems in publicly accessible spaces.

  • High-risk systems: used in critical sectors such as healthcare, transport, and justice, and subject to strict compliance assessments to ensure their accuracy, robustness, and cybersecurity. Human oversight is required in their deployment to guarantee accountability and an additional layer of security and protection; notably, a pre-deployment conformity assessment is demanded. This group includes tools used for personnel selection and systems used to assess individuals’ eligibility for public assistance and services.

  • Limited-risk systems: considered less risky than the above and therefore subject to fewer regulatory constraints, but bound by specific transparency obligations so that users are properly informed. This group includes systems that interact with humans, like chatbots; systems detecting emotions based on biometric data; and systems generating or manipulating content.

  • Minimal or non-existent risk systems: AI applications in the minimal-risk category, such as AI-enhanced video games and anti-spam filters. Here the legislator intends to minimise the regulatory burden, promoting innovation and development in areas where the risks associated with AI use are considered negligible or non-existent, thus fostering the growth of AI-driven technologies for the benefit of a wide range of industries and users.
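To make the taxonomy concrete, here is a minimal, purely illustrative Python sketch of how a compliance checklist might encode the four tiers and their headline obligations. The use-case labels and obligation summaries paraphrase the categories above; they are assumptions for illustration, not the Act’s legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment + human oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # essentially unregulated

# Hypothetical mapping of example use cases to tiers, paraphrasing the
# list above -- a real assessment would follow the Act itself.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "CV screening for personnel selection": RiskTier.HIGH,
    "eligibility assessment for public assistance": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "emotion detection from biometric data": RiskTier.LIMITED,
    "anti-spam filter": RiskTier.MINIMAL,
    "AI-enhanced video game": RiskTier.MINIMAL,
}

# Headline obligation per tier, as summarised in the list above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "use is prohibited",
    RiskTier.HIGH: "pre-deployment conformity assessment and human oversight",
    RiskTier.LIMITED: "transparency obligations (users must be informed)",
    RiskTier.MINIMAL: "no specific obligations",
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value} risk -> {OBLIGATIONS[tier]}")
```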

Prohibited practices

Article 5 of the EU AI Act presents a very detailed list of practices that are explicitly banned:

  • manipulative or deceptive techniques: it’s prohibited to market or use AI systems employing subliminal or deceptive techniques to significantly alter the behaviour of individuals or groups, leading them to make decisions they wouldn’t have otherwise made and that may cause significant harm

  • exploitation of vulnerabilities: the use of AI systems exploiting the vulnerabilities of specific individuals or groups due to their age, disability, or social or economic situation to distort their behaviour is forbidden

  • social evaluation and scoring: the use of AI to assess or score individuals or groups based on social behaviours or personal characteristics, when this leads to unjustified or disproportionate prejudicial treatment, is banned

  • criminal risk assessments: the use of AI systems for conducting criminal risk assessments based solely on individual profiling, unless such systems are used to support a human assessment based on objective facts directly related to the criminal activity, is prohibited

  • facial recognition and scraping: creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or from CCTV footage is forbidden

  • emotion inference: the use of AI systems to infer the emotions of individuals in the work or educational context is prohibited, except for medical or safety reasons

  • biometric categorisation: biometric categorisation that classifies people based on biometric data to infer aspects such as race, political opinions, union membership, religious beliefs, or sexual orientation is prohibited

  • remote biometric identification: the use of “real-time” remote biometric identification systems in public places for law enforcement is restricted, except in strictly necessary cases, such as the search for victims of crimes or the prevention of specific and imminent threats

These restrictions aim to prevent abuses and to ensure that AI is used ethically and in a way that respects fundamental rights.

Glimpses of Futures

Let us now endeavour to discern the potential future impact of the enactment of the European AI Act, using the five criteria of the STEPS matrix (social, technological, economic, political, sustainability).

S – SOCIAL: the AI Act proscribes all those systems that present an unacceptable risk to fundamental rights and freedoms, such as emotion recognition tools in the workplace or educational institutions and the biometric categorisation of sensitive data. It mandates that high-risk systems undergo an assessment of their impact on fundamental rights before being introduced to the EU market, and it affords citizens the opportunity to demand explanations regarding decisions made by systems that affect their rights. However, some civil rights organisations have highlighted the AI Act’s lack of safeguards for the most perilous uses of AI, arguing that exemptions for law enforcement, border controls, and migration management leave room for potential abuses. The use of high-risk systems, such as biometric identification at border controls, has been criticised for creating a double standard – with one set of rules protecting EU citizens and another for migrants and asylum seekers.

T – TECHNOLOGICAL: we have already examined which technologies fall under the regulation of the EU AI Act. However, further considerations pertain to the potential for continuing research and experimentation. There are arguably well-founded concerns that the AI Act could impose significant limits on technological development in Europe due to the “stringent obligations” placed on developers of cutting-edge technologies. These obligations could deter researchers, leading to a talent exodus in the field of artificial intelligence. There are also fears of possible brakes on the activities of startups, which could be disadvantaged compared to other entities active in regions where bureaucratic impositions are not as strict.

E – ECONOMIC: much of the debate surrounding the law has focused on the risk of excessively limiting European businesses in the field of AI. The law has been amended repeatedly to address the concerns of countries such as France, Germany, and Italy, which do not intend to repeat the mistake of allowing developing technological markets to be completely dominated by large overseas multinationals. Furthermore, it must be emphasised that to compete globally with AI powerhouses such as the United States and China, the EU needs to significantly increase its public investment in AI, pool it, and make it more visible. Significant investment is required from the EU and member states collectively, for research and for the roll-out of AI: in computing infrastructure, microchip production, and talent retention.

P – POLITICAL: the AI Act is a clear statement that Europe believes it is possible to regulate artificial intelligence while remaining open to business needs. The idea is to maintain a commitment to co-regulatory approaches that allow for iterative change and the introduction of innovative policy tools, such as ‘sandboxes’ (regulated experimentation areas), in order to find the right balance between rigorous and safe AI application and the freedom for the industry to explore and experiment with new products, services, or businesses under the supervision of regulators.

S – SUSTAINABILITY: the AI Act introduces the first provisions concerning the environmental impact of artificial intelligence systems, representing a step forward towards sustainable AI regulation. Nonetheless, these are still minimal provisions: it is hoped that future iterations of the AI Act will expand these rules to ensure that the AI industry progresses in an environmentally sustainable manner, adopting a more rigorous approach in assessing, mitigating, and continuously managing the environmental footprint of AI systems, including mandatory sustainability impact assessments and a possible extension of the Emissions Trading System to data centres and other high-consumption IT processes.

Written by:

Maria Teresa Della Mura

Journalist