
On April 21, 2021, the European Commission announced its proposal for European regulation of AI.

The proposal strives to be for AI what GDPR has been for data protection.
Over the coming years the proposal will go through multiple readings in the Parliament and Council, where modifications can be proposed to the act before it is finally adopted or dismissed.

After the AI Act was announced, most attention fell on its wide definition of AI and its blanket bans on 'manipulative AI'. Unfortunately, this focus has led many to miss the act's most important points.

I noticed there were no forum posts that would help someone get up to speed on the act. In this post I summarise the act's most important points, how it may affect the development of transformative AI, and the EA community's response to the proposal.

What are the act's important points?

Below I outline what I deem the most important points of the regulation, based on their effect on the development of transformative AI. To keep the summary brief, I skip lightly over many details of the act, such as regulatory sandboxes and special rules for biometric systems.

The act will apply to all EU countries and supersede any conflicting national law. Because of the act's broad definition of AI, it will be difficult for EU countries to make laws on AI that do not conflict with it.

The act does not apply to military use of AI; here countries are free to do as they see fit.

Rules for 'high-risk' AI

The act lists a series of 'high-risk' areas. Systems operating in a high-risk area are considered high-risk and must be reviewed and approved before they can be placed on the market. This means that the AI Act's regulation will not apply to AI developed and used internally by companies. High-risk systems include everything from AI managing electricity grids to AI deciding whom to promote or fire.

The high-risk areas, in which certain uses are restricted, are the following:

  • biometric identification and categorization
  • management and operation of critical infrastructure
  • educational and vocational training
  • employment, worker management, and access to self-employment
  • access to and enjoyment of essential services and benefits
  • law enforcement
  • migration, asylum and border management
  • administration of justice and democracy

A common thread across the systems considered high-risk is that they make decisions which significantly affect the lives of citizens. The full list of high-risk systems is two pages long and can be read in Annex III.

After the law is passed, the Commission can add new uses of AI that must be approved, as long as they fall under one of the existing high-risk areas.

For a high-risk system to be approved the provider must submit detailed technical documentation for the system.¹ Requirements for technical documentation include:

  • design specifications, key design choices, and a description of what the system is optimizing for.
  • description of any use of third-party tools.
  • description of the training data, how it has been obtained, and how it has been processed.
  • description of how the system can be monitored and controlled.
    • The system must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator (!!)
  • description of foreseeable risks the system poses to EU citizens' health, safety and fundamental rights.

In other words, the act creates a large, partially updatable list of areas where nobody may deploy AI without explicit approval, which is granted only to systems that live up to numerous safety requirements, one of which is a working off-switch. High-risk systems not only need approval but must also be continuously monitored after they are placed on the market.
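To make that requirement concrete, here is a minimal sketch of what 'operational constraints that cannot be overridden by the system itself and are responsive to the human operator' could look like in software. This is purely illustrative: the act does not prescribe any implementation, and the names here (HumanOverride, run_system, policy) are hypothetical.

```python
import threading


class HumanOverride:
    """Operator-held kill switch. The wrapped system never receives a
    reference to this object, so it has no code path to clear the flag."""

    def __init__(self) -> None:
        self._stop = threading.Event()

    def halt(self) -> None:
        """Called by the human operator, e.g. from a control console."""
        self._stop.set()

    def halted(self) -> bool:
        return self._stop.is_set()


def run_system(policy, observations, override: HumanOverride):
    """Action loop gated on the operator's switch before every step.

    `policy` is the AI system's decision function; it only ever sees
    observations, never the override object itself.
    """
    for obs in observations:
        if override.halted():  # operational constraint checked each step
            break
        yield policy(obs)


# Hypothetical usage: the operator keeps `override` and can call
# override.halt() at any time to stop the action loop.
override = HumanOverride()
actions = run_system(lambda obs: obs.upper(), ["a", "b"], override)
print(list(actions))  # -> ['A', 'B'] unless halted mid-run
```

The design point the sketch illustrates is separation of authority: the stop mechanism lives outside the system's own decision loop, so no output of the policy can disable it.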

Establishment of the European AI Board

To enforce this, the act requires new institutions to be created.

  • The European AI Board.
    • Run by the EU Commission (EU's civil service).
    • Oversees national authorities and settles disputes.
  • National supervisory authorities in every EU country.
    • Countries are free to structure AI authority as they see fit.
    • Can create regulatory sandboxes, which allow companies to deviate from the act's requirements in controlled settings.

The national supervisory authorities are responsible for approving high-risk systems and doing post-market monitoring to ensure approved systems are working as intended and pose no threat to EU citizens.

If two national supervisory authorities get into a dispute over whether a system should be approved or not, the European AI Board steps in and makes a final decision. The European AI Board is also responsible for overseeing and coordinating the national supervisory authorities.

Blanket ban of certain AI uses

Social scoring systems by governments are entirely banned.

The act also bans use of AI that "deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm".

If you find it unclear exactly which AI systems would fall under this definition, you are picking up on an important EU practice. By leaving the law deliberately vague, it becomes the job of the European Court of Justice and standardisation bodies to determine which systems fall under it when specific cases of alleged misuse are brought to court. This reflects the belief that specific definitions fall prey to loopholes, whereas the court is better able to punish only those who go against the 'spirit' of the law.

A less flattering analysis is that the ban is window dressing. The Commission has struggled to come up with any example of a banned use case that wouldn't already be considered illegal under other EU regulation.²

AI systems must make themselves known

Any AI system that interacts with humans must make it clear that it is an AI. A customer-service chatbot pretending to be a human agent will, for example, be illegal.

Users of an AI system which generates deepfakes or similar content must disclose to their audience that the content is fake.

Users of emotion recognition or biometric categorization systems must disclose this to the subjects that they are using the system on.

Why should you care about the AI Act?

The AI Act is a smoke-test for AI governance

The AI Act creates institutions responsible for monitoring high-risk systems, and possibly for broader monitoring of AI development in Europe. Doing so effectively is a monumentally difficult task that takes trial and error to get right.

If/when the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice. Other countries looking to implement similar oversight measures will also be able to learn from the AI Board's successes and failures.

The Brussels Effect

The EU AI Act is the single biggest piece of AI legislation introduced anywhere in the world so far. If history is anything to go by, there are good reasons to believe the act will influence the development of AI and AI legislation the world over.

When GDPR was introduced, it was cheaper for Microsoft to implement GDPR compliance worldwide than to create a separate European version of every service it offers. Similarly, we can expect it to be cheaper for AI developers serving the European market to make all their systems compliant with European regulation than to maintain separate versions for Europe. This phenomenon has been dubbed the Brussels Effect.

The extent to which the Brussels Effect will shape the development of transformative AI depends on how continuous AI takeoff turns out to be.

If transformative AI is brought about by a continuous stream of incremental improvements, we can expect development to be constrained by near-term profits: companies choosing to forego the European market face a competitive disadvantage. In such a world, European laws and regulations are likely to play a significant international role.

In a world where transformative AI is brought about by discontinuous jumps in capability, we are much more likely to see races between private companies and governments alike, all gunning to be first. In this world the European Union will struggle to be internationally influential.

I have written a rough analysis of why this is, which can be read here.

The AI Act lays the foundation for future AI regulation

The AI Act sets up institutions that will play an important role in all future regulation. Lawmakers around the world will draw lessons from its successes and failures. An AI Act that is a smashing success moves the Overton window and enables future regulation. The act also sets important legal precedents, for example that high-risk AI should be continuously monitored to prevent harm.

Once passed, the legislation is unlikely to see major changes or updates

Flagship EU regulations such as GDPR and REACH (chemicals regulation) tend not to see major updates even decades after being passed.

Going by historical precedent, Europe will be stuck with whatever act is passed for a while. That precedent may not be particularly applicable, though: if AI starts rapidly and visibly transforming society, I doubt the Commission will be shy about suggesting large updates to the regulation.

Responses from the EA community

The AI Act has received mixed responses within the EA community. I've summarised what I view as the main positive points emphasised by the community and the main areas in need of improvement.

Positive points often emphasized

  • The act justifies AI regulation through the need to protect citizens' health, safety and fundamental rights. This sets a fantastic precedent for future regulation.
  • The need for continuous monitoring of high-risk AI and the creation of institutions capable of doing so.
  • That high-risk systems 'must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator' (i.e. a working off-switch).

Commonly suggested improvements

  • Every definition and wording assumes that AI is specialized and narrow. Only minor changes would be needed to enable the Commission to regulate general AI systems with many intended uses.
  • The European AI Board will currently do little to explicitly monitor the progress of AI as a whole. It could be made responsible for maintaining a database of AI accidents and near misses.
  • The act only affects AI that is placed on the market. The European AI Board could be made responsible for monitoring non-market AI for industrial accidents, similar to what is done under chemicals regulation.
  • Operators and developers of high-risk systems must explicitly consider possible violations of an individual's health, safety or fundamental rights. The conformity assessment for high-risk systems could require operators and developers to also consider societal-scale consequences.

You can read the public responses from various EA and EA-adjacent organisations here:

What is next

Insofar as the AI Act matters, now is the time to act. The EA community is generally hesitant to engage directly with policy, for good reason: we barely know what good AI policy looks like, and we would prefer to wait to act until we know how to act and what the consequences would be.

But the rest of the world is not static and will adopt policy even if we would prefer to wait. The choice is not between engaging with the act now or later; it is between engaging with the act now or never.

The AI Act is not yet final, but the European Union is likely to pass some version of it in the coming years. The name of the game for EA organisations engaged with the act is generally to push for improvements similar to those suggested above, but there is much more work that can be done.

If the act is passed, EAs wanting to work with AI in the European Union should keep an eye out for new opportunities, such as working in the EU AI Board or the national supervisory authorities. This may be particularly impactful in the early years of these institutions, when their culture and practices are particularly malleable.

 

 

¹ The full list of required technical documentation can be found in Annex IV.

² Demystifying the AI Act argues this in greater detail.

Comments

Thank you for writing this summary!

I wanted to share this new website about the AI Act we have set up together with colleagues at the Future of Life Institute: https://artificialintelligenceact.eu/. You can find the main text, annexes, some analyses of the proposal, and the latest developments on the site. Feel free to get in touch if you'd like to discuss the proposal or have suggestions for the website. We'd like it to be a good resource for the general public but also for people interested in the regulation more closely. 

"If/When the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice."

It's true that setting up institutions earlier allows for more practice, and I suspect the act is probably good on the whole, but it's also worth considering potential negative aspects of setting up institutions earlier. For example:

  • potential for more institutional sclerosis
  • institutional inertia may ~lock in features now, despite having a less clear-eyed view than we'll likely have in the future

Excellent overview, and I completely agree that the AI Act is an important policy for AI governance.

One quibble: as far as I know, the Center for Data Innovation is just a lobbying group for Big Tech - I was a little surprised to see it listed in "public responses from various EA and EA Adjacent organisations".

I'm not very familiar with the Center for Data Innovation; thank you for pointing this out!

I included their response as its author is familiar with EA and the response itself is well reasoned. I also felt it would be healthy to include a perspective and set of concerns vastly different from my own, as the post is already biased by my choice of focus.

That being said, I haven't gotten the best impression from some of the Center for Data Innovation's research. As far as I can tell, their widely cited analysis projecting the act to cost €31 billion has a flaw in its methodology which results in the estimate turning out much higher than it should. In their defense, their cost analysis is also conservative in other ways, leading to a lower number than what might be reasonable.

Thank you for this! Very useful.

The AI act creates institutions responsible for monitoring high-risk systems and the monitoring of AI progress as a whole.

In what sense is the AI board (or some other institution?) responsible for monitoring AI progress as a whole?

Sorry, I should have said "monitoring AI progress in Europe as a whole", and even then I think it might be misleading.

One of the three central tasks of the AI board is to 'coordinate and contribute to guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;'

For example, if a high-risk AI system is compliant but still poses a risk, the provider is required to immediately inform the AI Board. The national supervisory authorities must also regularly report back to the AI Board about the results of their market surveillance and more.

So the AI Board gets both the mandate and the information to monitor how AI progresses in the EU. And it has to do so to carry out its tasks effectively, even if it's not directly stated anywhere that it is required to.

I hope this clears it up, I'm happy that you found the post useful!

I think this is a better link to FLI's position on the AI act: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665546_en

(The one in the post goes to their opinion on liability rules. I don't know the relationship between that and the AI act.)

Thank you for spotting that mistake. This is the position I meant to link to; I've replaced the link in the post.

The latest information as of June 2023: 

Thank you for this very useful post, it really helped me better understand the topic :)
