
On 21 April 2021, the European Commission announced its proposal for European regulation of AI.

The proposal strives to be for AI what GDPR has been for data protection.
Over the coming years the proposal will go through multiple readings in the Parliament and Council, where modifications can be proposed to the act before it is finally adopted or dismissed.

After the AI Act was announced, most attention fell on the act's wide definition of AI and its blanket ban on 'manipulative AI'. Unfortunately, this focus has led many to overlook the act's most important points.

I noticed there were no forum posts that would help someone get up to speed on the act. In this post I summarise the act's most important points, how it may affect the development of transformative AI, and the EA community's response to the proposal.

What are the act's important points?

Below I outline what I deem the most important points of the regulation, based on their effect on the development of transformative AI. I skip lightly over many details of the act, such as regulatory sandboxes and special rules for biometric systems, to keep the summary brief.

The act will apply to all EU countries and supersede any conflicting national law. Because of the act's broad definition of AI, it will be difficult for any EU country to make laws on AI that do not conflict with it.

It does not apply to military use of AI. Here countries are free to do as they see fit.

Rules for 'high-risk' AI

The Act lists a series of 'high-risk' areas. Systems operating in a high-risk area are considered high-risk and must be reviewed and approved before they can be placed on the market. This means that the AI Act's regulation will not apply to AI developed and used internally by companies. High-risk systems include everything from AI management of electricity grids, to AI that determines who to promote or fire.

The areas considered high-risk, within which certain uses are restricted, are the following:

  • biometric identification and categorization
  • management and operation of critical infrastructure
  • educational and vocational training
  • employment, worker management, and access to self-employment
  • access to and enjoyment of essential services and benefits
  • law enforcement
  • migration, asylum and border management
  • administration of justice and democracy

A common thread across the systems considered high-risk is that they make decisions which significantly affect the lives of citizens. The full list of high-risk systems is two pages long and can be read in Annex III.

After the law is passed, the commission can add new uses of AI that must be approved as long as they fall under any of the existing high-risk areas.

For a high-risk system to be approved the provider must submit detailed technical documentation for the system.¹ Requirements for technical documentation include:

  • design specifications, key design choices, and a description of what the system is optimizing for.
  • description of any use of third-party tools.
  • description of the training data, how it has been obtained, and how it has been processed.
  • description of how the system can be monitored and controlled.
    • The system must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator (!!)
  • description of foreseeable risks the system poses to EU citizens' health, safety, and fundamental rights.

In other words, the act creates a large, partially updatable list of areas where nobody is allowed to deploy AI without explicit approval, which is granted only to systems that live up to numerous safety requirements, one of which is a working off-switch. High-risk systems not only need approval but must also be continuously monitored after they are placed on the market.

Establishment of the European AI Board

To enforce this, the act requires new institutions to be created.

  • The European AI Board.
    • Run by the EU Commission (EU's civil service).
    • Oversees national authorities and settles disputes.
  • National supervisory authorities in every EU country.
    • Countries are free to structure AI authority as they see fit.
    • Can create regulatory sandboxes, which allow companies to deviate from the act's requirements in controlled settings.

The national supervisory authorities are responsible for approving high-risk systems and doing post-market monitoring to ensure approved systems are working as intended and pose no threat to EU citizens.

If two national supervisory authorities get into a dispute over whether a system should be approved or not, the European AI Board steps in and makes a final decision. The European AI Board is also responsible for overseeing and coordinating the national supervisory authorities.

Blanket ban of certain AI uses

Social scoring systems by governments are entirely banned.

The EU Act also bans use of AI that "deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm"

If you are finding it unclear exactly which AI systems would fall under this definition, you are picking up on an important EU practice. By leaving the law deliberately vague, it becomes the job of the European Court of Justice and standardisation bodies to determine which systems fall under the definition when specific cases of accused misuse are brought to court. This is done in the belief that specific definitions fall prey to loopholes, whereas the court is better able to punish only those who go against the 'spirit' of the law.

A less flattering analysis is that the ban is window dressing. The commission has struggled to come up with any example of a banned use case that wouldn't already be considered illegal under other EU regulation.²

AI systems must make themselves known

Any AI system that interacts with humans must make it clear that it is an AI.
A customer-service chatbot pretending to be a human agent will, for example, be illegal.

Users of an AI system which generates deepfakes or similar content must disclose to their audience that the content is fake.

Users of emotion recognition or biometric categorization systems must disclose this to the subjects that they are using the system on.

Why should you care about the AI Act?

The AI Act is a smoke-test for AI governance

The AI act creates institutions responsible for monitoring high-risk systems and possibly broader monitoring of AI development in Europe. Doing so effectively is a monumentally difficult task that takes trial and error to do well.

If/When the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice. Other countries looking to implement similar oversight measures will also be able to learn from the AI Board's successes and failures.

The Brussels Effect

The EU AI Act is the single biggest piece of AI legislation introduced to the world yet. If history is anything to go by, there are good reasons to believe the act will influence the development of AI and AI legislation the world over.

When GDPR was introduced it was cheaper for Microsoft to just implement GDPR worldwide than to create a separate European version of every service they offer. Similarly we can expect it to be cheaper for AI developers serving the European market to ensure all systems are developed to be compliant with European regulation. This phenomenon has been dubbed the Brussels Effect.

The extent to which the Brussels Effect will affect the development of transformative AI is conditional on the continuity of AI takeoffs.

If transformative AI is brought about by a continuous stream of incremental improvements, we can expect development to be constrained by near-term profits: companies choosing to forego the European market face a competitive disadvantage. In such a world, European laws and regulations are likely to play a significant international role.

In a world where transformative AI is brought about by discontinuous jumps in capability, we are much more likely to see races between private companies and governments alike all gunning to be first. In this world the European Union will struggle to be internationally influential.

I have written a rough analysis of why this is, which can be read here.

The AI Act lays the foundation for future AI regulation

The AI act sets up institutions that will play an important role in all future regulation. Lawmakers around the world will draw lessons from its successes and failures. An AI act that is a smashing success moves the Overton window and enables future regulation. The act also sets important legal precedents, for example that high-risk AI should be continuously monitored to prevent harm.

Once passed the legislation is unlikely to see major changes or updates

Flagship regulation of the EU such as GDPR and REACH (chemical regulation) tend not to see major updates even decades after having been passed.

Going by historical precedent, Europe will be stuck with whatever act is passed for a while. That precedent may not be particularly applicable here, though: if AI starts rapidly and visibly transforming society, I doubt the commission will be shy about suggesting large updates to the regulation.

Responses from the EA community

The AI Act has received mixed responses within the EA community. I've summarized what I view as the main positive points emphasized by the EA community and the main areas that need improvements.

Positive points often emphasized

  • The act justifies AI regulation through the need to protect citizens' health, safety and fundamental rights. This sets a fantastic precedent for future regulation.
  • The need for continuous monitoring of high-risk AI and the creation of institutions capable of doing so.
  • That high-risk systems 'must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator' (i.e. a working off-switch).

Commonly suggested improvements

  • Every definition and every wording assumes that AI is specialized and narrow. Only minor changes are needed to enable the commission to regulate general AI systems with many intended uses.
  • The European AI Board will currently do little to explicitly monitor progress of AI as a whole. The European AI Board can be made responsible for maintaining a database of AI accidents and near misses.
  • The act only affects AI that is placed on the market. The European AI Board can be made responsible for monitoring non-market AI for industrial accidents, similarly to what is done with chemical regulation.
  • Operators and developers of high-risk systems must explicitly consider possible violations of an individual's health, safety or fundamental rights. The conformity assessment for high-risk systems could require operators and developers to also consider societal-scale consequences.

You can read the public responses from various EA and EA Adjacent organisations here:

What is next

Insofar as the AI Act matters, now is the time to act. The EA community is generally hesitant to engage directly with policy, for good reason: we barely know what good AI policy looks like, and we would prefer to wait until we know how to act and what the consequences of acting would be.

But the rest of the world is not static and will adopt policy even if we would prefer to wait. The choice is not to engage with the act now or later, it is to engage with the act now or never.

The AI act is not yet final, but the European Union is likely to see some version of it passed in the coming years. The name of the game for EA organisations engaged with the act is generally to push for improvements similar to the ones suggested above, but there is much more work that can be done.

If the act is passed, EAs wanting to work with AI in the European Union should keep an eye out for new opportunities, such as working in the EU AI Board or the national supervisory authorities. This may be particularly impactful in the early years of these institutions, when their culture and practices are still malleable.

 

 

¹ The full list of required technical documentation can be found in Annex IV.

² Demystifying the AI Act argues this in greater detail.

Comments

Thank you for writing this summary!

I wanted to share this new website about the AI Act we have set up together with colleagues at the Future of Life Institute: https://artificialintelligenceact.eu/. You can find the main text, annexes, some analyses of the proposal, and the latest developments on the site. Feel free to get in touch if you'd like to discuss the proposal or have suggestions for the website. We'd like it to be a good resource for the general public but also for people interested in the regulation more closely. 

"If/When the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice."

It's true that setting up institutions earlier allows for more practice, and I suspect the act is probably good on the whole, but it's also worth considering potential negative aspects of setting up institutions earlier. For example:

  • potential for more institutional sclerosis
  • institutional inertia may ~lock in features now, despite having a less clear-eyed view than we'll likely have in the future

Excellent overview, and I completely agree that the AI Act is an important policy for AI governance.

One quibble: as far as I know, the Center for Data Innovation is just a lobbying group for Big Tech - I was a little surprised to see it listed in "public responses from various EA and EA Adjacent organisations".

I'm not very familiar with the Center for Data Innovation, thank you for pointing this out!

I included their response as its author is familiar with EA and well reasoned. I also felt it would be healthy to include a perspective and set of concerns vastly different from my own, as the post is already biased by my choice of focus.

That being said, I haven't gotten the best impression from some of the Center for Data Innovation's research. As far as I can tell, their widely cited analysis projecting the act to cost €31 billion has a flaw in its methodology which inflates the estimate. In their defense, their cost analysis is also conservative in other ways, leading to a lower number than what might be reasonable.

Thank you for this! Very useful.

The AI act creates institutions responsible for monitoring high-risk systems and the monitoring of AI progress as a whole.

In what sense is the AI board (or some other institution?) responsible for monitoring AI progress as a whole?

Sorry, I should have said "monitoring AI progress in Europe as a whole", and even then I think it might be misleading.

One of the three central tasks of the AI board is to 'coordinate and contribute to guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;'

For example, if a high-risk AI system is compliant but still poses a risk the provider is required to immediately inform the AI Board. The national supervisory authorities must also regularly report back to the AI Board about the results of their market surveillance and more.

So the AI Board both gets the mandate and the information to monitor how AI progresses in the EU. And they have to do so to carry out their task effectively even if it's not directly stated anywhere that they are required to do so.

I hope this clears it up, I'm happy that you found the post useful!

I think this is a better link to FLI's position on the AI act: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665546_en

(The one in the post goes to their opinion on liability rules. I don't know the relationship between that and the AI act.)

Thank you for spotting that mistake. This is the position I meant to link to; I've replaced the link in the post.

The latest information as of June 2023: 

Thank you for this very useful post, it really helped me better understand the topic :)
