
On April 21, 2021, the European Commission announced its proposal for a European regulation of AI: the AI Act.

The proposal strives to be for AI what GDPR has been for data protection.
Over the coming years the proposal will go through multiple readings in the Parliament and Council, where modifications can be proposed to the act before it is finally adopted or dismissed.

After the AI Act was announced, most attention fell on the act's wide definition of AI and its blanket bans on 'manipulative AI'. Unfortunately this focus has led many to overlook the act's most important points.

I noticed there were no forum posts that would help someone get up to speed on the act. In this post I will summarise the act's most important points, how it may affect the development of transformative AI, and the EA community's response to the proposal.

What are the act's important points?

Below I outline what I deem the most important points of the regulation, based on their effect on the development of transformative AI. I skip lightly over many details of the act, such as regulatory sandboxes and special rules for biometric systems, to keep the summary brief.

The act will apply to all EU countries and supersede any conflicting national law. Because of the act's broad definition of AI, it will be difficult for EU countries to pass national AI laws that do not conflict with it.

The act does not apply to military uses of AI; there, countries remain free to do as they see fit.

Rules for 'high-risk' AI

The act lists a series of 'high-risk' areas. Systems operating in a high-risk area are considered high-risk and must be reviewed and approved before they can be placed on the market. Since the rules are triggered by market placement, the act's requirements will not apply to AI developed and used internally by companies. High-risk systems include everything from AI management of electricity grids to AI that determines who to promote or fire.

The areas considered high-risk, within which certain uses are restricted, are the following:

  • biometric identification and categorization
  • management and operation of critical infrastructure
  • educational and vocational training
  • employment, worker management, and access to self-employment
  • access to and enjoyment of essential services and benefits
  • law enforcement
  • migration, asylum and border management
  • administration of justice and democracy

The common thread among the systems considered high-risk is that they make decisions which significantly affect the lives of citizens. The full list of high-risk systems is two pages long and can be read in Annex III.

After the law is passed, the commission can add new uses of AI that must be approved as long as they fall under any of the existing high-risk areas.

For a high-risk system to be approved the provider must submit detailed technical documentation for the system.¹ Requirements for technical documentation include:

  • design specifications, key design choices, and a description of what the system is optimizing for.
  • description of any use of third-party tools.
  • description of the training data, how it was obtained, and how it was processed.
  • how the system can be monitored and controlled.
    • The system must have in-built operational constraints that cannot be overridden by the system itself and that are responsive to the human operator (!!)
  • description of foreseeable risks the system poses to EU citizens' health, safety, and fundamental rights.

In other words, the act creates a large, (partially) updatable list of areas where nobody may deploy AI without explicit approval, which is only granted by meeting numerous safety requirements, one of which is a working off-switch. High-risk systems not only need approval, but must also be continuously monitored after they are placed on the market.
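To make the off-switch requirement concrete, here is a minimal illustrative sketch of the pattern the act seems to demand. This is my own illustration, not anything from the act's text; the names (`OperatorKillSwitch`, `run_step`) are hypothetical. The key design property: the halt flag is writable only through the human operator's channel, and the system's own code path has no way to clear it.

```python
import threading

class OperatorKillSwitch:
    """A halt flag set only via the human operator's channel.
    Nothing in the serving loop below can clear it."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def halt(self) -> None:
        # Wired to the operator interface only, never to the model.
        self._halted.set()

    @property
    def halted(self) -> bool:
        return self._halted.is_set()


def run_step(switch: OperatorKillSwitch, action):
    """Check the operational constraint before every action;
    refuse to act once the operator has halted the system."""
    if switch.halted:
        return None
    return action()


switch = OperatorKillSwitch()
assert run_step(switch, lambda: "acted") == "acted"  # operates normally
switch.halt()                                        # human operator intervenes
assert run_step(switch, lambda: "acted") is None     # system refuses to act
```

The point of the design is that the constraint lives outside the system's own decision loop: the system can only read the flag, never write it.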

Establishment of the European AI Board

To enforce this, the act requires new institutions to be created.

  • The European AI Board.
    • Run by the EU Commission (the EU's civil service).
    • Oversees national authorities and settles disputes.
  • National supervisory authorities in every EU country.
    • Countries are free to structure AI authority as they see fit.
    • Can create regulatory sandboxes, which allow companies to override the act's regulation in controlled settings.

The national supervisory authorities are responsible for approving high-risk systems and doing post-market monitoring to ensure approved systems are working as intended and pose no threat to EU citizens.

If two national supervisory authorities get into a dispute over whether a system should be approved or not, the European AI Board steps in and makes a final decision. The European AI Board is also responsible for overseeing and coordinating the national supervisory authorities.

Blanket ban of certain AI uses

Social scoring systems by governments are entirely banned.

The act also bans the use of AI that "deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm".

If you find it unclear exactly which AI systems would fall under this definition, you are picking up on an important EU practice. By leaving the law deliberately vague, it becomes the job of the European Court of Justice and standardisation bodies to determine what systems fall under the definition when specific cases of accused misuse are brought to court. This is done in the belief that specific definitions fall prey to loopholes, whereas the court is better able to punish only those who go against the 'spirit' of the law.

A less flattering analysis is that the ban is window dressing. The commission has struggled to come up with any example of a banned use case that wouldn't already be considered illegal under other EU regulation.²

AI systems must make themselves known

Any AI system that interacts with humans must make it clear that it is an AI.
A customer-service chatbot pretending to be a human agent will, for example, be illegal.

Users of an AI system which generates deepfakes or similar content must disclose to their audience that the content is fake.

Users of emotion recognition or biometric categorization systems must disclose this to the subjects that they are using the system on.

Why should you care about the AI Act?

The AI Act is a smoke-test for AI governance

The AI Act creates institutions responsible for monitoring high-risk systems, and possibly for broader monitoring of AI development in Europe. Doing this effectively is a monumentally difficult task that takes trial and error to get right.

If/When the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice. Other countries looking to implement similar oversight measures will also be able to learn from the AI Board's successes and failures.

The Brussels Effect

The EU AI Act is the single biggest piece of AI legislation introduced to the world yet. If history is anything to go by, there are good reasons to believe the act will influence the development of AI and AI legislation the world over.

When GDPR was introduced, it was cheaper for Microsoft to simply implement GDPR worldwide than to create a separate European version of every service they offer. Similarly, we can expect it to be cheaper for AI developers serving the European market to make all their systems compliant with European regulation. This phenomenon has been dubbed the Brussels Effect.

The extent to which the Brussels Effect will shape the development of transformative AI depends on how continuous AI takeoff turns out to be.

If transformative AI is brought about by a continuous stream of incremental improvements, we can expect development to be constrained by near-term profits. Companies choosing to forego the European market would face a competitive disadvantage. In such a world, European laws and regulation are likely to play a significant international role.

In a world where transformative AI is brought about by discontinuous jumps in capability, we are much more likely to see races between private companies and governments alike, all gunning to be first. In this world the European Union will struggle to be internationally influential.

I have written a rough analysis of why this is, which can be read here.

The AI Act lays the foundation for future AI regulation

The AI Act sets up institutions that will play an important role in all future regulation. Lawmakers around the world will draw lessons from its successes and failures. An AI Act that is a smashing success moves the Overton window and enables future regulation. The act also sets important legal precedents, for example that high-risk AI should be continuously monitored to prevent harm.

Once passed, the legislation is unlikely to see major changes or updates

Flagship EU regulation such as GDPR and REACH (chemical regulation) tends not to see major updates even decades after being passed.

Going by historical precedent, Europe will be stuck with whatever act is passed for a while. That precedent may not be particularly applicable, though: if AI starts rapidly and visibly transforming society, I doubt the commission will be shy about suggesting large updates to the regulation.

Responses from the EA community

The AI Act has received mixed responses within the EA community. I've summarized what I view as the main positive points emphasized by the EA community and the main areas in need of improvement.

Positive points often emphasized

  • The act justifies AI regulation through the need to protect citizens' health, safety and fundamental rights. This sets a fantastic precedent for future regulation.
  • The need for continuous monitoring of high-risk AI and the creation of institutions capable of doing so.
  • That high-risk systems 'must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator' (i.e. a working off-switch)

Commonly suggested improvements

  • Every definition and every wording assumes that AI is specialized and narrow. Only minor changes are needed to enable the commission to regulate general AI systems with many intended uses.
  • The European AI Board will currently do little to explicitly monitor progress of AI as a whole. The European AI Board can be made responsible for maintaining a database of AI accidents and near misses.
  • The act only affects AI that is placed on the market. The European AI Board can be made responsible for monitoring non-market AI for industrial accidents, similarly to what is done with chemical regulation.
  • Operators and developers of high-risk systems must explicitly consider possible violations of an individual's health, safety or fundamental rights. The conformity assessment for high-risk systems could require operators and developers to also consider societal-scale consequences.

You can read the public responses from various EA and EA-adjacent organisations here:

What is next

Insofar as the AI Act matters, now is the time to act. The EA community is generally hesitant to engage directly with policy, for good reason. We barely know what good AI policy looks like, and we would prefer to wait until we know how to act and what the consequences of acting will be.

But the rest of the world is not static and will adopt policy even if we would prefer to wait. The choice is not between engaging with the act now or later; it is between engaging with the act now or never.

The AI Act is not yet final, but the European Union is likely to see some version of it passed in the coming years. The name of the game for EA organisations engaged with the act is generally to push for improvements along the lines of those suggested above, but there is much more work to be done.

If the act is passed, EAs wanting to work on AI in the European Union should keep an eye out for new opportunities, such as working at the EU AI Board or the national supervisory authorities. This may be particularly impactful in the early years of these institutions, when their culture and practices are still malleable.

 

 

¹ The full list of required technical documentation can be found in Annex IV.

² Demystifying the AI act argues this in greater detail.

Comments

Thank you for writing this summary!

I wanted to share this new website about the AI Act we have set up together with colleagues at the Future of Life Institute: https://artificialintelligenceact.eu/. You can find the main text, annexes, some analyses of the proposal, and the latest developments on the site. Feel free to get in touch if you'd like to discuss the proposal or have suggestions for the website. We'd like it to be a good resource for the general public but also for people interested in the regulation more closely. 

"If/When the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice."

It's true that setting up institutions earlier allows for more practice, and I suspect the act is probably good on the whole, but it's also worth considering potential negative aspects of setting up institutions earlier. For example:

  • potential for more institutional sclerosis
  • institutional inertia may ~lock in features now, despite having a less clear-eyed view than we'll likely have in the future

Excellent overview, and I completely agree that the AI Act is an important policy for AI governance.

One quibble: as far as I know, the Center for Data Innovation is just a lobbying group for Big Tech - I was a little surprised to see it listed in "public responses from various EA and EA Adjacent organisations".

I'm not very familiar with the Center for Data Innovation, thank you for pointing this out!

I included their response because its author is familiar with EA and the response is well reasoned. I also felt it would be healthy to include a perspective and set of concerns vastly different from my own, as the post is already biased by my choice of focus.

That being said, I haven't gotten the best impression of some of the Center for Data Innovation's research. As far as I can tell, their widely cited analysis projecting the act to cost €31 billion has a flaw in its methodology that inflates the estimate. In their defense, their cost analysis is also conservative in other ways, leading to a lower number than what might be reasonable.

Thank you for this! Very useful.

The AI act creates institutions responsible for monitoring high-risk systems and the monitoring of AI progress as a whole.

In what sense is the AI board (or some other institution?) responsible for monitoring AI progress as a whole?

Sorry, I should have said "monitoring AI progress in Europe as a whole", and even then I think it might be misleading.

One of the three central tasks of the AI board is to 'coordinate and contribute to guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;'

For example, if a high-risk AI system is compliant but still poses a risk, the provider is required to immediately inform the AI Board. The national supervisory authorities must also regularly report back to the AI Board about the results of their market surveillance, among other things.

So the AI Board gets both the mandate and the information to monitor how AI progresses in the EU. And it has to do so to carry out its tasks effectively, even if it is not directly stated anywhere that it is required to.

I hope this clears it up, I'm happy that you found the post useful!

I think this is a better link to FLI's position on the AI act: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665546_en

(The one in the post goes to their opinion on liability rules. I don't know the relationship between that and the AI act.)

Thank you for spotting that mistake. This is the position I meant to link to; I've replaced the link in the post.

The latest information as of June 2023: 

Thank you for this very useful post, it really helped me better understand the topic :)
