
Last month we held an AI Safety Debate with the UCL EA Society.

I thought I'd share a few thoughts from running the event: both about community building, because I think the event went well, and more broadly about AI Safety. Not all of these thoughts are mine: thank you to Erin and Otto for sharing theirs.

A full YouTube recording is here:


Community-Building Takes

  • Entertainment Value: Around 60–70 people attended, roughly 10x our normal attendance for a typical event. I think this is primarily because a debate is more interesting to watch than a speaker event or a workshop. Perhaps this was already obvious to others, but if you are looking for an event to reach a big audience, entertainment value is important.
  • Disagreeing about AI risk is okay: beforehand, I was concerned that the event might be overly polarising. The opposite happened – despite disagreements about 'rogue AI' scenarios, the speakers broadly agreed that AI could be transformative for humanity, that misuse risks are serious, and that regulation and evals are important. This may not have happened if the people arguing against x-risk were e/accs.
  • X-Risk sentiment in the audience: at one point in the debate, one participant asked the audience who thought AI was an existential risk. From memory, around two-thirds of students put up their hands. This shouldn't be too surprising, given that the 'public' is worried about x-risk (e.g. here). (Although, obviously, this wasn't a representative sample.)

AI Things

  • AI Ethics folks aren't aware of the common ground: at one point in the debate, the "x-risk is a distraction" argument was brought up. In response, Reuben Adams pointed out that there is potential common ground between "ethics" and "safety" concerns, through evals. This seemed to genuinely surprise the Science and Technology professor (Jack Stilgoe) who was arguing against x-risk. Perhaps this is a result of Twitter echo-chambers? Who knows.
  • (Bio) Misuse Risks were most convincing to the audience: based on conversations afterwards, this seemed like a particularly persuasive threat model. I don't think this is particularly novel: I believe bio-terror was a prominent theme in the discussion of 'catastrophic risk' at the UK AI Summit last November.

Feel free to reach out if you are a community-builder and you'd like advice on organising a similar event.

Comments



X-Risk sentiment in the audience: at one point in the debate, one participant asked the audience who thought AI was an existential risk. From memory, around two-thirds of students put up their hands.

Do you have a rough sense of how many of these had interacted with AI Safety programming/content from your group? Like, was a substantial part of the audience just members from your group who had heard EA arguments about AIS?

I’d guess fewer than a quarter of the people had engaged with AIS (e.g. read some books/articles). Perhaps a fifth had heard about EA before. Most were interested in AI, though.
