Emotional Status: I started working on this project before the FTX collapse and all the subsequent controversies and drama. I notice an internal sense that I am "piling on" or "kicking EA while it's down." This isn't my intention, and I understand if a person reading this feels burned out on EA criticisms and would rather focus on object-level forum posts right now.

 I have just released the first three episodes of a new interview podcast on criticisms of EA:

  1. Democratizing Risk and EA with Carla Zoe Cremer and Luke Kemp
  2. Expected Value and Critical Rationalism with Vaden Masrani and Ben Chugg
  3. Is EA an Ideology? with James Fodor

I am in the process of contacting potential guests for future episodes, and would love any suggestions on who I should interview next. Here is an anonymous feedback form that you can use to tell me anything you don't want to write in a comment.

Some quick thoughts, poorly structured:

  • I like seeing more attempts at understanding “EA Critiques” / ways of improving EA.
  • I think the timing that this is being released is inconvenient, but I don’t blame you.
  • Personally, I feel exhausted by the last few months of what felt like a firestorm of angry criticism. Much of it, mainly from the media and Twitter, felt very antagonistic and in poor taste. At the same time, I think our movement has a whole lot of improvement to do.
  • As with all critiques, I am emotionally nervous about it being used as "cheap ammunition" by groups that just want to hate on EA.
  • Personally, I very much side with James already on the Ideology question. I think Helen's post was pretty bad. I'm not sure how much Helen's post represents "core EA understanding", and as such, the attack on it feels a bit less like "EA criticism" and more like "regular forum content". However, this might well be nitpicking. I listened to around half of this so far and found it reasonable (as expected, as I also agreed with the blog post).
  • I think issues around critique can still be really valuable. But I also think they (unfortunately) need to be handled more carefully than some other stuff we do. I’ll see about writing more about this later.
  • My guess is that 70%+ of critiques are pretty bad (as is the case for most fields). I’d likewise be curious about your ability to push back on the bad stuff, or maybe better, to draw out information to highlight potential issues. Frustratingly though, I imagine people will join your podcast and share things in inverse proportion to how much you call them out. (This is a big challenge podcasts have)
  • I suggest monitoring Twitter. If people do take parts of your podcast out of context and do bad things with them, keep an eye out and try to clarify things.
  • Good luck!

After listening to the rest of that episode with James, I'll flag that while I agree that "EA is a lot like what many would call an ideology", I disagree with some of the content in the second half.

I think using tools like ethnography, agent-based modeling, and phenomenology could be neat, but to me, they're pretty low-priority improvements to EA right now. I'd imagine it could take some serious effort and funding ($200k? $300k? Someone strong would have to come along with a proposal first) to produce something that really changes decision-making, and I can think of other things I'd prefer that money be spent on.

There seems to be an assumption that the reason such actions weren't taken by EA is that EAs weren't familiar with these ideas or didn't read James's post. I think a more likely reason is often just that it's a lot of work to do things, we have limited resources, and we have a lot of other really important initiatives. Often decision makers have a decent sense of a lot of potential actions, and have decided against them for decent reasons.

Similarly, I don't feel like the argument brought forth against the use of the word "aligned" when discussing a person was very useful. In that case I would have liked you to try to really pin down what a good solution would look like. I think it's really easy to err on the side of "overfit on specific background beliefs" or "underfit on specific background beliefs", and tricky to strike a balance.

My impression is that critics of "EA Orthodoxy" basically always have some orthodoxy of their own. As an extreme example, I imagine few would say we should openly welcome Nazi sympathizers. If they really have no orthodoxy, and are okay with absolutely any position, I'd find this itself an extreme and unusual position that almost all listeners would disagree with.

Thank you for both comments! :)

Personally, I feel exhausted by the last few months of what felt like a firestorm of angry criticism. Much of it, mainly from the media and Twitter, felt very antagonistic and in poor taste. At the same time, I think our movement has a whole lot of improvement to do.

I feel the same. Hopefully with this podcast I can increase the share of EA criticism that is constructive and fun to engage with.

My guess is that 70%+ of critiques are pretty bad (as is the case for most fields). I’d likewise be curious about your ability to push back on the bad stuff, or maybe better, to draw out information to highlight potential issues. Frustratingly though, I imagine people will join your podcast and share things in inverse proportion to how much you call them out. (This is a big challenge podcasts have)

I agree, although I think that some subset of the low-quality criticism can be steelmanned into valid points that may not have come up in an internal brainstorming session. And yes, I am still experimenting with how much pushback to give; the first and second episodes are quite different on that metric.

Similarly, I don't feel like the argument brought forth against the use of the word "aligned" when discussing a person was very useful. In that case I would have liked you to try to really pin down what a good solution would look like. I think it's really easy to err on the side of "overfit on specific background beliefs" or "underfit on specific background beliefs", and tricky to strike a balance.

I think this is fair, and I honestly don't have a good solution. I think the word "aligned" can point to a real and important thing in the world, but in practice it also risks just being used to point to the in-group.

Note that saying "this isn't my intention" doesn't prevent net negative effects of a theory of change from applying. Otherwise, doing good would be a lot easier. 

I also highly recommend clarifying what exactly you're criticizing, i.e. the philosophy, the movement's norms, or some institutions that are core to the movement.

Finally, I usually find the criticism from people a) at the core of the movement and b) highly truth-seeking the most relevant for improving the movement, so if you're trying to improve the movement, you may want to focus on these people. There exist relevant criticisms external to the movement, but they usually lack context and thus fail to address some key trade-offs that the movement cares about.

Here's a small list of people I would be excited to hear on EA flaws and their recommendations for change:

  • Rob Bensinger 
  • Eli Lifland 
  • Ozzie Gooen 
  • Nuno Sempere
  • Oliver Habryka


+1 for clarification. It could be neat if you could use a standard diagram to pinpoint what sort of criticism each one is. 

For example, see this one from Astral Codex Ten. 

Thank you for your comment and especially your guest recommendations! :)

Note that saying "this isn't my intention" doesn't prevent net negative effects of a theory of change from applying. Otherwise, doing good would be a lot easier. 

I completely agree. But I still think that saying when a harm was unintentional is an important signaling mechanism. For example, if I step on your foot, saying "Sorry, that was an accident" doesn't stop you from experiencing pain but hopefully prevents us from getting into a fight. Of course it is possible for signals like this to be misused by bad actors.

I also highly recommend clarifying what exactly you're criticizing, i.e. the philosophy, the movement's norms, or some institutions that are core to the movement.

Ideally all of the above, with different episodes focusing on different aspects. Though I agree I should make the scope of the criticism clear at the beginning of each episode. I think Ozzie's comment below has a good breakdown that I may use in the future.

Hey Nick! I've listened to episodes 1 and 3 during my commute over the week so far (I'd already listened to Episode 2, as Vaden and Ben had released it on Increments a bit ahead of you), and I want to say I thought they were all really great and well presented. I hope you are considering making more, and while I get the trepidation about 'piling on', I think all of these episodes are really valuable contributions to the community.

For those reading who are maybe a bit more sceptical, I'd really urge you to listen to all three episodes. Nick is a good host, and all 3 conversations are good with no sense of a host playing 'gotcha' with their guests. James poses a good challenge to one of the Forum's most upvoted posts[1], Vaden especially poses core philosophical challenges to longtermism that haven't got a super convincing response yet imo (you may not find his challenges convincing, but they are some of the best ones posed so far)[2], and it's just worth listening to Luke and Carla on their own terms in Episode 1. Do I agree with everything they say? No. But having listened to their episode, I have no idea how the hell EA-space reacted so poorly to them the first time around. Not looking to open up the fight again[3], but if like me you've only had second-hand knowledge of the affair, I'd really suggest listening to that one and coming to your own opinion.

Finally, again to Nick: thank you for making them. I hope this inspires others to continue this good-faith engagement with critics of EA's central orthodoxy (both from within and outside of EA).

  1. Original post here, James's Forum response here
  2. A key blog post here, but it's worth reading the other posts too
  3. If you want to look, read here, especially the comments

Thank you for this super kind comment! ^_^

For me the ironic thing about critiquing current practices of EA is that it is, in itself, an act of EA.

The same can't necessarily be said for critiquing the underlying premise of EA.
