This is a special post for quick takes by Neel Nanda. Only they can create top-level comments.

In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA adjacent.

I am extremely against embezzling billions of dollars from people, and FTX was a good reminder of the importance of "don't do evil things for galaxy brained altruistic reasons". But this has nothing to do with whether or not I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good and then actually act on them". And there are many people in or influenced by the EA community who I respect and think do good and important work.

As do I, brother, thanks for this declaration! I think now might not be the worst time for those who do identify directly as EAs to say so to encourage the movement, especially some of the higher-up thought and movement leaders. I don't think a massive sign-up form or anything drastic is necessary, just a few higher-status people standing up and saying "hey, I still identify with this thing".

That is, if they think it isn't an outdated term...

I’m curious what you both think of my impression that the focus on near-term AGI has completely taken over EA and sucked most of the oxygen out of the room.

I was probably one of the first 1,000 people to express an interest in organized effective altruism, back before it was called “effective altruism”. I remember being in the Giving What We Can group on Facebook when it was just a few hundred members, when they were still working on making a website. The focus then was exclusively on global poverty.

Later, when I was involved in a student EA group from around 2015 to 2017, global poverty was still front and centre, animal welfare and vegetarianism/veganism/reducetarianism were secondary, and the conversation about AI was nipping at the margins.

Fast forward to 2025 and it seems like EA is now primarily a millennialist intellectual movement focused on AGI either causing the apocalypse or creating utopia within the next 3-10 years (with many people believing it will happen within 5 years), or possibly as long as 35 years if you’re far out on the conservative end of the spectrum.

This change has nothing to do with FTX and probably wouldn’t be a reason for anyone at Anthropic to distance themselves from EA, since Anthropic is quite boldly promoting a millennialist discourse around very near-term AGI.

But it is a reason for me not to feel an affinity with the EA movement anymore. It has fundamentally changed. It’s gone from tuberculosis to transhumanism. And that’s just not what I signed up for.

The gentle irony is that I’ve been interested in AGI, transhumanism, the Singularity, etc. for as long as I’ve been interested in effective altruism, if not a little longer. In principle, I endorse some version of many of these ideas.

But when I see the kinds of things that, for example, Dario Amodei and others at Anthropic are saying about AGI within 2 years, I feel unnerved. It feels like I’m at the boundary of the kind of ideas that it makes sense to try to argue against or rationally engage with. Because it doesn’t really feel like a real intellectual debate. It feels closer to someone experiencing some psychologically altered state, like mania or psychosis, where attempting to rationally persuade someone feels inappropriate and maybe even unkind. What do you even do in that situation?

I recently wrote here about why these super short AGI timelines make no sense to me. I read an article today that puts this into perspective. Apple is planning to eventually release a version of Siri that merges the functionality of the old, well-known version of Siri and the new soon-to-be-released version that is based on an LLM. The article says Apple originally wanted to release the merged version of Siri sooner, but now this has been delayed to 2027. Are we going to have AGI before Apple finishes upgrading Siri? These ideas don’t live in the same reality.

To put a fine point on it, I would estimate the probability of AGI being created by January 1, 2030 to be significantly less than the odds of Jill Stein winning the U.S. presidential election in 2028 as the Green Party candidate (not as the nominee of the Democratic or Republican party), which, to be clear, I think will be roughly as likely as her winning in 2024, 2020, or 2016 was. I couldn’t find any estimates of Stein’s odds of winning either the 2028 election or past elections from prediction markets or election forecast models. At one point, electionbettingodds.com gave her 0.1%, but I don’t know if they massively rounded up or if those odds were distorted by a few long-shot bets on Stein. Regardless, I think it’s safe to say the odds of AGI being developed by January 1, 2030 are significantly less than 0.1%.

If I am correct (and I regret to inform you that I am correct), then I have to imagine the credibility of EA will diminish significantly over the next 5 years. Because, unlike FTX scamming people, belief in very near-term AGI is something that many people in EA have consciously, knowingly, deliberately signed up for. Whereas many of the warning signs about FTX were initially only known to insiders, the evidence against very near-term AGI is out in the open, meaning that deciding to base the whole movement on it now is a mistake that is foreseeable and… I’m sorry to say… obvious.

I feel conflicted saying things like this because I can see how it might come across as mean and arrogant. But I don’t think it’s necessarily unkind to try to give someone a reality check under unusual, exceptional circumstances like these.

I think EA has become dangerously insular and — despite the propaganda to the contrary — does not listen to criticism. The idea that EA has abnormal or above-average openness to criticism (compared to what? the evangelical church?) seems only to serve the function of self-licensing. That is, people make token efforts at encouraging or engaging with criticism, and then, given this demonstration of their open-mindedness, become more confident in what they already believed, and feel licensed to ignore or shut down criticism in other instances.

It also bears considering what kind of criticism or differing perspectives actually get serious attention. Listening to someone who suggests that you slightly tweak your views is, from one perspective, listening to criticism, but, from another perspective, it’s two people who already agree talking to each other in an echo chamber and patting themselves on the back for being open-minded. (Is that too mean? I’m really trying not to be mean.)

On the topic of near-term AGI, I see hand-wavey dismissal of contrary views, whether they come from sources like Turing Award winner and Meta Chief AI Scientist Yann LeCun, surveys of AI experts, or superforecasters. Some people predict AGI will be created very soon and seemingly a much larger number think it will take much longer. Why believe the former and not the latter? I see people being selective in this way, but I don’t see them giving principled reasons for being selective.

Crucially, AGI forecasts are a topic where intuition plays a huge role, and where intuitions are contagious. A big part of the “evidence” for near-term AGI that people explicitly base their opinion on is what person X, Y, and Z said about when they think AGI will happen. Someone somewhere came up with the image of some people sitting in a circle just saying ever-smaller numbers to each other, back and forth. What exactly would prevent that from being the dynamic?

When it comes to listening to differing perspectives on AGI, what I have seen more often than engaging with open-mindedness and curiosity is a very unfortunate, machismo/hegemonic-masculinity-style impulse to degrade or humiliate a person for disagreeing. This is the polar opposite of “EA loves criticism”. This is trying to inflict pain on someone you see as an opponent. This is the least intellectually healthy way of engaging in discourse, besides, I guess, I don’t know, shooting someone with a gun if they disagree with you. You might as well just explicitly forbid and censor dissent.

I would like to believe that, in 5 years, the people in EA who have disagreed with me about near-term AGI will snap out of it and send me a fruit basket. But they could also do like Elon Musk, who, after predicting fully autonomous Teslas would be available in 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, and 2024, and getting it wrong 9 years in a row, now predicts fully autonomous Teslas will be available in 2025.

In principle, you could predict AGI within 5 years and just have called it a few years too soon. If you can believe in very near-term AGI today, you will probably be able to believe in very near-term AGI when 2030 rolls around, since AI capabilities will only improve.

Or they could go the Ray Kurzweil route. In 2005, Kurzweil predicted that we would have “high-resolution, full-immersion, visual-auditory virtual reality” by 2010. In 2010, when he graded his own predictions, he called this prediction “essentially correct”. This was his explanation:

The computer game industry is rapidly moving in this direction. Technologies such as Microsoft’s Kinect allows players to control a videogame without requiring controllers by detecting the player's body motions. Three-dimensional high-definition television is now available and will be used by a new generation of games that put the user in a full-immersion, high-definition, visual-auditory virtual reality environment.

Kurzweil’s gradings of his own predictions are largely like that. He finds a way to give himself a rating of “correct” or “essentially correct” even though he was fully incorrect. I wonder if Dario Amodei will do the same thing in 2030.

In 2030, there will be the option of doubling down on near-term AGI. Either the Elon Musk way — kick the can down the road — or the Ray Kurzweil way — revisionist history. And the best option will be some combination of both.

When people turn out to be wrong, it is not guaranteed to increase their humility or lead to soul searching. People can easily increase their defensiveness and their aggression toward people who disagree with them.

And, so, I don’t think merely being wrong will be enough on its own for EA to pull out of being a millennialist near-term AGI community. That can continue indefinitely even if AGI is over 100 years away. There is no guarantee that EA will self-correct in 5 years.

For these reasons, I don’t feel an affinity toward EA any more — it’s nothing like what it was 10 or 15 years ago — and I don’t feel much hope for it changing back, since I can imagine a scenario where it only gets worse 5 years from now.
