by [anonymous]

This is a story of growing apart.

I was excited when I first discovered Effective Altruism. A community that takes responsibility seriously, wants to help, and uses reason and science to do so efficiently. I saw impressive ideas and projects aimed at valuing, protecting, and improving the well-being of all living beings.

Today, years later, that excitement has faded. Certainly, the dedicated people with great projects still exist, but they've become a less visible part of a community that has undergone notable changes in pursuit of ever-better, larger projects and greater impact:

From concrete projects on optimal resource use and policy work for structural improvements, to avoiding existential risks, and finally to research projects aimed at studying the potentially enormous effects of possible technologies on hypothetical beings. This no longer appeals to me.

Now I see a community whose commendable openness to unbiased discussion of any idea is being misused by questionable actors to platform their views. 

A movement increasingly struggling to remember that while good, implementable ideas are often underestimated and ignored by the general public, not every marginalized idea is automatically good. Openness is a virtue; being contrarian isn't necessarily one.

I observe a philosophy whose proponents in many places are no longer interested in concrete changes, but are competing to see whose vision of the future can claim the greatest longtermist significance.

This isn't to say I can't understand the underlying considerations. It's admirable to rigorously think about the consequences one must and can draw when taking moral responsibility seriously. It's equally valuable to become active and try to advance one's vision of the greatest possible impact.

However, I believe a movement that too often tries to increase the expected value of its actions by continuously trading away probability in favor of ever-greater impact loses its soul. A movement that values community building, impact multiplication, and fundraising far more highly than concrete progress risks becoming an intellectual pyramid scheme.

Again, I’m aware that concrete, impactful projects and people still exist within EA. But in the public sphere accessible to me, their influence and visibility are increasingly diminishing, while indirect high-impact approaches via highly speculative expected value calculations become more prominent and dominant. This is no longer enough for me to publicly and personally stand behind the project named Effective Altruism in its current form.

I was never particularly active in the forum, and it took years before I even created an account. Nevertheless, I always felt part of this community. That's no longer the case, which is why I'll be leaving the forum. For those present here, this won't be a significant loss, as my contributions were negligible, but for me, it's an important step.

I'll continue to donate, support effective projects with concrete goals and impacts, and try to actively shape the future positively. However, I'll no longer do this under the label of Effective Altruism.

I'm still searching for a movement that embodies the ideal of committed, concrete effective (lowercase e) altruism. I hope it exists. Good luck to those here who feel the same.


Again, I’m aware that concrete, impactful projects and people still exist within EA. But in the public sphere accessible to me, their influence and visibility are increasingly diminishing, while indirect high-impact approaches via highly speculative expected value calculations become more prominent and dominant.

This has probably been what many people experienced over the last few years, especially as the rest of the world also started getting into AI.

But I think it's possible to counteract by curating one's own "public sphere" instead.

For example, you could follow all of your favorite charities and altruistic projects on Twitter. This might be a good starting point. For inspiration, you could also check the follow lists of places like Open Phil (my employer; we follow a ton of our grantees) or CEA's "official EA" account. Throw in Dylan Matthews and Kelsey Piper while you're at it; Future Perfect publishes content across many cause areas. And finally, at the risk of sounding biased, I'll note that Alexander Berger has one of the best EA-flavored research feeds I know of.

If you mostly follow concrete, visibly impactful projects, Twitter will start throwing more of those your way. I assume you'll start seeing development economists and YIMBYs working on local policy — at least, that's what happened to me. And maybe some of those people have blogs you want to follow, or respond when you comment on their stuff, and suddenly you find yourself floating peacefully among a bunch of adjacent-to-EA communities focused on things that excite you.


The Forum also lets you filter by topic pretty aggressively, hiding or highlighting whatever tags you want. You just have to click "Customize feed" at the top of the homepage...

...and follow these instructions. (You might be familiar with this, but many Forum users aren't, so I figured I'd mention it.)


Of course, it's not essential for anyone to follow a bunch of "EA content" — your plan of donating to and supporting projects you like is a good one. But if you previously enjoyed reading the Forum, and find it annoying as of late, it may be possible to restore (or improve upon!) your earlier experience and end up with a lot of stuff to read.

I think this is a really important point. My “public sphere” of EA has very little longtermism just because of who I happen to follow / what I happen to read.

I feel exactly the same way. I love the idea of "lowercase" effective altruism. I've also dabbled with the term "evidence-based altruism", where I hope "evidence-based" signals to people that I'm not interested in hypothetical/rationalist/extremely philosophical issues.

Mjreard

I gave this post a strong downvote because it merely restates some commonly held conclusions without speaking directly to the evidence or experience that supports those conclusions.

I think the value of posts principally derives from their saying something new and concrete, and this post failed to do that. Anonymity contributed to the problem: at least knowing that person X with history Y held these views might have been new and useful.

It wasn't anonymous when the post went up, but it became so when the user deactivated their account.

Is that just from the tooltip? I'm not sure how anonymous posting works. It'd be interesting to learn who the author was if they didn't intend to be anonymous and if it was anyone readers would know.

I saw it when it first went up and it was nonymous, though I don't remember what the user name was.

I don't think you can post 'anonymously' in the sense of there being no account tied to your post; you always have to create an account. But you can of course use a one-off username, and even a one-off email address, if you want to. However, you can also delete your account, and then apparently all 'your' posts appear as "[anonymous]", whether you intended this or not. (In this case, it seems the original poster created the post with their usual, non-anonymous account and then deleted that account.)

You are not alone, definitely not alone.

As a community builder, I have several people telling me this on a regular basis. It's nice to be able to follow good charities on Twitter, but that doesn't make up for the direction of the funding, and therefore for which opportunities and projects actually get selected and funded. Nor does it make up for the fact that most posts on the forum are now about AI, given the sharp increase in AI-interested people (who don't necessarily have a past with EA, or with altruism in the sense of giving). It doesn't make up for the fact that most people enter EA through 80k and come away feeling they have to work on AI to be impactful, given its priorities, or for the fact that your chances of being coached by 80k are much greater if you want to work on longtermist causes.

There really is a turning point in the movement: few actors are pushing back, there is no real counter-movement, and most people in power do not speak up against it, even though they might hold a more nuanced view on funding distribution than what is actually happening.

Maybe this will be one of those cases where the audience of a community changes completely, and it thereby becomes a different organization. It makes me very sad; there is no replacement for EA. No, global aid economics is not the same as 'GH' (global health) within EA. No, animal-welfare parties cannot replace the work done by some EA orgs. It's a question for all of us: will we silently abide and passively go along with the movement, whatever it becomes, or will we simply have to exit EA? The latter is already happening a lot.

I wish you well. I would love it if you let me know what you find beyond this and what you learn. I'm sad that this place wasn't for you, but currently we disagree on a lot. I hope things become clearer. 

May you be more well and do more good. 
