This is a special post for quick takes by Aaron_Scher. Only they can create top-level comments.

Progressives might be turned off by the phrasing of EA as "helping others." Here's my understanding of why. Anecdotally, from my ongoing experience as a college student in the US, mutual aid is getting tons of support among progressives these days. Mutual aid involves members of a community asking for assistance (often monetary) from their community, and the community helping out. This is viewed as a reciprocal relationship in which different people need help with different things at different times, so you help out when you can and ask for assistance when you need it; it is also reciprocal because benefiting the community inherently benefits oneself. This model implies a level field of power among everybody in the community. Unlike charity, mutual aid relies on social relations and being in community to fight institutional and societal structures of oppression (https://ssw.uga.edu/news/article/what-is-mutual-aid-by-joel-izlar/).

"[Mutual Aid Funds] aim to create permanent systems of support and self-determination, whereas charity creates a relationship of dependency that fails to solve more permanent structural problems. Through mutual aid networks, everyone in a community can contribute their strengths, even the most vulnerable. Charity maintains the same relationships of power, while mutual aid is a system of reciprocal support." (https://williamsrecord.com/376583/opinions/mutual-aid-solidarity-not-charity/).

Within this framework, the idea of "helping people" often relies on people with power aiding the helpless, but in a way that reinforces power differences. To help somebody is to imply that they are lesser and in need of help, rather than an equal community member who is particularly hurt by the system right now. This idea also reminds people of the White Man's Burden and other examples of people claiming to help others while actually making things worse.

I could ask my more progressive friends if they think it is good to help people, and they would probably say yes – or at least I could demonstrate that they agree with me given a few minutes of conversation – but that doesn't mean they wouldn't be peeved at hearing "Effective Altruism is about using evidence and careful reasoning to help others the best we can."

I would briefly note that mutual aid is not incompatible with EA to the extent that EA is a question; however, requiring that we be in community with people in order to help them means that we are neglecting the world's poorest people who do not have access to (for example) the communities in expensive private universities.

I think many progressives and others on the left value mutual aid because they see it as more sustainable and genuine, with fewer negative strings attached. I think they are generally fine with aid and helping others as long as they can be shown good evidence that 1) the aid will not be used to prevent other positive changes (things like exchanging humanitarian aid for continued resource extraction from a region that's worth more than the total aid contributed, or pressuring/requiring a housing justice org to stop organizing tenants to stand up for their rights in exchange for more funding for their shelter initiatives); 2) the aid is administered competently, so that it doesn't get stolen by governments, wasted, or taken by other corrupt actors; and 3) the aid respects local wisdom and empowers people to have more of a say over the decisions that most affect them. Another example would be conservation efforts that kick indigenous people off their land versus ones that center their practical experience and respect their rights.

There's a big difference between donating to a food bank and creating the infrastructure for people to organize their own food bank and/or grow their own food of their choosing. The former is more narrowly focused on food security, whereas the latter fits a broader food justice or food sovereignty approach. I think both are important. Many people believe the latter kind of empowerment initiative is more sustainable in the long run and less dependent on shifts in funding, even if it's harder to set up initially. The reason being that they redistribute power, not just resources. To sum it up, something like "Give a man a fish and he will eat for a day; teach a community to fish, and give them a place to do so, and they will eat for generations."

Thanks for your response! I don't think I disagree with anything you're saying, but I definitely think it's hard. That is, the burden of proof for 1, 2, and 3 is really high in progressive circles, because the starting assumption is that charity does not do 1, 2, or 3. To this end, simplified messages are easily misinterpreted.
I really like this: "The reason being that they redistribute power, not just resources."

Yeah, when I was reading it I was thinking "these are high bars to reach," but I think they cover all the concerns I've heard. Oh, glad you liked it! I probably could have said that from the start, now that I think about it.

A Simpler Version of Pascal's Mugging

Background: I found Bostrom’s original piece (https://www.nickbostrom.com/papers/pascal.pdf) unnecessarily confusing, and numerous Fellows in the EA VP Intro Fellowship have also been confused by it. I think we can make our ideas more accessible. I wrote this in about 30 minutes, though, so it's probably not very good; I would greatly appreciate feedback on how to improve it. I also can't decide whether it would be useful to end with a "possible solutions" section, because as far as I can tell these solutions are all subject to complicated philosophical debate that goes over my head, so including it might just add confusion. It might be easiest to provide comments on the Google Doc itself (https://docs.google.com/document/d/1NLfDK7YqPGdYocxBsTX1QMldLNB4B-BvbT7sevPmzMk/edit).

Pascal is going about his day when he is approached by a mugger demanding Pascal’s wallet. Pascal refuses to give over his wallet, at which point the mugger offers the following deal:

Mugger: “Give me your wallet now, and tomorrow I will give you twice as much money as is in the wallet now.”

Pascal: “I have $100 in my wallet, but I don’t think it’s very likely you’re going to keep your promise.”

Mugger: “What do you think is the probability that I keep my promise and give you the money?”

Pascal: “Hm, maybe 1 in a million, because you might be some elaborate YouTube prankster.”

Mugger: “Okay, then you give me your $100 now, and tomorrow I will give you $200 million.”

Let’s do the math. We can calculate expected value by multiplying the value of an outcome by the probability of that outcome. The expected value of taking the deal, based on Pascal’s stated belief that the mugger will keep their word, is $200,000,000 × 1/1,000,000 = $200, whereas the expected value of not taking the deal is $100 × 1 (certainty) = $100. If Pascal is an expected-value maximizer, he should take the deal.

Maybe at this point Pascal realizes that the chance of the mugger actually having $200 million is extremely low. But this doesn’t change the conundrum, because the mugger will simply offer more money to account for the lower probability of following through. Suppose, for example, that factoring in the probability of the mugger having the money drops Pascal’s credence that the mugger follows through to one in a trillion. Then the mugger offers $200 trillion. The mugger is capitalizing on the fact that everything we know, we know with a probability less than one. We cannot be 100% certain that the mugger won’t follow through on their promise, even though we intuitively know they won’t. Extremely unlikely outcomes are still possible.
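To make the arithmetic concrete, here is a minimal Python sketch of the expected-value comparison above. The dollar amounts and probabilities are the ones from the dialogue; the function itself is just illustrative.

```python
def expected_value(payoff: float, probability: float) -> float:
    """Expected value: the payoff weighted by its probability."""
    return payoff * probability

p_mugger_pays = 1 / 1_000_000  # Pascal's stated credence

# Taking the deal: $200 million with probability one in a million.
ev_take = expected_value(200_000_000, p_mugger_pays)

# Refusing: keep the $100 with certainty.
ev_refuse = expected_value(100, 1.0)

print(f"EV(take deal) = ${ev_take:,.2f}")    # EV(take deal) = $200.00
print(f"EV(refuse)    = ${ev_refuse:,.2f}")  # EV(refuse)    = $100.00
```

No matter how small Pascal’s credence gets, the mugger can always scale the offer so that the first number exceeds the second.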

Pascal: “200 trillion dollars is too much money; in fact, I don’t think I would benefit from having any more than 10 million dollars.”

Pascal is drawing a distinction between expected value (measured in units of money) and expected utility (measured in units of happiness, satisfaction, or other things we find intrinsically valuable), but the mugger is unfazed.

Mugger: “Okay, but you do value happy days of life such that more happy days is always better than fewer happy days. It turns out that I’m a wizard, and I can grant you 200 trillion happy days of life in exchange for your wallet.”

Pascal: “It seems extremely unlikely that you’re a wizard, but I value 200 trillion happy days of life so highly that the expected utility is still positive, and greater than what I get from just keeping my $100.”

Pascal hands his wallet to the mugger but doesn’t feel very good about doing so.
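To see why Pascal’s move from money to utility doesn’t get him out of trouble, here is a hedged sketch: the $10 million satiation point and the “more happy days is always better” premise come from the dialogue, but the specific utility functions are my own illustrative assumptions.

```python
def bounded_utility(dollars: float, cap: float = 10_000_000) -> float:
    # Pascal says he wouldn't benefit from more than $10 million,
    # so utility stops growing past the cap (an assumed functional form).
    return min(dollars, cap)

def linear_utility(happy_days: float) -> float:
    # "More happy days is always better" -- no ceiling, by assumption.
    return happy_days

p = 1 / 1_000_000_000_000  # one in a trillion

# Money version: the cap means no offer, however large, beats keeping $100.
eu_take_money = p * bounded_utility(200_000_000_000_000)  # = 0.00001
eu_keep_money = bounded_utility(100)                      # = 100

# Happy-days version: with no ceiling, a large enough offer always wins.
eu_take_days = p * linear_utility(200_000_000_000_000)    # = 200 > 100
```

A bounded utility function blocks the money version of the mugging, which is why the mugger switches to something Pascal values without bound.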

So what’s the moral of this story?

- Expected value is not a perfect system for making decisions, because we all know Pascal is getting duped.
- We should be curious and careful about how we deal with low-probability events that have extremely high or low expected value (like extinction risks). Relatedly, common sense seems to suggest that spending effort on sufficiently unlikely scenarios is irrational.

Random journaling and my predictions: Pre-Retrospective on the Campus Specialist role.
Applications for the Campus Specialist role at CEA close in like 5 days. Joan Gass's talk at EAG about this was really good, and it has led many awesome, talented people to believe they should do uni group community building full time. 20-50 people are going to apply for this role, of which at least 20 would do an awesome job.

Because the role is new, CEA is going to hire like 8-12 people for it; these people are going to do great things for community building and will likely have large impacts on the EA community over the next 10 years. Many of the other applicants will feel extremely discouraged and led on. I'm not sure what they will do, but the ~10 (or more) who were great fits for the Campus Specialist program but didn't get it will do something much less impactful over the next 2 years.

I have no idea what the longer-term effects will be, but they're definitely not good. Probably some of these people will leave the EA community temporarily because they are confused, discouraged, and don't think their skill set fits well with what employers in the EA community care about right now.

This is avoidable if CEA expands both the number of people they hire and the system for organizing this role. I think the strongest argument against doing so is that the role is fairly experimental and we don't know how it will work out. I think the upside of having more people in this role totally overshadows the downsides, which seem to be mainly money (as long as you hire competent, agentic people). The role description suggests an impact of counterfactually moving ~10 people per year into high-impact careers. I think even if the number were only 5, this role would be well worth it, and my guess is that the next 10 best applicants would still have such an effect (even at less prestigious universities).

Disclaimer: I have no insider knowledge. I am applying for the Campus Specialist role (and therefore have a personal preference for more people getting the job). I think there is about a 2/3 chance of most of the above problems occurring, and I'm least confident about paragraph 3 (what the people who don't get the role will do instead).

The other people who were good fits but weren't hired might do something less impactful over the next two years, but I think it's still unclear whether their career will be less impactful in the longer term. There are lots of jobs with quality training and management that could teach you a lot in the two years you would've been a campus specialist. I would encourage everyone who's applying to be a campus specialist to also apply to some of those jobs, and think carefully about which to pick if offered both.

Some things you could try:

- Testing your fit for a policy/politics career

- Learning the skills you'd need to help run a new EA megacharity

- Working or volunteering as a community organizer

Yes, I agree that this is unclear. Depending on AI timelines, the long-term might not matter too much. To add to your list:

- What do you or others view as talent/skill gaps in the EA community; how can you build those skills/talents in a job that you're more likely to get? (I'm thinking person/project management, good mentoring, marketing skills, as a couple examples)

Thanks for posting this, Aaron! I'm also applying to the role, and your thoughts are extremely well-put and on the mark. 

20-50 people are going to apply for this role, of which at least 20 would do an awesome job. 

I think we have two disagreements here. 

  1. My thought is that over 50 people are going to apply (my expectation is 65+); perhaps this doesn't matter too much (quite a few disappointed people regardless), and I don't think either of us has particularly good evidence for this.
  2. I'm uncertain as to whether 40% (assuming your prediction of 50 applications) would do an "awesome" job. 'Awesome' needs to be defined further here, but, without going into the weeds, I think that a recently graduated person with fleshed-out entrepreneurial aptitude + charisma + a deep understanding of EA is extremely rare (see Alex HT's post).

More on the 2nd thought: I'd reckon (high uncertainty) that CEA may struggle to find more than ~12 people like this. This does not imply that there are not far more than 12 people qualified for the job. The primary reasons I think this are: a) the short application timeline; b) my uncertainty about the degree of headhunting that's gone on; and c) the fact that a lot of the best community builders I know (a limited dataset, however) already have jobs lined up. All of this depends on who is graduating this year and who is applying, of course.

Hey Ed, thanks for your response. I have no disagreement on 1 because I have no clue what the upper end of applications is – simply that it's much higher than the number who will be accepted and the number of people who (I think) would do a good job.

2. I think we do disagree here. I think these qualities are relatively common among the community builders and group organizers I know (small sample). I agree that the short application timeline will decrease the number of great applicants; I'm also unsure about b, and c seems like the biggest factor to me.

Probably the crux here is what proportion of applicants have the skills you mention; my guess is ⅓ to ⅔, but this is based on the people I know, which may skew higher than reality.

Awesome - thanks for the response. Yes, I agree with the crux (this also may come from different conceptions of the skills themselves). I'll message you!

Hey I applied too! Hopefully at least one of us gets it. I think they probably got more than 50 applications, so it almost starts to become a lottery at that point if they only have a few spots and everyone seems like they could do it well. Or maybe that's just easier for me to think haha. 

I think conceptualizing job hunts like this for very competitive positions is often accurate and healthy fwiw
