I lead Effective Altruism Lund in southern Sweden, while wrapping up my M.Sc. in Engineering Physics specializing in machine learning. I'm a social team player who likes high ceilings and big picture work. Scared of AI, intrigued by biorisk, hopeful about animal welfare.
My interests outside of EA, in hieroglyphs: 🎸🧙🏼♂️🌐💪🏼🎉📚👾🎮✍🏼🛹
The first quote you mention sounds more like a dog whistle to me. I actually think it's great if we can "weaponize capitalist engines" against the world's most pressing problems. But if you hate capitalism, it sounds insidious.
The rest I agree is uncharitable. Like, surely you wouldn't come out of the shallow pond feeling moral guilt; you'd be ecstatic that you just saved a child! To me, Singer's thought experiment always implied I should feel the same way about donations.
EA’s goal is impact, not growth for its own sake. Because cost-effectiveness can vary by 100x or more, shifting one person’s career from a typical path to a highly impactful one is equivalent to adding a hundred contributors. I agree with the EA stance that the former is often more feasible.
This doesn’t fully address why we maintain a soft tone outwardly, but it does imply we could afford to be a bit less soft inwardly. I predict that SMA will surpass EA in numbers, while EA will be ahead of SMA in terms of impact.
As a community builder, I've started donating directly to my local EA group—and I encourage you to consider doing the same.
Managing budgets and navigating inflexible grant applications consume valuable time and energy that could otherwise be spent directly fostering impactful community engagement. As someone deeply involved, I possess unique insights into what our group specifically needs, how to effectively meet those needs, and what actions are most conducive to achieving genuine impact.
Of course, seeking funding from organizations like OpenPhil remains highly valuable—they've dedicated extensive thought to effective community building. Yet, don't underestimate the power and efficiency of utilizing your intimate knowledge of your group's immediate requirements.
Your direct donations can streamline processes, empower quick responses to pressing needs, and ultimately enhance the impact of your local EA community.
I'm sad to hear that you'd feel manipulated by my reply to the QALY-doubting response, but I'm very happy and thankful to get the feedback! We do want to show that EA has some useful tools and conclusions, while also being honest and open about what's still being worked on. I'll take this to heart.
I feel the need to clarify that none of these responses are meant to be "sales-y" or to trick people into joining a movement that doesn't align with their values. My reply was based more on the idea that we need more skeptics. If they have epistemic (as opposed to ethical) objections, I think it's particularly important to signal that they're invited. My condolences for having gotten such awful advice from whatever organization it was, but that's not how we do things at EA Lund.
For a more realistic example, I talked to one person who said they'd split their focus between homelessness in their own city and homelessness in Rwanda, because it would be unfair not to divide the resources. They're not doing the most good, because they find it more ethical to divide their resources.
So I think your professor's description is good, but I'm not sure it helps discuss egalitarianism/prioritarianism with laymen in their terms. When I say I'd give everything to Rwanda, I'm answering "what does the most good?" and not "what's the most fair/just?" Nonetheless I'll consider raising this response next time the objection comes up.
That's a mistake, thanks for pointing it out! That final sentence wasn't meant to stay in. That is, I think institutional trust is part of the trunk and not the branches.
I agree with your side point that there are some ideas & tools within EA that many would find useful even while rejecting all of the EA institutions.
I'm sorry if the title was misleading, that was not my intention. I think you and I have different views on the average forum user's population ethics. If I believed that more people reading this had a totalist (or similar) view, I would have been much more up front about my take not being valid for them. Believing the opposite, I put the conclusion you'd get from non-person-affecting views as a caveat instead.
That aside, I'd be happy to see the general discourse spell out more that population ethics is a crux for x-risks. I've only gotten - and probably at some points given - the impression that x-risks are similarly important to other cause areas under all population ethics. This runs the risk of baiting people into working on things they logically shouldn't believe to be the most pressing problem.
On a personal note, I concede that extinction is much worse than 10 billion humans dying. This is, however, for non-quantitative reasons. Tegmark has said something along the lines of a universe without sapience being terribly boring, and that weighs quite heavily in my judgement of the disutility of extinction.
I'm interested in hearing more about the cases you found for and against EA ideas/arguments applying without utilitarianism. I personally am very much consequentialist but not necessarily fully utilitarian, so curious both for myself and as a community builder. I'm not a philosopher so my footing is probably much less certain than yours.