I live for a high disagree-to-upvote ratio
I asked this in-person, but I figure it’d be nice for a broader audience to hear: How should I navigate pledging if I have taken a very low salary to do direct work? In my case, I have taken a salary that roughly covers my expenses without leaving much margin for error. Of course, ‘my expenses’ buries the lede a little bit, because I believe I could make more sacrifices to take 10% off the top, but I think doing so might make me much more anxious or hurt my productivity.
In my case, my organisation doesn’t really have much more budget to pay me; that money would be better spent elsewhere. And the market rate for my skills is much higher, even in the non-profit sector, even in India, where we operate (still probably +50% at a minimum).
If I pledged 10%, would I have to take a higher salary or donate it out of my existing salary? Or is there another way to account for this?
I agree with you that EA often implicitly endorses conclusions, and that this can be pernicious and sometimes confusing to newcomers. Here’s a really interesting debate on whether biodiversity loss should be an EA cause area, for example.
A lot of forms of global utilitarianism do seem to converge on the ‘big 3’ cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like ‘saving lives’ or ‘reducing suffering’, you’ll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about the moral value of non-humans, or tractability, rather than on differences in the outcomes they value). From this perspective, it could be reasonable to dismiss cause areas that don’t fit into that value framework.
But this highlights where I think part of the problem lies: value systems outside that framework can also be good targets for effective altruism. If you value biodiversity for its own sake, it’s not unreasonable to ask ‘how can we save the greatest number of valuable species from going extinct?’. Or you might be a utilitarian but only interested in a highly specific outcome, and ask ‘how can I prevent the most deaths from suicide?’. Or ‘how can I prevent the most suffering in my country?’, which you might ask not for value-system reasons at all, but because you have tax credits to maximise!
I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are a bit too dismissive, and I think we’ve probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers and not spending too much time overthinking your value system (I feel the same!).
Similarly, I wonder if one of the major activities this group could do together is joint funding, possibly by forming a funding circle. When I was EtG, I just donated broadly to GiveWell Top Charities because I found cause selection overwhelming, but a community of similar funders with some hobbyist-type research into causes and charities might’ve engaged me more.
I hate to keep riding this hobby horse, but I wish that questions about mental health as an EA cause area would distinguish between mental health as a global health problem and mental health for EAs or other capacity-building purposes (or, failing that, leave mental health out entirely). Conflating the two in the same question without a clear disambiguation, especially around prioritisation, makes the data nearly useless, because I can’t tell which interpretation a respondent had in mind. (I hope it’s not too late to add a clarification now?)
I think most of the article is pretty stock-standard, but I did want to offer a novel angle for replying to these kinds of critiques if you see them around:
When Notre Dame caught on fire in 2019, affluent people in France rushed to donate to repair the cathedral, a beloved national landmark. Mr. Singer wrote an essay questioning the donations, asking: How many lives could have been saved with the charitable funds devoted to repairing this landmark? This was when a critique of effective altruism crystallized for Ms. Schiller. “He’s asking the wrong question,” she recalled thinking at the time. She wanted to know: How could anyone put a numerical value on a holy space?
Ms. Schiller had first become uncomfortable with effective altruism while working as a fund-raising consultant. She encountered donors who told her, effectively, “I’m looking for the best bang for my buck.” They just wanted to know their money was well spent. That made sense, though Ms. Schiller couldn’t help but feel there was something missing in this approach. It turned the search for a charitable cause into an exercise of bargain hunting.
The school of philanthropy that Ms. Schiller now proposes focuses on “magnificence.” In studying the literal meaning of philanthropy — “love of humanity” in Greek — she decided we need charitable causes that make people’s lives feel meaningful, radiant, sacred. Think nature conservancies, cultural centers and places of worship. These are institutions that lend life its texture and color, and not just bare bones existence.
I’d humbly propose that, without good guardrails, this kind of thinking has a good shot at turning racist or Anglo-centric. It’s notable, of course, that the article mentions Notre Dame, and not the ongoing destruction of religious history in Gaza or Syria or Afghanistan or Sudan or Ukraine (for example). If critics of EA don’t examine their own biases about what constitutes ‘magnificence’, they risk contributing to worldviews they probably abhor. Moreover, in many of these cases, such fundraisers pay for projects that should be, and usually otherwise would be, funded by government.
If you value civic life and culture, but only contribute to your local, Western civic life and culture, then you are a schmuck and have been taken advantage of by politicians who want to cut taxes for the wealthy. Please, at least direct your giving outward.
I am seeing here that they already work closely with Open Philanthropy and were involved in drafting the Executive Order on AI. So this does not seem like a neglected avenue.