A new article in the NYT out today discusses effective giving and effective altruism at length.

Unfortunately it's pretty surface-level: rather than really examining why optimizing charity is indeed good, it restates old critiques and gives them no scrutiny. The conclusion sums up the tone and take of the article pretty well:

There’s nothing wrong with the desire to measure the value of our giving. But there’s also nothing wrong with thinking expansively about that value, or the tools for measuring it. Maybe a neighbor giving to another neighbor is what one fractured street needs. Maybe making someone else’s life magnificent is hard to price.


I’m quite disappointed by this article. I talked to her and tried to steer her towards more substantive and novel concerns. I know that other knowledgeable people talked to her as well. That didn’t seem to make much of an impact.

The majority of online articles about effective altruism have always been negative (it used to be 80%+). In the past, EAs were coached not to talk to journalists; perhaps the fact that people are finally reversing this is why things are getting better, so I appreciate anyone who does.

Of course there is FTX, but that doesn't explain everything: many recent articles, including this one, are mostly not about FTX. At the risk of stating the obvious, for an intelligent journalist (as many are) to write a bad critique despite talking to thoughtful people, it has to be that a negative portrayal of EA serves their agenda far better than a neutral or positive one would. Maybe that agenda is advocating for particular causes, a progressive politics that unfortunately aligns with Torres' personal vendetta, or just a deep belief that charity cannot or should not be quantified or optimized. If so, maybe there is nothing we can do except promote the ideas of beneficentrism, triage, and scope sensitivity, keep talking to journalists, and fix both the genuine and the perceived problems created by FTX, until bad critiques are no longer popular enough to succeed.

Seems a lot of it is saying “you can’t put a price on x” — and then going ahead and putting a price on x anyway by saying we should prefer to fund x over y.

In her book, Ms. Schiller ties her criticism of effective altruism to broader questions about optimization, writing: “At a time when we are under enormous pressure to optimize our time, be maximally productive, hustle and stay healthy (so we can keep hustling), we need philanthropy to make pleasure, splendor and abundance available for everyone.”

Her conception of the good can include magnificence and meaning and abundance. But how can we make that available for everyone without the kinds of reasoning decried as ‘optimization’?

I feel like the people saying “you can’t put a price on a beautiful holy site” are trying to avoid saying “you can, and the holy site is worth more than the lives the money could have saved”. It’s not impossible that Notre Dame is worth the lives unsaved (with its millions of visitors a year), but it is impossible to refute the claim unless they are honest about how they’re valuing it.
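To make that concrete, here is a minimal back-of-envelope sketch (mine, not the commenter's or the article's) of what being honest about the valuation could look like. Every figure below (the donation total, the cost to save a life, the visitor count, the time horizon) is a rough assumption for illustration only:

```python
# Back-of-envelope only: every number here is an assumed, illustrative figure,
# not data from the NYT article, Singer, or the commenter above.
donations_usd = 9e8          # assumed: rough scale of what was pledged for the restoration
cost_per_life_usd = 5_000    # assumed: an often-cited cost to save a life via top global health charities
visitors_per_year = 12e6     # assumed: rough annual visitor count
horizon_years = 50           # assumed: how long we credit the restored cathedral

lives_forgone = donations_usd / cost_per_life_usd
value_needed_per_visit = donations_usd / (visitors_per_year * horizon_years)

print(f"Lives the same money could plausibly have saved: ~{lives_forgone:,.0f}")
print(f"Value each visit would need to create just to match the dollar cost: ~${value_needed_per_visit:.2f}")
```

The specific numbers matter less than the fact that, once they are written down, the trade-off can actually be disputed.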

It seems they’re missing the mood that our problems are larger than the resources we have to fix them, and so they end up advocating for not facing the uncomfortable triage questions.

(My comments inspired by / plagiarised from https://x.com/trevposts/status/1865495961612542233 )

Here are the three most popular comments as of now. One, "giving to effective charities can create poverty in the form of exploited charity workers":

I’ve worked for a non-profit in the past at an unlivable wage. One of my concerns when I am looking at charities to give to and hearing that we need to give only to those that are most efficient, is that we are creating more poverty by paying the workers at some charities wages that they can’t live on.

Two, "US charities exist because the rich aren't taxed enough":

Our whole system of charity in the US has developed because the wealthy aren’t taxed enough, and hence our government doesn’t do enough. Allowing the rich to keep so much wealth means we don’t have enough national or state level funding for food, housing, healthcare, or education. We also don’t have adequate government programs to protect the environment, conduct scientific research, and support art and culture. I’m deluged every day by mail from dozens of organizations trying to fill these gaps. But their efforts will never have the impact that well planned longterm government action could.

Three, "I just tip generously":

Lately I’ve been in the mindset of giving money to anyone who clearly has less than me when I have the opportunity. This mostly means extra generous tipping (when I know the tips go to the workers and not a corporation). Definitely not efficient, but hopefully makes a tiny difference.

These just seem really weak to me. What other options did the underpaid charity workers have, that were presumably worse than working for the charity? Even if the US taxed the rich very heavily, there would still be lots of great giving opportunities (e.g., to help people in other countries, and to help animals everywhere). Tipping generously is sort of admirable, but if it's admittedly inefficient, why not do the better thing instead? I guess these comments just illustrate that there is a lot of room for the core ideas of effective altruism (and basic instrumental rationality) to gain wider adoption.

Overall, not one of the stronger critiques that I've read.

The "how could anyone put a numerical value on a holy space" snippet struck me. I'm no expert in measurement, but the answer to this question seems to be similar to "how do you measure how extraverted a person is?" or "how do you measure the sum total of all economic activity in a country?" or "how do you measure media censorship?" The answer is that you do it carefully, with the use of tools/assessments, proxies, parametric estimating, etc.

There is plenty of research that basically involves asking people "Would you rather have A or B," and with clever research design you really can measure how much people value various intangible things.[1] And I don't even study or specialize in that area. So it struck me as odd to have such an established set of solutions which weren't even mentioned. How to Measure Anything is great, but there is also lots written about willingness to pay.

  1. ^

    For anyone not familiar with that kind of research, a simplistic version would basically be asking people "Would you rather have an extra $100 each week or have a local art museum," and by varying the numbers you can get an idea of what dollar value people put on that specific experience. For anyone familiar with the research, please forgive me for my vast simplifications.
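As a toy illustration of that approach (entirely invented numbers, and just a sketch assuming a simple binary-choice design), here is how an implied dollar value could be backed out of simulated cash-versus-museum choices with a logistic fit:

```python
# Toy sketch of a binary-choice willingness-to-pay analysis.
# All data here is simulated; a real study would need a careful survey design.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical survey: each respondent is offered some extra weekly cash amount
# instead of keeping the local art museum, and either takes the cash or not.
cash_offered = np.tile([10, 25, 50, 75, 100, 150, 200, 300], 25).astype(float)
true_value = 120.0  # the simulated population's "true" valuation, in $/week
valuations = rng.logistic(loc=true_value, scale=30.0, size=cash_offered.size)
took_cash = (cash_offered > valuations).astype(int)

# Fit P(take the cash | amount offered) and read off the indifference point,
# i.e. the amount at which half of respondents would switch (P = 0.5).
model = LogisticRegression().fit(cash_offered.reshape(-1, 1), took_cash)
implied_value = -model.intercept_[0] / model.coef_[0, 0]
print(f"Implied value of the museum: roughly ${implied_value:.0f} per week")
```

Real willingness-to-pay studies are much more careful about framing effects, hypothetical bias and protest answers, but the basic move of inferring a value from choices is the same.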

So it struck me as odd to have such an established set of solutions which weren't even mentioned. How to Measure Anything is great, but there is also lots written about willingness to pay.

I agree the article isn't particularly deep, but the plurality of possible measures arguably supports its central argument, which appears to be that EA approaches to quantifying philanthropy aren't the be-all and end-all.[1] Willingness to pay, for example, is a measure which works against Singer's argument that money voluntarily donated to the Notre Dame roof would be better redirected to alleviating global suffering.

  1. ^

    wait until she discovers how differently some EAs quantify different types of intervention!

huw

I think most of the article is pretty stock-standard, but I did want to lay out a somewhat novel angle for replying to these kinds of critiques if you see them around:

When Notre Dame caught on fire in 2019, affluent people in France rushed to donate to repair the cathedral, a beloved national landmark. Mr. Singer wrote an essay questioning the donations, asking: How many lives could have been saved with the charitable funds devoted to repairing this landmark? This was when a critique of effective altruism crystallized for Ms. Schiller. “He’s asking the wrong question,” she recalled thinking at the time. She wanted to know: How could anyone put a numerical value on a holy space?

Ms. Schiller had first become uncomfortable with effective altruism while working as a fund-raising consultant. She encountered donors who told her, effectively, “I’m looking for the best bang for my buck.” They just wanted to know their money was well spent. That made sense, though Ms. Schiller couldn’t help but feel there was something missing in this approach. It turned the search for a charitable cause into an exercise of bargain hunting.

The school of philanthropy that Ms. Schiller now proposes focuses on “magnificence.” In studying the literal meaning of philanthropy — “love of humanity” in Greek — she decided we need charitable causes that make people’s lives feel meaningful, radiant, sacred. Think nature conservancies, cultural centers and places of worship. These are institutions that lend life its texture and color, and not just bare bones existence.

I’d humbly propose that, without good guardrails, this kind of thinking has a good shot at turning racist/anglo-centric. It’s notable, of course, that the article mentioned the Notre Dame, and not the ongoing destruction of religious history in Gaza or Syria or Afghanistan or Sudan or Ukraine (for example). If critics of EA don’t examine their own biases about what constitutes ‘magnificence’, they risk contributing to worldviews that they probably abhor. Moreover, in many of these cases, these kinds of fundraisers contribute to projects that should be—and usually otherwise would be—funded by government.

If you value civic life and culture, but only contribute to your local, Western civic life and culture, then you are a schmuck and have been taken advantage of by politicians who want to cut taxes for the wealthy. Please, at least direct your giving outward.

It’s notable, of course, that the article mentioned the Notre Dame, and not the ongoing destruction of religious history in Gaza or Syria or Afghanistan or Sudan or Ukraine (for example)

Not really. Notre Dame was mentioned because some prominent EAs have criticised its expensive restoration project as being an inappropriate use of philanthropic funding. As far as I'm aware, prominent EAs haven't devoted the same criticism to the opulence of Hindu or Buddhist monuments or attempts to protect antiquities in conflict zones, and I don't think that makes them racist or anglo-centric either.

Now, people can and do make arguments for preserving archaeological sites in poorer countries on the grounds that they are more vulnerable and less expensive to repair, which is essentially a cost-effectiveness argument. No doubt they would agree with your suggestion to direct giving outward, but I don't think that group overlaps with EAs at all. (And for those who think that rebuilding destroyed historical sites is a valid use of philanthropic funding, there are also obvious arguments that people reasonably prefer to donate to things that they can see and that can be enjoyed by millions of people over, say, the restoration of the Bamiyan Buddhas in a remote area of a war-torn country since taken over by the entity which intentionally destroyed them in the first place. Nevertheless, there were serious discussions about restoring the Bamiyan Buddhas prior to the Taliban resurgence, but I don't think EA had anything to do with any of that debate.)

Until I read this article, saw this post and read the comments on it, I kind of imagined that EAs were very similar to normal people, just a bit more altruistic and a bit more expansive and maybe a bit more thoughtful.

This post scares the hell out of me. 

This article is one of the worst articles I've ever seen in the NY Times. It is utter bullshit, but coated in meaningless, sweet-sounding words. 

This is an attack on everything that we believe in! What the hell will it take to make EAs angry if this nonsense, in probably the most famous newspaper in the world, does not?

Why do we just sit back and think "that's not a very fair analysis"?

Does nobody feel an urgent need to defend ourselves, to get on TV and radio and places other than the EA forum and explain to the world that this article totally misses the point of EA, totally mischaracterises what we're trying to achieve and why? 

If someone wrote an article about a minority group and described them with a few nasty racist stereotypes, there would be massive protests, retractions, apologies and a real effort to ensure that people were well informed about the reality.

The word "minority" is important here. If EA were the dominant mode of donating to charity, as it should be, then sure, it would be fine for someone to write that there is also value in donating to small, local charities, to challenge the status quo.

But EA represents only a small minority of donors today, so it is totally inappropriate for a journalist to pick on it. 

But what really makes my blood boil is those who were not mentioned or consulted by this sad excuse for a journalist. For example, the people who desperately need food or medicine to survive. The animals who suffer in factory farms. The people who will suffer the most from climate change.

We need to call this out for the bullshit it is. EAs believe that, when you donate, you should think a bit more about the people and animals who desperately need your help, and about what they need and how to help them, and maybe think a little bit less about the warm fuzzy feeling you get from helping someone who will thank you profusely in person.

I absolutely refuse to accept that there is something wrong with that, and I find it shocking and appalling that the NY Times would publish this article as probably the only significant piece they have run about EA since the negative coverage during the SBF affair.

At the very minimum, they have a responsibility to get their facts straight. Just read the four paragraphs where she introduces effective altruism. For her it is not a grass-roots movement; it is all about billionaires and the ultra-wealthy. This is just not true. But she doesn't even mention that 99.999% of EAs are not rich by American standards - it's just that, unlike most people, we're aware of how rich we are by global standards.

I would really hope to see a strong rebuttal submitted by someone in the EA movement. I would write it myself (and I will), but I don't think an article by me will get published in the NY Times. But there are people in the EA movement who are not millionaires but who do have the name-recognition and credibility to be listened to. This absolutely needs to happen, and fast. Maybe we could turn this negative into a positive. But giving season is already in full swing, and the people and animals who desperately depend on effective giving cannot afford to lose any of the insufficient donations they already get, even if it does mean that the local dog-shelter gets painted in bright Christmassy colours. 

For now I plan to share this on my own social media and use it as an excuse to talk about effective giving and, as a side note, to share an example of shoddy journalism. 

I upvoted this because I like the passion, and I too feel a desire to passionately defend EA and the disempowered beneficiaries EAs seek to protect, who are indirectly harmed by this kind of sloppy coverage. I do hope people respond, and I think EAs err towards being too passive about media coverage. 

But I think important parts of this take are quite wrong. 

Most people just aren't basically sympathetic to EA, let alone EAs-waiting-to-happen; they have a tangle of different moral intuitions and aren't very well-informed or thoughtful about it. Sure, they'll say they want more effective charity, but they also want to give back to their local community and follow fads and do what makes them feel good and support things that helped them in particular and keep the money for themselves and all kindsa stuff. So, I don't think this is surprising, and I think it's important for EAs to be clear-eyed about how they're different from other people.

I don't think that means EAs could never be a dominant force in philanthropy or whatever; most people throughout history didn't care about anti-racism or democracy, but they're popular now; caring about your ancestors has declined a lot; things can change. I just don't think it's inevitable or foregone (or couldn't reverse).

If someone wrote an article about a minority group and described them with a few nasty racist stereotypes, there would be massive protests, retractions, apologies and a real effort to ensure that people were well informed about the reality.

People would do this for some kinds of minorities (racial or sex/gender minorities), and for racist stereotypes. I don't think they would for people with unusual hobbies or lifestyle choices or belief sets, with stereotypes related to those things. "Not being racist", or not discriminating against some kinds of minorities, is a sacred value for much of liberal elite society, but many kinds of minorities aren't covered by that.

Crappy stereotypes are always bad, but I don't think that means that just because you're a minority you shouldn't be potentially subject to serious criticism (of course, unfortunately this criticism isn't intellectually serious). 
