Recently I ran into a volunteer for UNICEF who was gathering donations to help malnourished children. He gave me an explanation of why child wasting is a serious problem and how there are cheap ways to help children suffering from it (the UNICEF website has some information on child wasting, and specifically on the treatment of wasting using simplified approaches, in case you are interested).
Since I happen to have taken the Giving What We Can pledge and have read quite a bit on comparing charities, I asked what evidence there is comparing this intervention to, say, protecting people from malaria with bednets or giving cash directly to very poor people. The response I got was quite specific: the volunteer claimed that UNICEF can save a life with just 1€ a day over an average period of 7 months. If this claim is true, it means they can save a life for about 210€, far less than the >$3,000 that GiveWell estimates AMF needs to save one life. These numbers probably should not be compared directly, but I am still curious why there can be more than an order of magnitude between them. So, to practice my critical thinking on these kinds of questions, I made a list of possible explanations for the difference (a rough sketch of how they might compound follows the list):
- The UNICEF campaign has little room for additional funding.
- The program would be funded anyway from other sources (e.g. governments).
- The 1€/day figure might not include all the costs.
- Some of the children who receive the food supplements might die of malnutrition anyway.
- Only some of the children who receive the food supplements would have died without them.
- Children who are saved from malnutrition could still die of other causes.
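To get a rough sense of how much these factors could matter, here is a minimal back-of-the-envelope sketch in Python. Every adjustment value in it is a placeholder I made up for illustration, not an estimate from UNICEF or GiveWell:

```python
# Back-of-the-envelope adjustment of the quoted cost per life saved.
# All adjustment values below are illustrative placeholders, not real estimates.

headline_cost = 1.0 * 30 * 7  # 1 EUR/day for ~7 months ≈ 210 EUR per child treated

# Illustrative discounts corresponding to the list above:
p_counterfactual  = 0.6   # chance the treatment would NOT have been funded anyway
p_would_have_died = 0.2   # share of treated children who would have died without treatment
p_treatment_works = 0.9   # share of those children whom the treatment actually saves
p_no_other_death  = 0.9   # share who do not die of other causes shortly afterwards
overhead_factor   = 1.5   # costs not included in the 1 EUR/day figure (logistics, staff, ...)

cost_per_treatment = headline_cost * overhead_factor
lives_saved_per_treatment = (p_counterfactual * p_would_have_died
                             * p_treatment_works * p_no_other_death)

adjusted_cost_per_life = cost_per_treatment / lives_saved_per_treatment
print(f"Adjusted cost per life saved: ~{adjusted_cost_per_life:,.0f} EUR")
# With these made-up numbers, the 210 EUR headline becomes roughly 3,200 EUR per life,
# i.e. the same ballpark as GiveWell's estimate for AMF.
```

The exact numbers are beside the point; the sketch just shows that a handful of plausible discounts can easily multiply into an order-of-magnitude gap.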
Obviously I have neither the time nor the resources of GiveWell, so it is hard to determine how much each of these explanations contributes to the overall picture, or whether there are others I missed. Unfortunately, there does not seem to be much information on this question from GiveWell (or other EA organizations) either. The most I could find on the GiveWell website is this blog post on mega-charities from 2011, which argues that mega-charities like UNICEF have too many different campaigns running simultaneously and that they lack the transparency required for a proper evaluation. The first argument sounds unconvincing to me: if there are many different campaigns, can you not just evaluate the individual campaigns, or at least the most promising ones? The second point about transparency is a real problem, but there is also a risk of measurability bias if we never even consider less transparent charities.
I would very much like a more convincing argument for why these kinds of charities are not rated. If nothing else, it would be useful for discussions with people who currently donate to them, or who try to convince me to donate to them. Perhaps the reason is simply a lack of resources at GiveWell, or perhaps there is research on this that I just couldn't find. Either way, I believe the current state of affairs does not make a convincing case for why the biggest EA evaluator barely even mentions one of the largest and most respected charity organizations.
[Comment: I'm not new here, but I'm mostly a lurker on this forum. I'm open to criticism of my writing style and epistemics as long as you're kind!]
This is a great question! There is a real lack of good cost-effectiveness estimates for large multilaterals such as UNICEF. The problem is that they are extremely difficult to produce, for the reasons outlined in the GiveWell article you linked.
Different vaccine programs carried out by GAVI, for example, vary massively in cost-effectiveness. HPV vaccines don't look as cost-effective as rotavirus vaccines, so depending on where additional funding is spent, the overall cost-effectiveness will vary quite a bit!
At aidpolicy.org we have been toying with a ranking of multilaterals on $/DALY in the style of GiveWell, but not only would it be a massive undertaking, the resulting estimates would also have error bars so wide that we worry nobody would take them seriously.
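To illustrate why the error bars blow up, here is a toy Monte Carlo sketch. Both input distributions are invented for the example, not data on any real multilateral; the point is only that moderate uncertainty in total cost and in DALYs averted compounds into a very wide cost-per-DALY range.

```python
import random
import statistics

# Toy Monte Carlo sketch of a cost-per-DALY estimate with uncertain inputs.
# Both distributions are invented for illustration only.

random.seed(0)

def sample_cost_per_daly():
    # Total programme cost: centred around ~10 million USD, uncertain by ~2x either way.
    cost = random.lognormvariate(16.1, 0.5)           # ln(10_000_000) ≈ 16.1
    # DALYs averted: centred around ~100,000, uncertain by a factor of a few.
    dalys_averted = random.lognormvariate(11.5, 0.8)  # ln(100_000) ≈ 11.5
    return cost / dalys_averted

samples = sorted(sample_cost_per_daly() for _ in range(10_000))
median = statistics.median(samples)
low, high = samples[len(samples) // 20], samples[-(len(samples) // 20)]  # ~90% interval
print(f"Median ~${median:,.0f}/DALY, 90% interval ~${low:,.0f} to ~${high:,.0f}/DALY")
# With these invented inputs the 90% interval spans more than an order of magnitude,
# which is why a single point estimate would be hard to take seriously.
```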
There are some rankings, such as the QuODA by CGD, that can give you a sense of the relative effectiveness of multilaterals (for $/DALY purposes I would primarily look at their prioritization and evaluation criteria), but you won't be able to use QuODA to compare a multilateral with GiveWell's charities.
I'm near certain the 1€/day for 7 months claim is incorrect, or at least calculated with far fewer caveats than GiveWell's CEAs. My best guess is that UNICEF is significantly less cost-effective than GiveWell's charities. Between any mega-charity and GiveWell's Maximum Impact Fund, I would recommend GiveWell for individual donors.
As @freedomandutility points out, the question GiveWell is trying to answer is: "what is the most impact you, an individual, can have on the margin with your donations?" The answer is not necessarily the same for a government with ten billion to spend. Even a single medium-sized government could cover GiveWell's entire funding gap and have plenty left over. Finding something as cost-effective as GiveWell's recommendations that can effectively absorb $100b is not easy!
I don't say this to justify the current system; I believe governments and multilaterals alike are doing a less-than-stellar job with their development efforts. Were a government actually to fully fund GiveWell, GiveWell should just lower its bar and recommend additional charities.
Thanks George and Jason, all good points.
One other small point I will add is that this already happens a LOT, through a couple of mechanisms, including (this is just what I've seen in Uganda):
- Supplementary funding for programs - for example, AMF gives money to local government health departments to help them distribute nets
- Results-based funding for government health centers - for example, paying local government providers for every delivery they do
- Straight programme funding - organisations like World Vision and Save the Children sometimes deposit money in
...