
I'm curious whether anyone has put thought or time into improving or overhauling existing charities from an EA perspective. Has there been much discussion of making existing charities more effective?

There are lots of organizations that contract with nonprofits to make their marketing or fundraising more effective, but has anyone contemplated creating a consulting organization that would work with charities within an EA framework? This seems like not just a potentially large opportunity to effect change, but a really big empty space that no one is working in.

 

There are of course a few potential pitfalls. It is hard to instigate change anywhere, but particularly in organizations that believe they are doing good work, or organizations that have been around for a long time. This bias against change would be a hard one to overcome, but I think EAs have gotten particularly good at asking very pointed questions about doing great charity work. This insight could be a huge resource for groups that truly want to improve.

 

I've thought about this myself quite frequently and would be stoked to hear thoughts from others.

I think this is a great area to experiment with, so I'd be keen for people to just go and try it on a small scale and see what works.

One problem to bear in mind is that the best EA content is about cause selection and intervention selection, and charities are usually unwilling to change these dimensions. Whereas there's already a lot of advice for people who just want to implement an intervention more effectively.

I agree. The bulk of the variance in 'charity effectiveness' looks to be along intervention lines. If charities are fairly hard to budge on these, then it looks less likely that efforts to shift the entire distribution of charities to the right are going to work better than focusing on the extreme right tail in the first place.

I agree that trying to branch out to, or add, an EA cause at a current charity is unlikely to succeed. My experience is that you are right: there are lots of services and advice out there for charities that want to improve implementation or strategy (mainly focused on cultivating donors).

I would be interested to know whether there are many resources out there aimed at getting organizations to collect more data and assess their success rates more scientifically. It is also my understanding that the advice out there on more effective implementation is usually about getting better numbers, not whether those numbers actually create change in a given cause area.

What would you suggest is a good place to start for small scale experimentation? I think you are right, just doing some of this is the best way to gauge tractability.

I'm interested in helping organizations collect more data, using independent surveys of households to measure bed net usage, as well as surveys around deworming programs. One organization that conducts independent surveys is PMA2020. They currently have family planning and WASH surveys, but may add additional modules in the future.

In the world of animal protection, we have Faunalytics (formerly Humane Research Council). I'm the founder and executive director, full disclosure. We've been around for 15 years, since before "EA" became a common term, but that's essentially what we do. We're a nonprofit research provider and we encourage animal charities to collect and utilize data. We identify and summarize third-party research that is relevant to animal advocacy, conduct fee-for-service projects for animal groups, and carry out independent studies to further animal advocacy. We are a backbone organization that does not directly advocate for animals ourselves, but strive to make animal charities more effective. I'd be happy to talk about our experience sometime or you can learn more at https://faunalytics.org

I am very intrigued by the potential upside of this idea. As I see it, one can change charity culture by changing consumer demand (generally what GiveWell does), which will eventually lead to a change in product. Alternatively, one can change charity culture by changing the product directly, on the assumption that many consumers care more about the brand than the product.

Would the service be free to the nonprofits? Would it help nonprofits conduct studies to assess their impact?

Anecdata: I have a friend who works at a big-name nonprofit who has been trying to find exactly this service.

Ben Todd made this comment here detailing organizations he knows about (sort-of) working in that vein. Try forwarding that list to your friend!

I recently chatted with Tara of CEA about this, and my recent post on raising the effectiveness waterline goes in a similar direction. Such programs will be limited to charities that operate in areas that allow for high effectiveness, and the charity has to be willing to do it, of course.

My first charitable interpretation of the situation was this: if a charity has the potential to be highly effective given its cause area and is willing to optimize its effectiveness, but still fails to be on par with a top charity in the same area, it must be lacking something that is hard to obtain, namely specialized knowledge it could get from the top charity. Cooperation between the charities would furthermore serve to thwart wasteful competition between them.

Tara’s experience with nonprofit counseling with Toyota, however, has been that what such charities lacked was not so much this specialized knowledge but general skills in accounting, controlling, and I don’t fully remember what else she mentioned. If the most salient problems of these charities are in such general areas, then a general EA consultancy firm would make sense.

The services of such a consultancy firm may be highly subsidized from donations, but I think the charity will be more likely to implement advice that it has paid for, and that should also make it easier for the consultancy firm to pay its bills. I haven’t done any calculations, but it feels to me like it will be very hard to keep this sort of operation afloat financially.

An alternative might be to find an existing, established consultancy firm with knowledge in the area of nonprofits that is ready to advise charities as to how they can maximize impact rather than just fundraising success. An EA funder, a charity, and this company could then agree on prices and a cofunding plan. This will usually involve lots of money, though, since this sort of optimization will be most cost-effective with charities that move a lot of non-EA donations, and those charities will be large and complex.

For your information, if effective altruism were to spearhead such consulting projects, they probably wouldn't be initiated by GiveWell (see my comment here). The Centre for Effective Altruism, in particular Effective Altruism Ventures, might be the organization best poised to initiate such work.

When I met Holden Karnofsky (the executive director of GiveWell) at the 2014 Effective Altruism Summit, I asked him whether GiveWell ever intended to consult for or revamp charities to make them more effective, rather than just evaluating and recommending already effective charities. He said no. His reason is that he believes it's substantially more difficult to create an effective charity than to evaluate existing ones. I'm inclined to agree with him, as the risks and rewards of creating a charity are spread across the whole nonprofit world, without jeopardizing the potential value of GiveWell's marginal resources. To be clear, I just mean I now think it makes sense for GiveWell not to go into consulting; I still think others should try to create effective charities.

Note that Mr. Karnofsky's statement reflects the position of GiveWell's leadership, but this doesn't mean other effective altruists working for, aligned with, or near GiveWell couldn't get involved in such a project. GiveWell already values independent thinking among its employees, exemplified by the annual blog posts in which each of its research employees explains where they intend to donate and why.

I'm inclined to agree with Holden for a number of reasons, first and foremost being that this isn't really what GiveWell does. They are very good at what they do, which is evaluating existing charities; while I see the tie-in with knowing how a good charity is run, it is a far cry from making organizational changes. Which is the other reason I agree with him: doing this is hard. Like really, really, substantially hard.

However, I think "hard" and "not worth doing" are very different things. I also agree that CEA or EA Ventures would be more appropriate venues to incubate a testable idea around this. I spoke with Kerry at CEA about this, and he agrees that while this is very exciting and something that would be great, no one yet seems to have a good answer for how to go about it. I think the next step is asking lots and lots of people how they would go about doing this, and what the very first change would, should, or could be.

I am really interested to hear whether any of this has been implemented in a concrete project. We at Effective Altruism Netherlands receive an increasing number of requests from very skilled people (e.g. from finance, data science, the legal professions, and change management) who want to contribute to existing charities. We are currently talking to effective charities to see if they need skilled volunteers, but I have strong doubts they can meet all the demand from skilled volunteers.

Letting them work at existing, less effective charities, making those charities more effective, could be worthwhile for the reasons mentioned in this post. We can provide volunteers with some formal training and standardized methods to ensure high quality. I've looked into Benjamin Todd's post, and we could try to collaborate with one of the organisations mentioned there.

Does anyone here have any shareable experiences on this?

Except for the purpose of obtaining more epistemic information later on, the general agreement within the EA crowd is that one should put the vast majority of one's eggs in one basket: the best basket.

I just want to point out that the exact same is the case here: if someone wants to make a charity more effective, choosing Oxfam or the Red Cross would be a terrible idea, but trying to make AMF, FHI, SCI, etc. more effective would be a great idea.

Effective altruism is a winner-takes-all kind of thing, where the goal is to make the best better, not to make everyone else as good as the best.

This is true with respect to where a rational, EA-inclined person chooses to donate, but I think you're taking it too far here. Even in the best case scenario, there will be MANY people who donate for non-EA reasons. Many of those people will donate to existing, well-known charities such as the Red Cross. If we can make the Red Cross more effective, I can't see how that would not be a net good.

At the end of the day, the metric will always be the same. If you can make the entire Red Cross more effective, it may be that each unit of your effort was worth it. But if you anticipate more and more donations going to EA-recommended charities, then making them even more effective may be more powerful.

See also DavidNash's comment.

Of course. But as I understand it, the hypothesis here is that given (i) the amount of money that will invariably go to sub-optimal charities; and (ii) the likely room for substantial improvements in sub-optimal charities (see DavidNash's comment), that one (arguably) might get more bang for their buck trying to fix sub-optimal charities. I think it's a plausible hypothesis.

I'm doubtful that one can make GiveWell charities substantially more effective. Those charities are already using the EA lens. It's the ones that aren't using the EA lens for which big improvements might be made at low cost.

EDIT: I suppose I'm assuming that's the OP's hypothesis. I could be wrong.

Yes this is indeed my hypothesis; thank you for stating it so plainly. I think you've summed up my initial idea quite well.

My assumption is that trying to improve a very effective charity is potentially a lot of work and research, while trying to improve an ineffective but well-funded charity, even a little, could require less intense research and have a very large pay-off. Particularly given that there are very few highly effective charities but LOTS of semi-effective or ineffective ones, meaning there is a larger opportunity. Even if only 10% of non-EA charities agree to improve their programs by 1%, I believe the potential overall decrease in suffering is greater.

There is also the added benefit of signalling. Having an organization that is working to improve effectiveness (despite funding problems [see Telofy's comment]) shows organizations that donors and community members really care about measuring and improving outcomes. It plants the idea that effectiveness and an EA framework are valuable and worth considering, even if they don't use the service initially.

My thought here is this is another way (possibly a very fast one) to spread EA values through the charity world. Creating a shift in nonprofit culture to value similar things seems very beneficial.

The question I would ask then is: if you want to influence larger organizations, why not governmental organizations, which have the largest quantities of resources that can be redirected by one individual? If you get a technical position in a public-policy-related organization, you may be responsible for substantial changes in the allocation of resources.

I think that governmental orgs would be a great way to do this!

I do worry that doing this as an individual has its drawbacks. Getting to this sort of position requires embedding yourself in a dysfunctional culture, and I worry about getting sucked into the dysfunction, or succumbing to the multiple pressures and constraints within such an organization. An independent organization, by contrast, could remain more objective and focused on effectiveness.

If you can make an organisation that deals with billions of dollars 1% more effective, I think that could have a similar outcome to making an effective charity that works with millions of dollars 1% more effective.

There may be more scope for change as well if it isn't that effective to begin with.

Also, rising higher within an organisation will lead to greater opportunities to change it from within, rather than always staying outside because it isn't as efficient.
