As part of my recent application to the Charity Entrepreneurship Incubation Program[1], I was asked to spend ~1 hour putting together a one-page critique of a charitable community. I picked the EA community and wrote this.

The critique is summarised as follows:

I am reasonably confident that the Effective Altruism (EA) community is neglecting trying to influence non-EA actors (funders, NGOs and individuals). I am uncertain about the extent to which this represents a missed opportunity in the short-term. However, I believe that influencing non-EA actors will become increasingly important in the future and that the EA community should begin to explore this soon, if not now.

I'm sharing this because:

  1. others have previously suggested they would be interested in reading something like this[2]
  2. I'd like to see if anyone else is interested in collaborating in some way to build on my thinking and develop a more robust critique, and possibly to make an argument for what the EA community could or should do differently.

Let me know if you'd be interested in collaborating, or otherwise please do leave comments on this post or the Google doc - I'm open to any and all engagement!

  1. ^

    For those interested in the CE program, I wasn't selected for the upcoming cohort but did make it to the final round of interviews. I have no way of knowing how well (or not!) I scored on this task, as it was completed alongside two other written assignments.

  2. ^

    I found that this post shares, in bullet points and in the comments, some of the critiques I articulate, as well as others that I agree with but couldn't fit onto one page. I'd expect to flesh these out when I spend more time on this.

Comments

I generally agree with this critique.

A while back I wrote about an idea for an org that would focus on redirecting US private foundation grants toward more effective causes. I got a lot of feedback, and the consensus was that existing private foundations just aren't tractable. I tend to agree with that.

But I have been working on a research paper where we interview private foundation grantmakers to better understand how they operate and the information they use in their decision-making. One of the takeaways is that trust-based philanthropy has had a HUGE influence on private foundation grantmaking, despite being very new (every participant we interviewed indicated their foundation had implemented at least some trust-based philanthropy practices).

This got me thinking - has EA had any influence? Not a single participant indicated that EA had influenced their grantmaking; I would say 75% were neutral and 25% were openly hostile to the idea of EA influencing it.

I think EA would benefit from conversations about how to sell EA ideas to these other groups. It would require what some would view as a "watering down"[1] of EA principles, but could substantially increase EA's overall impact. It's definitely interesting to think about which aspects of EA could be compromised before it ceases to be EA at all.

  1. ^

    For example, most US private foundations are severely constrained by the original founder's intent, such as a requirement to spend funds in X geographic area. Could these foundations be persuaded to give more effectively through a version of EA that encourages effective giving within existing foundation constraints?

I agree. Involving other actors forces us to deeply examine EA's weirdness and unappealing behaviours, brings a ton of experience and networks, and amplifies impact.

This is something I have been seriously thinking about when organizing big projects, especially when it comes to determining the goals of a conference and the actors we choose to invite. This is particularly true in an area such as AI safety, where safety concerns should be promoted in policy circles among other non-EA actors.

What about the Effective Institutions Project (website)? While they haven't posted on the EA Forum in a while, I remember the case studies from their "which institutions? a framework" writeup and their landscape analysis of institutional improvement opportunities. (Amazon, the Chinese Communist Party's Politburo, DeepMind, Meta, OpenAI, the Executive Office of the US President, and the US National Security Council make the top 10 in both their neartermist and longtermist rankings; Google, the State Council of China, and the World Health Organization round out their neartermist list, while Alphabet, the European Union, and the US Congress round out their longtermist one.)

I left some comments on the doc. I overall agree with this critique, but would like to see a bit more on the thinking driving the research you've already done.
