Or, on the types of prioritization, their strengths, pitfalls, and how EA should balance them
The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.
Executive Summary
* Performing prioritization work has been one of the main tasks, and arguably achievements, of EA.
* We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization.
* We ask how much of EA prioritization work falls in each of these categories:
* Our estimates suggest that, for the organizations we investigated, the current split is 89% within-cause work, 2% cross-cause, and 9% cause prioritization.
* We then explore strengths and potential pitfalls of each level:
* Cause prioritization offers a big-picture view for identifying pressing problems but can fail to capture the practical nuances that often determine real-world success.
* Within-cause prioritization focuses on a narrower set of interventions with deeper, more specialised analysis, but risks missing higher-impact alternatives elsewhere.
* Cross-cause prioritization broadens the scope to find synergies and the potential for greater impact, yet demands complex assumptions and compromises on measurement.
* See the Summary Table below to view the considerations.
* We encourage reflection and future work on what the best ways of prioritizing are and how EA should allocate resources between the three types.
* With this in mind, we outline eight cruxes that sketch what factors could favor some types over others.
* We also suggest some potential next steps aimed at refining our approach to prioritization by exploring variance, value of information, tractability, and the
An obvious-in-hindsight statement I recently heard from a friend:
Don't ask what EA can do for you, ask what you can do for EA.
While this makes complete sense in theory, it is emotionally difficult to commit to it if most of your friends are in EA. This makes it hard for us to evaluate our impact on the community properly. Motivated reasoning is a thing.
So, it may be worthwhile for us to occasionally reflect on the following questions:
Yeah, I've definitely stopped doing things that I think will harm the community (I've reduced flirting a lot). That said, I think the kinds of people likely to reduce these behaviours are (unlike me) the people who least need to.
I think most people need not worry. And those who do have ways to avoid harmful patterns: avoid events where those patterns occur, take courses, talk to friends, and develop strategies to steer clear of them.
I don't think we need to be martyrs here; for 99.9% of people there is a way for their social needs to be met in the community. But maybe 1% of people will have to change a bit.
My personal gold standard of good organizing is the Advice Process. Description by Burning Nest:
One of the problems the Advice Process tackles is what anarchist visionary madman Robert Anton Wilson calls the SNAFU-principle ["Situation Normal, All Fucked Up"]:
And the Advice Process does more than just prevent SNAFU. It also prevents the eternal deadlock of consensus-based decision-making I've suffered through in nonhierarchical collectives of the political left, the eternal bad compromises of basic democracy, and incredible amounts of time wasted on having to be in the room while decisions are made that you don't actually care about all that much.
In 1947, Churchill said:
Luckily, it is not 1947 anymore. Now, we have the Advice Process. It is very good, so you might want to use it.
https://www.loomio.com/burning-nest-advice-process/
Cited after http://www.idleworm.com/ideas/snafu.shtml , because most of my books are currently buried in cardboard boxes.
https://richardlangworth.com/worst-form-of-government
Cool! I've never heard of this, and it does indeed sound like a good process.
Yep - it reflects how many things in EA already work implicitly. That's one of the things I love about EA. And, I think it would be good if we use this as an explicit model more often, too.
If you want to dive a little deeper into these kinds of management practices, you may want to have a look at the Reinventing Organizations wiki: https://reinventingorganizationswiki.com/en/theory/decision-making/
If you want to dive very, very deep, Frederic Laloux's "Reinventing Organizations" might be a worthwhile read. I'm halfway through, and it has helped me build a whole bunch of intuitions for how to do community building better.