On the types of prioritization, their strengths and pitfalls, and how EA should balance them
The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.
Executive Summary
* Performing prioritization work has been one of the main tasks, and arguably achievements, of EA.
* We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization.
* We ask how much of EA prioritization work falls in each of these categories:
* Our estimates suggest that, for the organizations we investigated, the current split is 89% within-cause work, 2% cross-cause, and 9% cause prioritization.
* We then explore strengths and potential pitfalls of each level:
* Cause prioritization offers a big-picture view for identifying pressing problems but can fail to capture the practical nuances that often determine real-world success.
* Within-cause prioritization focuses on a narrower set of interventions with deeper, more specialised analysis, but risks missing higher-impact alternatives elsewhere.
* Cross-cause prioritization broadens the scope to find synergies and the potential for greater impact, yet demands complex assumptions and compromises on measurement.
* See the Summary Table below to view the considerations.
* We encourage reflection and future work on what the best ways of prioritizing are and how EA should allocate resources between the three types.
* With this in mind, we outline eight cruxes that sketch what factors could favor some types over others.
* We also suggest some potential next steps aimed at refining our approach to prioritization by exploring variance, value of information, tractability, and the
At the start of Chapter 6 of The Precipice, Ord writes:
This made me recall hearing about Matsés, a language spoken by an indigenous tribe in the Peruvian Amazon, which has the (apparently) unusual feature of using verb conjugations to indicate the certainty of the information being provided in a sentence. From an article in Nautilus:
I doubt the Matsés spend much time talking about existential risk, but their language could provide an interesting example of how to more effectively convey aspects of certainty, probability and evidence in natural language.
According to Fleck's thesis, Matsés has nine past-tense conjugations, each of which expresses the source of information (direct experience, inference, or conjecture) as well as how far in the past it was (recent past, distant past, or remote past). Hearsay and history/mythology are also marked in a distinctive way. For expressing certainty, Matsés has a particle ada/-da and a verb suffix -chit, which mean something like "perhaps", and another particle, ba, that means something like "I doubt that...". Unfortunately for us, this doesn't seem more expressive than what English speakers typically say. I've only read a small fraction of Fleck's 1,279-page thesis, so it's possible that I missed something. I wrote a lengthier description of the evidential and epistemic modality system in Matsés at https://forum.effectivealtruism.org/posts/MYCbguxHAZkNGtG2B/matses-are-languages-providing-epistemic-certainty-of?commentId=yYtEWoHQEFuWCehWt.
Participants in the 2008 FHI Global Catastrophic Risk conference estimated the probability of extinction from nanotechnology at 5.5% (weapons + accident) and from non-nuclear wars at 3% (all wars - nuclear wars) (the values are on the GCR Wikipedia page). In The Precipice, Ord estimated the existential risk from "other anthropogenic risks" (noted in the text as including but not limited to nanotechnology, and which I interpret as also including non-nuclear wars) at 2% (1 in 50). (Note that by definition, extinction risk is a subset of existential risk.)
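Spelling out the arithmetic behind these combined figures (a reconstruction: apart from the 5% nanotech-weapons estimate mentioned in the discussion, the component medians of 0.5% for a nanotech accident, 4% for all wars, and 1% for nuclear wars are my assumption about the survey's values):

```latex
% Assumed component medians from the 2008 survey; only the 5% weapons
% figure is confirmed in the surrounding text.
\begin{align*}
P(\text{extinction via nanotech}) &= \underbrace{5\%}_{\text{weapons}} + \underbrace{0.5\%}_{\text{accident}} = 5.5\% \\
P(\text{extinction via non-nuclear wars}) &= \underbrace{4\%}_{\text{all wars}} - \underbrace{1\%}_{\text{nuclear wars}} = 3\%
\end{align*}
```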
Since starting to engage with EA in 2018, I have seen very little discussion of nanotechnology or non-nuclear warfare as existential risks, yet it seems that in 2008 these were considered risks on par with today's top longtermist cause areas (nanotechnology weapons and AGI extinction risks were both estimated at 5%). I realize that Ord's risk estimates are his own while the 2008 data is from a survey, but I assume that his views broadly represent those of his colleagues at FHI and others in the GCR community.
My open question is: what new information or discussion over the last decade led the GCR community to reduce its estimate of the risks posed by (primarily) nanotechnology and also conventional warfare?
I too find this an interesting topic. More specifically, I wonder why I've seen so little discussion of nanotech published in the last few years (as opposed to >10 years ago). I also wonder about the limited discussion of things like very long-lasting totalitarianism - though there I don't have reason to believe people recently held reasonably high x-risk estimates; I just feel like I haven't yet seen a good reason to deprioritise investigating that possible risk. (I'm not saying that there should be more discussion of these topics and that there are no good reasons for the lack of it, just that I wonder about it.)
I'm not sure that's a safe assumption. The 2008 survey you're discussing seems to have itself involved widely differing views (see the graphs on the last pages). And more generally, the existential risk and GCR research community seems to have widely differing views on risk estimates (see a collection of side-by-side estimates here).
I would also guess that each individual's estimates might themselves be relatively unstable from one time you ask them to another, or one particular phrasing of the question to another.
Relatedly, I'm not sure how decision-relevant differences of less than an order of magnitude between different estimates are. (Though such differences could sometimes be decision-relevant, and larger differences more easily could be.)
In case you hadn't seen it: 80,000 Hours recently released a post with a brief discussion of the problem area of atomically precise manufacturing. That also has links to a few relevant sources.
Thanks Michael, I had seen that but hadn't looked at the links. Some comments:
The cause report from OPP makes a distinction between molecular nanotechnology and atomically precise manufacturing. The 2008 survey seemed to be explicitly considering weaponised molecular nanotechnology as an extinction risk (I assume the "nanotechnology accident" category referred to molecular nanotechnology as well). While there seems to be agreement that molecular nanotechnology could be a direct path to GCR/extinction, OPP presents atomically precise manufacturing as more of an indirect risk, for example through facilitating weapons proliferation. The Grey goo section of the report does resolve my question about why the community isn't talking about (molecular) nanotechnology as an existential risk as much now (the footnotes are worth reading for more details):
OPP's discussion of why molecular nanotechnology (and cryonics) failed to develop as scientific fields is also interesting:
At least in the case of molecular nanotechnology, the simple failure of the field to develop may have been lucky (at least from a GCR-reduction perspective), as it seems that the research that was (at the time) most likely to lead to the risky outcomes was simply never pursued.
Update: Probably influenced a bit by this discussion, I've now made a tag for posts about Atomically Precise Manufacturing, as well as a link post (with commentary) for that Open Phil report.
I was recently reading the book Subvert! by Daniel Cleather (a colleague) and thought that this quote from Karl Popper, and the author's preceding description of Popper's position, sounded very similar to EA's method of cause prioritisation and theory of change in the world. (Although I believe Popper was writing in the context of fighting threats to democracy rather than threats to well-being, humanity, etc.) I haven't read The Open Society and Its Enemies (or any of Popper's books, for that matter), but I'm now quite interested to see whether his work offers any other parallels to EA.
I also quite enjoyed Subvert! and would recommend it as a fresh perspective on the philosophy of science. A key point from the book is: