I think this is less futile than you're suggesting. You're right that in a decentralized system without formal governance, power defaults to capital. But the argument doesn't require democratizing EA. It requires convincing some of the people with capital that broader epistemic empowerment is in their interest, measured by their own goals.
At least some of the people with money and power in EA are genuinely trying to do the most good they can. Alexander Berger is publicly writing about the streetlight problem and acknowledging costly false negatives. EA's own frameworks would instantly recognize pure exploitation with zero exploration as a failure mode in any other system. The ask isn't "share power because it's fair." It's "you're leaving impact on the table, by your own criteria, and the fix is cheap relative to the cost of what you're missing."
That's not a constitutional reform. It's one or two funders deciding that discovery infrastructure is worth building. Which is, admittedly, Path 1 again. But if the argument is strong enough, Path 1 is all it takes.
Thanks for this. I want to make sure I'm understanding you correctly, so let me try to paraphrase.
You're saying the discovery problem I describe is real but is a symptom of something deeper: EA has no formal governance, so whoever controls the money controls the priorities, and there's no institutional mechanism to resist that. The three groups you identify (people who think EA is just math, people who think decentralization is strategically good, and people who benefit from the status quo) form a coalition that blocks any structural reform, and they're concentrated in the places where power actually lives. So any solution I propose, like scouts or green-teaming or seed grants, will just get absorbed by the same dynamics that created the problem, unless there's something closer to a binding political structure that constrains how money translates into agenda-setting power.
And your most pessimistic read is that this might not be fixable from inside EA at all, and that the better move might be starting something new that builds in those structural safeguards from the beginning.
Is that roughly right? And if so, do you think there's any version of reform that works short of formal governance, or is your view that anything less is window dressing?
The funding allocation question and the structural argument aren't as separable as you're suggesting. Nobody decided 89% within-cause was optimal. It's the revealed preference of a system where within-cause work is legible, fundable, and career-safe, and discovery work is none of those things.
I'd also draw a distinction the RP piece doesn't make: even the 9% classified as "cause prioritization" is mostly ranking known cause areas against each other. That's a different task from discovering new ones. CEARCH is the closest thing to dedicated cause discovery in EA, and it's a tiny team doing top-down desktop research. There's no intake mechanism where a community member with unusual domain knowledge can surface an observation. EA Funds is organized into four pre-existing cause area buckets with no "other" category, and even within those buckets the lens is narrow. The entire system is built to optimize within the existing map. Nobody's job is to ask what's not on it.
To be clear, I don't think shutting down the Forum is the answer. But the Forum needs a real connection to the power structure. Right now, someone could discover Cause X and post it here, and in all likelihood it would get modest engagement from people without allocation power, sit for a day or two, and sink. The people who could actually act on it probably aren't reading the Forum systematically, and there's no process that routes a promising signal to them. That's what I mean by performance of openness. The door is open but it doesn't lead anywhere.
The suggestion that this work could be done essentially for free by volunteer EA groups is itself revealing. Nobody suggests within-cause prioritization should be unpaid side work. The system prices discovery at what it's willing to pay for it.
I think the key question is what portion of EA's total funding goes toward genuine discovery versus optimization within existing spotlights. Your examples may well be real successes. But if they represent a tiny fraction of total resource allocation, that's consistent with my argument rather than a counter to it.
It's interesting to consider the potential upsides of AGI from the perspective of people who struggle with suicidal thoughts. There seems to be a significant chance of an extremely long, happy future, and that chance probably isn't outweighed by s-risk (it seems more likely that misaligned AGI would annihilate us than torture us perpetually).
In the past, suicidal thoughts felt much more compelling than they have since recent developments. Thinking about losing even a 5-10% chance at an unimaginably good future forecloses further thought about acting on them.
Maybe disseminating this line of thinking could be helpful for suicide prevention?
A lot of the specific things you've mentioned make a lot of sense.
Generally, I would be cautious about asking for help in contexts where review or supervision will be necessary, or where reliance on the volunteer could be detrimental. People are often excited to help and overstate what they can actually do. Often the time spent managing volunteers is worth more than what they produce.
Teaching counterfactual reasoning in economics education
A crucial EA concept for high school economics is counterfactual reasoning – systematically asking "what would have happened if agent X had not done action Y?" This is essential for understanding the actual impact of interventions.
Why it matters:
Without a counterfactual baseline, an intervention can be credited for changes that would have happened anyway, so measured "impact" may be largely illusory.
Methods to evaluate counterfactual impact:
Randomized controlled trials (RCTs): Randomly assign some groups to receive an intervention and others not, then compare outcomes. The control group approximates what would have happened without the intervention.
Before-and-after with comparison groups: Compare changes in a treated group to changes in a similar untreated group over the same period. This helps account for broader trends that would have occurred anyway.
Trend analysis: Plot pre-intervention trends and project them forward. If post-intervention outcomes match the projected trend, the intervention may have had little counterfactual impact.
Natural experiments: Find situations where an intervention occurred in one place but not another similar place due to arbitrary reasons, allowing comparison.
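The "before-and-after with comparison groups" method above can be sketched numerically as a difference-in-differences calculation. All numbers here are hypothetical classroom data, chosen only to illustrate the arithmetic.

```python
# Difference-in-differences: a minimal sketch of the
# "before-and-after with comparison groups" method.
# All scores below are hypothetical.

# Average test scores before and after a tutoring program.
treated_before, treated_after = 60.0, 75.0   # classes that received tutoring
control_before, control_after = 61.0, 68.0   # similar classes that did not

# A naive before-and-after estimate ignores the background trend.
naive_effect = treated_after - treated_before            # 15.0 points

# The control group's change approximates what would have
# happened anyway (the counterfactual trend).
background_trend = control_after - control_before        # 7.0 points

# Difference-in-differences: treated change minus control change.
counterfactual_effect = naive_effect - background_trend  # 8.0 points

print(f"Naive estimate: {naive_effect:+.1f} points")
print(f"Counterfactual (DiD) estimate: {counterfactual_effect:+.1f} points")
```

The gap between the naive estimate (+15) and the counterfactual one (+8) is exactly the point students should learn to look for: nearly half the raw improvement would have happened without the program.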
Classroom applications:
For example, students could design a simple comparison-group study: estimate the effect of a tutoring program by comparing score changes in classes that received it against similar classes that did not.
This teaches students both to think counterfactually and to evaluate causal claims empirically.
(Comment made in collaboration with generative AI)
Thanks for sharing, this is a great reference. The 89/9/2 split is striking quantitative support for the concern I'm raising. I think my post is pointing at something their framework doesn't quite capture though: all three of their prioritization types operate on the existing map. Cause discovery, finding what's not on the map at all, is a different task, and the resources devoted to it are close to zero.