There’s no doubting the intellectual horsepower EAs bring to the table.
However, I’m confused by some of the cost-benefit decisions EA makes about which topics deserve investigation.
Specifically, it seems like being 0.1% more “weird” could make EA 100x more effective.
For example, consider the X-risk of climate change.
One would expect that, since the now-openly-admitted existence of UFOs proves the possibility of paradigm-level breakthroughs in energy production, the logical next steps would be:
- Research how to build them, which means we should…
- Investigate previous research on how to build them, which means we should…
- Examine certain public patents owned by the US Navy and the like for decades.
But I’ve never seen any of these steps mentioned by EAs, even though they’re an obvious course of action, rely exclusively on information the US government itself has disclosed, are indisputably high-impact (i.e., could revolutionize clean energy), and are near-zero-cost to investigate given the wealth of freely available material.
The possibility of developing breakthrough energy tech could also be interpreted negatively: as a new X-risk. Such technology might prove so powerful that it would be impossible to handle responsibly, producing the same quandaries as the proliferation of nuclear weapons (or worse). In that case, EA might consider petitioning governments to refuse to investigate UFO technology.
Even if there’s only a 0.1% chance that aircraft no earthly government can explain might yield some breakthrough in physics, energy, or technology, the potential benefits (or costs, if interpreted negatively) make the topic worthy of investigation by EA.
Decades of previous research by UFO scientists bring the cost of an initial investigation to near zero. It therefore seems impossible to justify neglecting the UFO topic on the grounds that UFO technology is so unlikely to matter that not even a free investigation is merited.
EA’s neglect of high-impact “weird” topics is not an isolated incident.
- Claims of suppressed cures for cancer, free energy, water-powered cars, etc. As with UFOs, the “highly unlikely” argument seems moot, given that the wealth of public information on sites like RexResearch makes these topics extremely cheap to investigate. A higher-cost version might look like Venture Science. The Institute for Venture Science “simultaneously funds multiple research groups worldwide for each selected proposal,” based on their “potential for instigating dramatic beneficial change.” Sounds like EA’s mission! The premise is the same as in venture capital: invest in many 0.1%-chance-of-paradigm-shift ideas, hoping that one or two succeed. Why shouldn’t this model work in science? Why aren’t EAs using it?
- Gandhian nonviolence. Studies have shown that America’s population has virtually zero influence over the actions of its government. Gandhi demonstrated that “it is possible for a single individual to defy the whole might of an unjust empire.” Teaching his tactics could empower populations around the world to persuade their governments to reduce X-risks like climate change and World War III. No “0.1% chance” argument is needed here; we already know it works.
- Spiritual reality. EA seems to assume spiritual reality is nonexistent or irrelevant. But consider the Manifesto for Post-Materialist Science, signed by over 400 scientists and professors, which states, “We believe that the sciences are being constricted by dogmatism, and in particular by a subservience to the philosophy of materialism, the doctrine that matter is the only reality and that the mind is nothing but the physical activity of the brain.” Even if there’s only a 0.1% chance these scientists are correct, the possibility would change the entire question of what constitutes “effectiveness,” and likely “altruism” as well. For example, it seems self-evident even now that in a world of growing technological power, evil (however defined) creates a growing X-risk. If “goodness” or “holiness” also meaningfully exist, effectiveness could be redefined as a battle for holiness and against evil.
Even one breakthrough in any of these areas could be epochal. At the same time, so much research has already been done on them that sifting through it for overlooked gems would be an extremely low-cost endeavor. (And of course, the Gandhi breakthrough has already been made.)
Therefore, I’m confused by EA’s cost-benefit analysis on topics like these.
However, these topics do share a common thread: divergence from the prevailing worldview of dominant scientific and media institutions. None of those voices are making the obvious call for UFO-based breakthrough energy and propulsion research, Gandhian nonviolence, or spiritual science, either. But if EA worldviews are so unanimously aligned with dominant and pervasive institutions like these, then what are EAs for?
It seems logical that only breakthrough divergences from establishment thinking could hope to yield breakthrough effectiveness by comparison.
Thank you for reading!
(Special thanks to Sam H. Barton for help with revisions, proofreading, steelmanning, etc.)