[This is an essay I wrote based on lessons from an EA psych class I taught last year. I pitched it to multiple science outlets without success, so I finally decided to just post it here. I'm posting it during draft amnesty week because EAs aren't really the intended audience; it's written for a general audience that's less familiar with these issues. Hopefully some of you can learn something anyway.]

Next week, an earthquake could strike the Pacific Northwest that authorities estimate could kill up to 14,000 people and injure at least 100,000 more.

Or maybe it won’t happen for 100 years.

What’s certain is that the states of Oregon and Washington, along with northern California and southern British Columbia, lie atop the Cascadia Subduction Zone, a massive fault line that last triggered a major earthquake over 300 years ago. According to geologists, pressure has been building ever since. In 2010, they estimated a 37% chance of a 7.1 magnitude or higher earthquake in the region in the next 50 years.

What should the millions of Washington and Oregon residents do with that information? How should governments respond? (The Oregon Department of Emergency Management states plainly that residents should “anticipate being without services and assistance for at least two weeks”.)

This future earthquake – some have dubbed it “the big one” – offers a frustrating combination of dizzying uncertainty and sobering certainty. The big one will be costly and tragic and will happen eventually, but it may be so far in the future that no one alive today will be around when it does. Ignoring it seems wrong, but it’s hard to justify preparing for an uncertain future disaster when so many pressing issues demand our attention and resources today.

If you’re like me and you live thousands of miles from the Pacific Northwest, all this might seem like an idle curiosity, albeit a morbid one.

We're all in danger

Unfortunately, we all live in a metaphorical Cascadia Subduction Zone. And we’re all facing multiple big ones. Some of humanity’s greatest threats – from climate change to nuclear weapons to pandemics – are just like the Pacific Northwest earthquake: low-probability events with catastrophic effects that might not be fully (or ever) realized for decades.

And, lucky us, we are notoriously bad at thinking about probabilities and the distant future. It’s as if an evil genius engineered a spate of existential threats that are diabolically hard for people to care about.

The most salient example is climate change. In 2016, most of the world’s governments agreed to a goal of capping global temperature increases at 1.5 degrees Celsius, but our current emissions and policies already have us on a trajectory to exceed that target by at least a degree. By the end of the century, these hotter temperatures are expected to make droughts much more common, causing up to 3 billion people to suffer water shortages, and to dramatically increase the number of people exposed to deadly heat levels to at least half the global population.

We’re not addressing climate change as aggressively as we should. But we also didn’t start as early as we could have.

Scientists have understood the causes of climate change and its predicted effects since the 1960s. A serious effort to explore government policy solutions in the US was briefly sparked by an influential 1979 report, commissioned by the National Academy of Sciences, which concluded that global temperatures would rise by about three degrees by 2035. Following the report’s publication, two dozen experts met in 1980 to generate climate policy proposals for the National Commission on Air Quality.

The committee failed to reach a consensus. The problem’s lack of immediacy didn’t help. Reportedly, David Slade, the director of the Energy Department’s Office of Carbon Dioxide Effects, “considered the lag a saving grace. If changes did not occur for a decade or more, he said, those in the room couldn’t be blamed for failing to prevent them. So what was the problem?”

We're bad at imagining the future

There are many reasons why governments have delayed major action on climate change for so long. But I believe that Slade’s comments highlight one important factor. A fatal flaw of climate change as an issue is that it moves too slowly for people to grapple with: politically, intellectually, and emotionally. Imagining a world decades into the future isn’t easy and doesn’t come naturally to us.

For example, in a 2006 survey, 600 people from around the world, ranging in age from 13 to 83, were asked a series of questions about the future. One question asked what the future meant to them; the median response was only 10 years. Another question asked how clearly they could imagine futures of different time frames. Beyond 10 to 20 years, most people reported very poor clarity.

In 1980, climate change felt like a problem for the next generation. Today, effects of climate change are already being felt around the world. But the worst effects of climate change, like widespread droughts and mass climate-related migration, are still more than 20 years away. For most of us, that’s a future that barely seems real. And it’s hard to prioritize a problem you can scarcely imagine, no matter how costly delay will prove – just as it’s hard to prepare for a massive earthquake that might not strike for 50 years.

We're bad at reasoning about probabilities

Climate change is happening, just too slowly for our minds to process as an emergency. But many of the greatest threats to humanity may not happen at all in our lifetimes. They’re probabilistic threats.

For example, in 2022, a group of 169 domain experts and skilled forecasters estimated that the probability of a pathogen killing at least 1% of the world population (Covid is estimated to have killed around 0.3%) by 2050 was between 1 and 8%; they estimated the probability of a nuclear weapon incident killing at least 10% of the world population by 2050 to be between 1 and 4%.

Unfortunately, plenty of research by cognitive psychologists shows that we’re pretty poorly calibrated when it comes to probabilities. It’s one thing to hear there’s a 37% chance of a devastating earthquake near Seattle within the next few decades. It’s another thing to comprehend what that means.

Here’s a test you can do on yourself. Make a list of ten things that you think each have a 50% chance of happening to you over the next week. At the end of the week, count up how many of them actually happened. If you estimated correctly, it should be about half. But research suggests it’ll be closer to one or two.

This sort of overconfidence when dealing with probabilities was demonstrated in a 1990 study by a group of researchers led by Robert Vallone. They asked 98 Stanford undergraduates to make predictions about the likelihood of 41 events happening to them over the ensuing academic year. The students provided confidence judgments with their predictions on a scale from 50% (total uncertainty) to 100% (total certainty). At the end of the year, the researchers surveyed the students again to find out which of the events happened.

The students were consistently overconfident, including about things that they thought would not happen. Their average confidence rating was 84% but only 70% of their predictions turned out accurate. And the more confident they were, the more overconfident they were. For events that they were 100% confident about, only 77% of their predictions were accurate, meaning they were overconfident by 23 percentage points.

We're bad at making decisions under uncertainty

With big ones like pandemics and nuclear weapon incidents, the problem isn’t just that we have difficulty comprehending the chances of these events happening; it’s that their potential costs are devastating. How do we weigh a 1% chance of a deadly pandemic against a 37% chance of a massive earthquake? And decisions about preventing future catastrophes are rarely made in isolation. The answer isn’t simple, and research suggests most people aren’t well equipped to find it.

In a recent study, Adam Elga, Jian-Qiao Zhu, and Thomas Griffiths devised many different variations of decisions like the one above: how to allocate a fixed amount of funding toward the prevention of three “existential” disasters—disasters that could wipe out humanity. The disasters differed in risk and the interventions presented to prevent them differed in effectiveness. For example, participants might be told that Disaster A has a probability of 60% and if all the money were allocated to prevent it, that probability would be reduced to 25%; Disaster B has a probability of 95% and if all the money were allocated to prevent it, that probability would be reduced to 90%.

Between these two disasters, B clearly presents the greater risk, but we can make a bigger dent in preventing A, reducing its risk from 60% to 25%. The researchers show mathematically that both factors matter for the optimal allocation strategy. That is, we should invest in cost-effective prevention plans, like the one for Disaster A. But reducing the probability of high-risk threats even a small amount can unexpectedly be worth it when our goal is reducing overall risk. Using the numbers in this example, if we reduced Disaster A’s probability to 25%, then the total probability of surviving both disasters is the product of the probabilities of each disaster not happening: 75% x 5% = 3.75%. But if we reduce Disaster B’s probability to 90%, we get: 40% x 10% = 4%. In other words, we have a better chance of survival in this situation by devoting resources to slightly reducing the high risk of Disaster B.
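The arithmetic above is easy to verify yourself. Here’s a minimal sketch (my own illustration, not code from the study) that multiplies the “not happening” probabilities under each all-in allocation:

```python
# Sketch of the example above: survival requires that NEITHER disaster
# occurs, so we multiply the probabilities of each one not happening.

def survival_prob(p_a, p_b):
    """Probability that neither Disaster A nor Disaster B occurs."""
    return (1 - p_a) * (1 - p_b)

# Baseline risks: A = 60%, B = 95%.
all_in_a = survival_prob(0.25, 0.95)  # fund A only: 60% -> 25%, B stays 95%
all_in_b = survival_prob(0.60, 0.90)  # fund B only: 95% -> 90%, A stays 60%

print(f"All funds to A: {all_in_a:.2%}")  # 0.75 * 0.05 = 3.75%
print(f"All funds to B: {all_in_b:.2%}")  # 0.40 * 0.10 = 4.00%
```

The comparison confirms the counterintuitive result: shaving just five points off the near-certain disaster beats a 35-point reduction in the likelier-to-be-survivable one.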

In their study, people were not optimal. The researchers asked about 780 US participants to make decisions about how to allocate resources among prevention plans for disasters with different risks. While participants did tend to favor more cost-effective plans, they didn’t invest any more in high-risk threats like Disaster B than in relatively low-risk threats like Disaster A. Overall, they tended to allocate funds fairly evenly across all disasters, regardless of the risks those disasters posed.

Our poor ability to effectively allocate resources among disasters and catastrophic risks has real consequences. Philosopher William MacAskill recounts in his book “What We Owe the Future” how, in 2017, he had the opportunity to pitch one policy proposal to Nicola Sturgeon, the first minister of Scotland: “I choose pandemic preparedness, focusing on worst-case pandemics. Everyone laughed.”

Three years later, the world was hit with the worst pandemic in 100 years. And yet, in 2023, about $8 billion was spent globally on research and treatment for infectious diseases, less than what Americans spend on grooming and boarding their pets, and several orders of magnitude less than the costs of the Covid pandemic.

We punish whistleblowers

As discouraging as this may sound, you might think that these results don’t matter much. So what if most people don’t understand probability or can’t imagine beyond next year or are bad at weighing disaster risks? Most people aren’t in charge. For the sake of argument, let’s set aside the fact that ordinary people elect who is in charge and indirectly decide which concerns get political priority. Suppose our leaders and the relevant experts manage to overcome these psychological limitations and biases and become convinced about an impending catastrophe. Would they bother trying to do anything about it? History and research aren’t encouraging.

Consider computer engineer Peter de Jager. In the 1990s, he insistently warned the public and policymakers about the urgency and seriousness of the impending Y2K computer bug, a widespread computer formatting error predicted to cause major disruptions in computer-dependent industries worldwide. Due in part to his warnings, a massive cooperative effort averted technological chaos, and we passed into the new millennium uneventfully.

How did we thank de Jager for his tireless advocacy? “I was crucified in the media: I'm the shyster, the doomsdayer, the fear-mongerer, the con artist. All we did was to beat the drum about a problem we needed to fix. Not much happened. Not because it didn't exist, but because we spent $300 billion fixing it.”

Peter de Jager learned the hard way how thankless a job being a whistleblower can be. But most people seem to feel this instinctively.

In a 2024 study by a group of researchers led by Lucius Caviola and Matthew Coleman, about 400 US adults were asked to imagine they were researchers who had discovered a deadly new virus with a 5% chance of killing millions of people if a vaccine were not developed – at an estimated cost of $10 billion. The participants were asked if they would recommend developing the vaccine; half were told their identities would be public and half were told they would remain anonymous if they made the recommendation.

People were more willing to recommend developing the vaccine if they would remain anonymous (on average, responding about 0.75 points higher on a 7-point scale). In a follow-up study with a different scenario, US adults were more willing to pay a quarter of their monthly salary to remain anonymous when warning about an unlikely risk than a likely one. (This result didn’t hold for Chinese adults, but both groups expected more blame for being wrong when warning about the unlikely risk.)

Were they right to fear judgment? In another follow-up study, the researchers found that an expert who had recommended an expensive defense against a costly but unlikely cyberattack that didn’t happen was more likely to be judged to have made the wrong decision, even by judges and lawyers, than one who advised the same defense against a less costly but more likely attack.


It’s human nature to focus on the stresses, fears, and threats facing us here and now. Unfortunately, that nature may cost us greatly if we do so to the exclusion of looming threats on the horizon.

Are we doomed?

I’ve focused primarily on the psychological factors related to preventing future big ones. Without even accounting for all the political and economic factors, our prospects look bleak. But there’s room for hope: none of these factors is insurmountable.

Consider the difficulty of imagining the future. In 1980, when the US government first failed to act on climate change, it might have been difficult to imagine the consequences of a warmer world decades in the future. But it’s not anymore. A 2025 Gallup poll found that 63% of Americans think that global warming is already affecting us. Whatever challenges we face in tackling climate change, imagination is no longer one of them.

People can also improve their understanding of probabilities. For example, research has shown that people are much better at thinking in terms of frequencies than percentages. So, whereas 1% and 0.1% may both seem negligibly small, presenting the same information as 1 in 100 and 1 in 1,000 makes it clearer that the difference between the two is like the difference between winning a raffle among all the attendees at a wedding reception and winning one among all the attendees at a high school graduation ceremony.
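That reframing is mechanical enough to automate. Here’s a toy sketch (the helper name is my own invention, not from any cited study) that converts a small probability into the friendlier “1 in N” form:

```python
# Illustrative only: re-express small probabilities as "1 in N"
# frequencies, a framing research suggests people grasp more intuitively.

def as_frequency(probability):
    """Turn a probability like 0.01 into a '1 in N' string."""
    return f"1 in {round(1 / probability):,}"

print(as_frequency(0.01))   # 1 in 100
print(as_frequency(0.001))  # 1 in 1,000
```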

Putting probabilities and risks into context can also be illuminating.

Compare the low but potentially devastating risk posed by nuclear weapons to the present and rising risk posed by climate change. One measure of public concern for these two risks is philanthropic investment. In 2020, about $48 million in private funding went to nonprofits working on nuclear issues. By contrast, it’s estimated that $8-9 billion is spent by nonprofits annually on combating climate change. Treating these figures as a proxy for public concern, we are nearly 200 times more concerned about climate change than the threat of nuclear weapons.

It's hard to think about a 1% chance of a nuclear weapon killing 10% of the population by 2050. But however you think about that possibility, you might ask yourself whether you are 200 times more worried about climate change than a minute chance of being annihilated by a nuclear weapon.

Finally, even the set of studies led by Lucius Caviola and Matthew Coleman, which found that people were reluctant to put their reputations on the line to warn others about impending disasters, and that people tended to be judged harshly when those disasters never materialized, provides some cause for optimism.

In one more follow-up, the researchers found that when someone warned about an unlikely natural disaster that didn’t end up happening, participants rated the warner as more trustworthy and less blameworthy, and were more confident that the warner had made the right decision, when the warning included the actual chances of the disaster (10%) than when it didn’t. A result like this suggests that the public may be less dismissive of catastrophic predictions about future big ones when they understand what the predictions are based on.

These examples show that overcoming these cognitive biases and limitations is possible. It may also be necessary to improve and save future lives.
