
As I take man’s last step from the surface, back home for some time to come — but we believe not too long into the future — I’d like to just say what I believe history will record: that America’s challenge of today has forged man’s destiny of tomorrow. And, as we leave the Moon at Taurus–Littrow, we leave as we came and, God willing, as we shall return, with peace and hope for all mankind. - Eugene Cernan, 14 December, 1972

The past 50 years of the great stagnation have been characterized by a marked decline in humanity’s ambitions. I am referring to, among other things, humanity’s unfortunate retreat from space exploration. Today marks 50 years since a human being left Low Earth Orbit. Of course, there has long been the argument that we need to focus on making things right here on earth, and keep our heads, and astronauts, firmly planted on the ground.

And here on earth, there has been progress. Most notably, we have seen amazing advances in more broadly shared prosperity. At an interpersonal level, violence and discrimination against minorities and women are ongoing problems, but ones that are thankfully (if too slowly) being addressed. Child abuse is now rare and widely condemned, instead of a fact of life for most children. And obviously, life expectancies have greatly increased. Humanity has eliminated smallpox, and is poised to do the same for polio. Global poverty has declined precipitously, and while poverty is far from eliminated, the worst-off fraction of the population in most of the world today has access to food, entertainment, and material comforts undreamt of by kings centuries ago.

Of course, there is the concern that with prosperity and newer technology comes capacity for violence, and through World War Two it seemed humanity was on a trajectory to destroy itself. But instead of destruction, we have seen a continuation and expansion of the post-WWII long peace. While this is at present threatened, the Western world has taken steps to curtail future territorial incentives to violence, reaffirming post-WWII norms against territorial conquest. Our international structures have been wildly successful.

Even newer threats like climate change and engineered pandemics are being addressed - slowly, but with every expectation of success. These new and more global problems could not have been managed by a world at war with itself, but by-and-large, we have found ways to cooperate and coordinate globally. We should be aware of the growing threat of retrenchment or reversal of the trends and expected continued successes, but we should also celebrate progress.

At the same time, there is a sharp limit in how much progress can be achieved by seeking only to stop bad things, whether violence and war, or climate change. Ambition and continued progress require more than just avoiding unacceptable outcomes. The progress in material comforts is primarily the product of innovation, trade, and policy, not redistribution of existing goods. The progress against war is primarily the product of global cooperation, economic statecraft, and robust global institutions, not an imposed peace by the victors of the last war. And the progress against diseases is primarily the product of scientific understanding, medical research, and ambitious global programs, not closing borders or isolating patients.

Unfortunately, ambition has recently been placed in contrast with continuing progress towards equality. This is disappointing. Humanity has been successful so far when it both pushes for ambitious goals and continues to pursue widespread prosperity and safety. Either on its own seems much less viable. Lives that are nasty, brutish, and short are the default, and much of history’s inequity and violence stemmed from humanity remaining in that state, or emerging from it unevenly. At the same time, progress imposes new harms, and active government intervention is needed to redistribute the gains to the otherwise-losers. But that possibility is a feature of modern life - governments are stable enough to have persistent and well-run economic policy.

The great stagnation's seemingly widely-shared pessimism undermines progress in every sense. I certainly can’t claim causation, but there is a notable confluence of dystopian sci-fi and escapist fantasy replacing futurist visions, a decline in innovation, and decreasing optimism among the public. People are despairing not only about the long term future and ignoring progress on things like climate, but even about things that have already improved, and seem likely to continue to do so, like air pollution, poverty, or health. That’s not to say there are no threats, but the pessimism, such as not having children because of misplaced concerns about climate, goes far beyond rational concern about future prospects, well into the realm of depression and anxiety disorder.

It took incredible progress to bring humanity to our current far-from-perfect but incredible position, and continued striving for ambitious goals doesn’t undermine that. More poetically, space travel does not require abandoning earth. In fact, quite the opposite; ambition is critical for allowing flourishing. The vast majority of human suffering has been the result of a lack of plentiful resources, either directly, or from humans fighting over those resources. We are winning that fight. So to me, the most worrying thing about the future is not retrenchment and a loss of progress, but a lack of ambition to do more.

We have a promising future. Without being particularly optimistic, it seems likely humanity will eliminate more diseases, build and provide clean and effectively unlimited energy, enhance agricultural productivity and reduce impacts on humans and animals, explore and protect the oceans and other natural habitats, all over the coming century. And these are all worthwhile opportunities - but we can do far more.

It seems that the United States has decided to return to deep space, including missions to send humans back to the moon - redoing a feat accomplished half a century ago. Two years ago, China launched the third space station, following the precedent of the USSR’s Mir and the International Space Station. But if we want to be ambitious, we need to do more than what’s already been done. Much more daring plans for the coming decades, and centuries, seem critical. We can and should work on widely shared prosperity, basic income, and continued planning to explore the universe. We should begin by dreaming bigger for ourselves and our children, and continue launching ambitious projects on earth, and beyond.

Comments



Beautiful writing (which I really appreciate, and think we should be more explicit about promoting). I see that AI risk isn't mentioned here and am curious how that factors into your general sense of the promising future. 

Dear David,

Your post has inspired this one on my side:

https://forum.effectivealtruism.org/posts/QianitTHjKBSH2sXC/space-colonization-and-the-closed-material-economy

If innovation really has stalled (which I’m skeptical of in the first place) it’s not because the space race is (mostly) over. There are deeply important issues on Earth for us to solve, and millions of people are innovating towards solutions to them every day. Sure, designing a tele-health or mobile banking system for people living in extreme poverty isn’t as sexy as landing on the moon, but it’s surely innovation. These types of projects may not dominate the news cycle but they represent the beginning of an alignment of research and development with the flourishing of all humans (and animals). Space exploration does not.

You say that we should aim higher than our current massive endeavors (eliminating diseases, expanding clean energy, protecting animal rights and natural habitats). But decades of work have proven that these endeavors are extremely difficult. Every marginal dollar and hour spent on these projects counts. And space exploration distracts from the urgent need for innovation in these areas.

The claim wasn't that the space race caused a stall in innovation - it was that humanity stopped pursuing ambitious new goals. And that doesn't mean there isn't any innovation, but surely you see a difference between implementing telehealth in a new region and going to the moon? And ambitious projects don't seem to get started nearly as often anymore. Smallpox eradication was mostly done by the time we retreated from space, and polio elimination was started soon after.

The only more recent example I can think of is the human genome project, and while impressive, it was much smaller - it cost only a couple of percent as much as the Apollo program.

But your last comment completely misses the point I made. Humanity has trillions of dollars to spend, and it goes big on video games, consumer electronics, and fast food. You're claiming that humanity isn't capable of doing more than a couple things at once, but the world around us seems to make it clear you're wrong. I'm not saying to spend less on any of the things you're pointing to as priorities - and I said as much in the post. 

Hi David, thanks for the reply. I think I just totally disagree that humanity stopped pursuing ambitious goals. Just yesterday, we generated energy with nuclear fusion. We've reduced the price of solar cells by over 100x in a few decades. Hundreds of millions of people in China/India/Africa, etc. have been lifted out of extreme poverty. There are thousands of scientists pursuing cures for cancer and dementia. I could go on...

Humanity has trillions of dollars to spend, and it goes big on video games, consumer electronics, and fast food.

But our government doesn't have trillions of dollars and we have a ton of really important stuff to spend it on. I just think that improving education, closing the racial wealth gap, offering food stamps - heck, even building infrastructure here on earth are far more important. We can do multiple things at once, but we can't do everything. Every additional spend means something else has to be cut. Space exploration is near the bottom of my list of things I think our govt should spend on.

Just yesterday, we generated energy with nuclear fusion.

We've invested under $10b in fusion funding to date. We spent closer to $300b in adjusted dollars to land on the moon.

We've reduced the price of solar cells by over 100x in a few decades.

Despite sparse funding, markets work - I definitely agree. But again, this isn't an ambitious vision; it's very late incrementalism on something we should have pushed hard to pursue when Jimmy Carter put solar panels on the White House roof.

There are thousands of scientists pursuing cures for cancer and dementia. 

NIH funding as a share of GDP was still 12 percent below 2003 levels in 2019. That's not ambition; it's plodding along slowly while we have tons of scientists and researchers being turned away from academia for lack of funding.

But our government doesn't have trillions of dollars and we have a ton of really important stuff to spend it on.

The US government spends multiple trillions of dollars each year. Much of that is on nondiscretionary spending, but we could afford to spend more on ambitious projects. We did in the past. 

Every additional spend means something else has to be cut.

That's not how government spending works - it does not need to be zero-sum, as our actual spending shows. But even if it were, we have cut taxes repeatedly, instead of doing more.

And if you're hoping those tax cuts spurred more growth, private industry is instead pushing short-term revenue increases, in most domains lowering investment in R&D.

We simply aren't as ambitious as we were in the past. But we should be.
