Meta:
- I'm re-posting this from my Shortform (with minor edits) because someone indicated it might be useful to apply tags to this post.
- This was originally written as a quick summary of my current (potentially flawed) understanding in an email conversation.
- I'm not that familiar with the human progress/progress studies communities and would be grateful if people pointed out where my impression of them seems off, as well as for takes on whether I've correctly identified the key points of agreement and disagreement.
- I think some important omissions from my summary might include:
- Potential differences in underlying ethical views
- More detail on why at least some 'progress studies' proponents have significantly lower estimates for existential risk this century, and potential empirical differences regarding how to best mitigate existential risk.
- Another caveat is that both the progress studies and the longtermist EA communities are sufficiently large that there will be significant diversity of views within these communities - which my summary sweeps under the rug.
[See also this reply from Jason from the 'progress studies' community.]
Here's a quick summary of my understanding of the 'longtermist EA' and 'progress studies' perspectives, in a somewhat cartoonish way to gesture at points of agreement and disagreement.
EA and progress studies mostly agree about the past. In particular, they agree that the Industrial Revolution was a really big deal for human well-being, and that this is often overlooked/undervalued. E.g., here's a blog post by someone somewhat influential in EA:
https://lukemuehlhauser.com/industrial-revolution/
Looking to the future, the progress studies community is most worried about the Great Stagnation. They are nervous that science seems to be slowing down, that ideas are getting harder to find, and that economic growth may soon be over. Industrial-Revolution-level progress was by far the best thing that ever happened to humanity, but we're at risk of losing it. That seems really bad. We need a new science of progress to understand how to keep it going. This will probably require a number of technological and institutional innovations, since our current academic and economic systems are what led us into the current slowdown.
If we were making a list of the most globally consequential developments of the past, EAs would, in addition to the Industrial Revolution, point to the Manhattan Project and the hydrogen bomb: the point in time when humanity first developed the means to destroy itself. (They might also think of factory farming as an example of how progress can be great for some but horrible for others, at least on some moral views.) So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks.

Regarding the future, they're most worried about existential risk - the prospect of permanently forfeiting our potential for a future that's much better than the status quo. Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change. So in a sense they aren't as worried about progress stopping as about progress being mismanaged and having catastrophic unintended consequences. They therefore aim for 'differential progress' - accelerating those kinds of technological or societal change that would safeguard us against these catastrophic risks, and slowing down whatever would expose us to greater risk. Concretely, they are into things like "AI safety" or "biosecurity" - e.g., making machine learning systems more transparent so we could tell if they were trying to deceive their users, or implementing better norms around the publication of dual-use bio research.
The single best book on this EA perspective is probably The Precipice by my FHI colleague Toby Ord.
Overall, EA and the progress studies perspective agree on a lot - they're probably closer to each other than either would be to any other popular 'worldview'. But EAs probably tend to think that human progress proponents are too indiscriminately optimistic about further progress, and too generically focused on keeping progress going. (Both because doing so might be risky and because EAs probably tend to be more "optimistic" that progress will accelerate anyway, most notably due to advances in AI.) Conversely, human progress proponents tend to think that EA is insufficiently focused on ensuring a future of significant economic growth, and that the risks EAs imagine either aren't real or can't be done much about except by encouraging innovation in general.
Hi Jason, thank you for sharing your thoughts! I also much appreciated your saying that the OP sounds accurate to you, since I hadn't been sure how good a job I did of describing the Progress Studies perspective.
I hope to engage more with your other post when I find the time - for now just one point:
'The growth rate' is a key parameter when assuming unbounded exponential growth, but due to physical limits, exponential growth (at familiar rates) must end within thousands, if not hundreds, of years.
This also means that the significance of increasing the growth rate depends dramatically on whether we assume civilization will last for hundreds or billions of years.
In the first case, annual growth at 3% rather than 2% could go on until we perish - and could make the difference between, e.g., 21 and 14 doublings over the next 500 years. Those seven extra doublings are a factor of 2^7 ≈ 128, i.e. roughly 100 - the same factor that turned the world of 1900 into what we have today, so a really big deal! (Imagine the ancient Greeks making a choice that determines whether civilization is going to end at year-1900 or year-2020 levels of global well-being.)
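(If you want to check these figures, here's a minimal sketch in Python - nothing beyond compound interest:)

```python
# Checking the doubling arithmetic: plain compound growth over 500 years.
import math

YEARS = 500
for rate in (0.02, 0.03):
    doublings = YEARS * math.log2(1 + rate)
    print(f"{rate:.0%} annual growth: {doublings:.1f} doublings")
# 2% -> ~14.3 doublings; 3% -> ~21.3 doublings.
# The gap of ~7 doublings is a factor of 2**7 = 128, i.e. roughly 100.
```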
But in the latter case, almost all of the future - millions, billions, trillions of years, or aeons orders of magnitude longer still - will be characterized by subexponential growth. Compared to this, the current 'exponential era' will be extremely brief and transient - and differences in its growth rate at best determine whether it lasts for another few dozen, hundred, thousand, or perhaps tens of thousands of years. These differences are a rounding error on cosmic timescales, and their importance is swamped by even tiny differences in the probability of reaching that long, cosmic future (as observed, e.g., by Bostrom in Astronomical Waste).
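To make the 'swamping' claim concrete, here is a toy expected-value comparison in the spirit of Astronomical Waste. Every number in it - the value of the cosmic future, the size of the probability shift, the length of the exponential era - is an arbitrary assumption chosen purely for illustration:

```python
# Toy expected-value comparison in the spirit of Bostrom's "Astronomical
# Waste". Every number here is an illustrative assumption, not an estimate.

V_COSMIC = 1e30    # assumed value of the long cosmic future (arbitrary units)
DELTA_P = 0.001    # assumed small increase in the probability of reaching it
ERA_YEARS = 500    # assumed length of the remaining exponential era

# Gain from raising growth from 2% to 3% for the whole exponential era,
# measured as extra value accumulated by its end (today's level = 1):
gain_growth = 1.03**ERA_YEARS - 1.02**ERA_YEARS   # ~2.6e6

# Gain from shifting the probability of reaching the cosmic future:
gain_safety = DELTA_P * V_COSMIC                  # 1e27

print(f"growth intervention: {gain_growth:.1e}")
print(f"safety intervention: {gain_safety:.1e}")
```

On these made-up numbers the safety intervention wins by about 20 orders of magnitude, and the qualitative conclusion survives wide variation in the assumptions, as long as the cosmic future is vastly larger than anything achievable during the exponential era.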
Why? Simply because (i) there are limits to how much value (whether in an economic or moral sense) we can produce per unit of available energy, and (ii) we will eventually only be able to expand the total amount of available energy subexponentially: there can only be so much stuff in a given volume of space, and the volume of space we can have reached by time t grows at most like (ct)^3, where c is the speed of light - polynomial rather than exponential growth.
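Spelled out slightly more formally (a sketch, where $v_{\max}$ is an assumed upper bound on value per unit of energy and $k$ absorbs the relevant physical constants):

$$V(t) \;\le\; v_{\max}\, E(t) \;\le\; v_{\max}\, k\, (ct)^3, \qquad \text{whereas sustained exponential growth would require } V(t) = V_0\, e^{gt}.$$

Since $e^{gt}$ eventually outgrows any polynomial in $t$, the two are inconsistent beyond some finite time, however the constants are chosen.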
And once we plug in the relevant numbers from physics and do the maths, we find, e.g., that energy use growing at familiar rates would run into hard physical limits within a few thousand years:
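A hedged back-of-the-envelope version of one such calculation (the inputs - global energy use of roughly 6×10^20 J per year and a solar luminosity of about 3.8×10^26 W - are round figures I'm assuming, not careful estimates):

```python
# How long could energy use grow at 2%/year before exceeding the Sun's
# entire output? All inputs are rough, assumed figures.
import math

WORLD_ENERGY_J_PER_YEAR = 6e20   # assumed current global energy use
SUN_OUTPUT_WATTS = 3.8e26        # approximate solar luminosity
SECONDS_PER_YEAR = 3.15e7

sun_j_per_year = SUN_OUTPUT_WATTS * SECONDS_PER_YEAR   # ~1.2e34 J/year
years = math.log(sun_j_per_year / WORLD_ENERGY_J_PER_YEAR) / math.log(1.02)
print(f"~{years:.0f} years")   # on the order of 1,500 years
```

Even capturing the output of the entire galaxy (very roughly 10^11 stars) would only buy another millennium or so at that growth rate.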