I'm trying to get to the crux of the differences between the progress studies (PS) and the EA / existential risk (XR) communities. I'd love input from you all on my questions below.
The road trip metaphor
Let me set up a metaphor to frame the issue:
Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/XR agree that the trip is good, and that as long as we don't crash, faster would be better. But:
- XR thinks that the car is out of control and that we need a better grip on the steering wheel. We should not accelerate until we can steer better, and maybe we should even slow down in order to avoid crashing.
- PS thinks we're already slowing down, and so wants to put significant attention into re-accelerating. Sure, we probably need better steering too, but that's secondary.
(See also @Max_Daniel's recent post)
My questions
Here are some things I don't really understand about the XR position (granted that I haven't read the literature on it extensively yet, but I have read a number of the foundational papers).
(Edit for clarity: these questions are not proposed as cruxes. They are just questions I am unclear on, related to my attempt to find the crux)
How does XR weigh costs and benefits?
Is there any cost that is too high to pay, for any level of XR reduction? Are EA/XR folks willing to significantly increase global catastrophic risk—one notch down from XR in Bostrom's hierarchy—in order to decrease XR? I do get that impression. They seem to talk about any catastrophe short of full human extinction as, well, not that big a deal.
For instance, suppose that if we accelerate progress, we can end poverty (by whatever standard) one century earlier than otherwise. In that case, failing to do so, in itself, should be considered a global catastrophic risk, or close to it. If you're willing to accept GCR in order to slightly reduce XR, then OK—but it feels to me that you've fallen for a Pascal's Mugging.
Eliezer has specifically said that he doesn't accept Pascal's Mugging arguments in the x-risk context, and Holden Karnofsky has indicated the same. The only counterarguments to the mugging charge I've seen conclude “so AI safety (or work on some other specific x-risk) is still a worthy cause”—which I'm fine with. I don't see how you get from there to “so we shouldn't try to speed up technological progress.”
Does XR consider tech progress default-good or default-bad?
My take is that tech progress is default good, but we should be watchful for bad consequences and address specific risks. I think it makes sense to pursue specific projects that might increase AI safety, gene safety, etc. I even think there are times when it makes sense to put a short-term moratorium on progress in an area in order to work out some safety issues—this has been done once or twice already in gene safety.
When I talk to XR folks, I sometimes get the impression that they want to flip it around, and consider all tech progress to be bad unless we can make an XR-based case that it should go forward. That takes me back to point (1).
What would moral/social progress actually look like?
There's this idea that it's more important to make progress in non-tech areas: epistemics, morality, coordination, insight, governance, whatever. I actually sort of agree with that, but I'm not at all sure that what I have in mind there corresponds to what EA/XR folks are thinking. Maybe this has been written up somewhere and I haven't found it yet?
Without understanding this, the position comes across as saying that tech progress should be put on indefinite hold until we somehow become better people and have thus sufficiently reduced XR—although it's unclear how we could ever reduce it enough, because of (1).
What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?
Returning to the road trip metaphor: while PS and EA/XR debate the ideal balance of resources towards steering vs. acceleration, and which is more neglected, there are other passengers in the car. Many are yelling to just slow down, and some are even saying to turn around and go backwards. A few, full of revolutionary zeal, are trying to jump up and seize the steering wheel in order to accomplish this, while others are trying to sabotage the car itself. Before PS and EA/XR even resolve our debate, the car might be run off the road—either as an accident caused by fighting groups, or on purpose.
This seems like a problem to me, especially in the context of (3): I don't know how we make social progress when this is what we have to work with. So a big part of progress studies is just trying to educate more people that the car is valuable and that forward is actually where we want to go. (But I don't get the sense that anyone in EA/XR sees it this way or is sympathetic to this line of reasoning; I've never heard them discuss this faction of humanity at all or recognize it as a problem.)
Thank you all for your input here! I hope that understanding these issues better will help me finally answer @Benjamin_Todd's question, which I am long overdue on addressing.
Thanks for writing this post! I'm a fan of your work and am excited for this discussion.
Here's how I think about costs vs benefits:
I think an existential catastrophe would be at least 1000x as bad as a GCR that was guaranteed not to turn into an x-risk. The future is very long, and humanity seems able to achieve a very good one, but it currently looks very vulnerable to me.
I think I can have a tractable impact on reducing that vulnerability. It doesn't seem to me that the expected value of my impact on human progress would match the expected value of my chance of helping to save that future. Obviously that needs some fleshing out: what is my impact on x-risk, what is my impact on progress, how likely am I to have those impacts, and so on. But that's the structure of how I think about it.
After initially worrying about Pascal's Mugging, I've come to believe that x-risk is in fact substantially more likely than 1 in several million, so whatever objections I might have to Pascal's Mugging don't really apply.
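To make that structure concrete, here's a minimal sketch of the comparison. Every number is a made-up placeholder, chosen only to show the shape of the calculation, not to estimate anything:

```python
# A sketch of the cost-benefit structure above. All numbers are
# hypothetical placeholders, not estimates.

value_of_future = 1.0                # normalize the value of a good long-run future to 1
my_xrisk_reduction = 1e-7            # hypothetical: probability my work averts an existential catastrophe
my_progress_contribution = 1e-10     # hypothetical: my marginal contribution to progress,
                                     # as a fraction of the long-run future's value

ev_xrisk_work = my_xrisk_reduction * value_of_future
ev_progress_work = my_progress_contribution * value_of_future

print(f"EV of my x-risk work:      {ev_xrisk_work:.2e}")
print(f"EV of my progress work:    {ev_progress_work:.2e}")
print(f"Ratio (x-risk : progress): {ev_xrisk_work / ev_progress_work:.0f}")
```

The whole argument turns on the relative sizes of those two deltas and how confident you can be about each, which is exactly the fleshing-out I mentioned above.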
How I think about tech progress:
From an x-risk perspective, I'm pretty ambivalent about tech progress. I've heard arguments that it's good and arguments that it's bad, but mostly I think its effect on the margin isn't predictably large.
But while I care a lot about x-risk reduction, I have different world-views that I put substantial credence in as well. And basically all of those other world-views care a whole lot about human progress. So while I don't view human progress as the cause of my life the way I do x-risk reduction, I'm strongly in favor of more of it.
Finally, as you can imagine from my last answer, I definitely have a lot of conversations where I try to convey my optimism about technology's ability to make lives better. And I think that's pretty common — your blog is well-read in my circles.
By that token, most particular scientific experiments or contributions to political efforts are individually unlikely to make the counterfactual difference. For example, if there is a referendum to pass a pro-innovation regulatory reform and science funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, although the expected value and average returns could be high, and the collective effort has a large chance of success.