I'm trying to get to the crux of the differences between the progress studies (PS) and the EA / existential risk (XR) communities. I'd love input from you all on my questions below.
The road trip metaphor
Let me set up a metaphor to frame the issue:
Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/XR agree that the trip is good, and that as long as we don't crash, faster would be better. But:
- XR thinks that the car is out of control and that we need a better grip on the steering wheel. We should not accelerate until we can steer better, and maybe we should even slow down in order to avoid crashing.
- PS thinks we're already slowing down, and so wants to put significant attention into re-accelerating. Sure, we probably need better steering too, but that's secondary.
(See also @Max_Daniel's recent post)
My questions
Here are some things I don't really understand about the XR position (granted, I haven't read the literature on it extensively yet, but I have read a number of the foundational papers).
(Edit for clarity: these questions are not proposed as cruxes. They are just questions I am unclear on, related to my attempt to find the crux)
How does XR weigh costs and benefits?
Is there any cost that is too high to pay, for any level of XR reduction? Are they willing to significantly increase global catastrophic risk—one notch down from XR in Bostrom's hierarchy—in order to decrease XR? I do get that impression. They seem to talk about any catastrophe less than full human extinction as, well, not that big a deal.
For instance, suppose that if we accelerate progress, we can end poverty (by whatever standard) one century earlier than otherwise. In that case, failing to accelerate imposes an extra century of poverty, which in itself should be counted as a global catastrophic risk, or close to it. If you're willing to accept GCR in order to slightly reduce XR, then OK, but it feels to me that you've fallen for a Pascal's Mugging.
Eliezer has specifically said that he doesn't accept Pascal's Mugging arguments in the x-risk context, and Holden Karnofsky has indicated the same. But the counterarguments to the mugging charge that I've seen only conclude “so AI safety (or some other specific x-risk work) is still a worthy cause”, which I'm fine with. I don't see how you get from there to “so we shouldn't try to speed up technological progress.”
Does XR consider tech progress default-good or default-bad?
My take is that tech progress is default good, but we should be watchful for bad consequences and address specific risks. I think it makes sense to pursue specific projects that might increase AI safety, gene safety, etc. I even think there are times when it makes sense to put a short-term moratorium on progress in an area in order to work out some safety issues—this has been done once or twice already in gene safety.
When I talk to XR folks, I sometimes get the impression that they want to flip it around, and consider all tech progress to be bad unless we can make an XR-based case that it should go forward. That takes me back to point (1).
What would moral/social progress actually look like?
There's this idea that it's more important to make progress in non-tech areas: epistemics, morality, coordination, insight, governance, whatever. I actually sort of agree with that, but I'm not at all sure that what I have in mind there corresponds to what EA/XR folks are thinking. Maybe this has been written up somewhere, and I haven't found it yet?
Without understanding this, the XR position comes across as putting tech progress on indefinite hold until we somehow become better people and thus sufficiently reduce XR, though it's unclear how we could ever reduce it enough, because of (1).
What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?
Returning to the road trip metaphor: while PS and EA/XR debate the ideal balance of resources towards steering vs. acceleration, and which is more neglected, there are other passengers in the car. Many are yelling to just slow down, and some are even saying to turn around and go backwards. A few, full of revolutionary zeal, are trying to jump up and seize the steering wheel in order to accomplish this, while others are trying to sabotage the car itself. Before PS and EA/XR even resolve our debate, the car might be run off the road—either as an accident caused by fighting groups, or on purpose.
This seems like a problem to me, especially in the context of (3): I don't know how we make social progress when this is what we have to work with. So a big part of progress studies is simply trying to educate more people that the car is valuable and that forward is actually where we want to go. (But I don't think anyone in EA/XR sees it this way or is sympathetic to this line of reasoning; at least, I've never heard them discuss this faction of humanity or recognize it as a problem.)
Thank you all for your input here! I hope that understanding these issues better will help me finally answer @Benjamin_Todd's question, which I am long overdue in addressing.
(Context note: I read this post and all the comments, then Ben Todd's question on your AMA, then your Progress Studies as Moral Imperative post. I don't really know anything about Progress Studies beyond this context, but I'll offer my thoughts below in the hope that they help with identifying the crux.)
None of the comments so far have engaged with your road trip metaphor, so I'll bite:
In your Progress Studies as Moral Imperative post, it sounds like you're concerned that humanity might slow the car down, stop, and just stay there indefinitely, or something like that, due to a lack of appreciation or respect for progress. Is that right?
Personally I think that sounds very unlikely and I don't feel concerned at all about that. I think nearly all other longtermists would probably agree.
The first thing your Moral Imperative post made me think of was Factfulness by Rosling et al. Before reading the book in 2019, I had often heard the idea that, roughly, "people don't know how much progress we've made lately." I felt like I had heard several people say this over a few years without ever actually encountering the people who were ignorant of the progress.
At the beginning of Factfulness, Rosling describes how a group of educated people on a UN council (or something like that) were ignorant of basic facts about humanity's progress in recent decades. I defer to his claim, and yours, that people who are ignorant of the progress we've made do exist.
That said, when I took the pre-test quiz at the beginning of the book about the progress we've made, I got all of his questions right, and I was quite confident in essentially all of the answers. I recall thinking that other people I know (in the EA community, for example) would probably also get all the questions correct, despite the poor performance on the same quiz by world leaders and other audiences Rosling spoke to over the years.
I say all this to suggest that maybe Progress Studies people are reactionary to some degree and longtermists (what you're calling "EA/XR" people) aren't? Maybe PS people are used to seeing many people in society, including some educated and tech people, who are ignorant of progress or opposed to it, while EA people have encountered less of this or just don't feel the need to react to such people. Could this be a crux? Longtermists just aren't very concerned that we're going to stop progressing (besides potentially crashing the car, i.e. existential or global catastrophic risk), whereas Progress Studies people are more likely to think that progress is slowing and coming to a stop?