This is a special post for quick takes by Technoliberal. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I wanted to make this poll to see how the community views the speed/x-risk tradeoff. I'm personally 99% x-risk and 1% speed, so I would hard agree. My prediction is most people will agree, maybe a 70/30 split, but I'm curious to see.

Craig Green 🔸
100% agree ➔ 50% disagree

Initially I just calculated a naive expected value and put 100% agree, but then I realized that I don't value realizing potential lives nearly as much as I value improving existing ones. While I do value realizing potential lives, their loss is not experienced by anyone other than present-day people like me who think about them abstractly, and that seems, in sum, less bad than the suffering that technological progress could otherwise avert in the next 100 years. But I obviously haven't thought about this enough, or I wouldn't have made my initial mistake.

One problem with my revised answer is that I still didn't actually do the math. Taking an existential event as literally causing the end of earth-originating life, the question is whether the difference in probability multiplied by the immediate mass extinction itself would represent more death and suffering than the avertible death and suffering occurring over a 100-year period. I just don't know. It seems unlikely that the avertible death and suffering amounts to as much as the mass-extinction event itself would cause, but after multiplying by the difference in probability and acknowledging the ambiguity of the timeline proposed in this question, things become less clear. However, suppose the probability-adjusted, undetermined-timing mass-extinction event does cause more suffering and death, and I change my answer to 50% agree. I don't think this is what most people would interpret 50% agree to express.
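To make the structure of that comparison explicit, here is a toy calculation. Every number in it (the population, the size of the risk reduction, the avertible deaths per year) is a placeholder I'm making up purely for illustration, not an estimate:

```python
# Toy comparison of the two quantities described above.
# All figures are placeholders, not real estimates.

world_population = 8e9           # people alive today (assumed)
risk_reduction = 0.40            # e.g. P(extinction) falling from 50% to 10% (assumed)
expected_extinction_deaths_averted = risk_reduction * world_population

# Death that faster technological progress might otherwise avert
# over the next 100 years (placeholder figure).
annual_avertible_deaths = 10e6
avertible_over_century = 100 * annual_avertible_deaths

print(f"Expected extinction deaths averted: {expected_extinction_deaths_averted:.2e}")
print(f"Avertible deaths over 100 years:    {avertible_over_century:.2e}")
print("Delay looks better on these numbers"
      if expected_extinction_deaths_averted > avertible_over_century
      else "Speed looks better on these numbers")
```

Of course, the whole point of my revised answer is that I don't weight these two quantities equally, so the raw comparison is only part of the picture.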

I should also be clear that I'm taking the question to mean literally ending earth-originating life in more or less one fell swoop. Obviously, traditional x-risks actually have a spectrum of severity, so this is not so straightforward to apply to real-world resource allocation.

If I had to be more specific, by "significantly reduce existential risk" I would mean "reducing the probability of all humanity (and only humanity) dying within a few short days or weeks from 50% to 10%".

Also, I disagree with your methods. X risks aren't especially bad because of all the utility lost (and "negative utility" created); they're bad because after they happen there's never any utility again. Unless apes re-evolve into humans and reestablish all of civilization, but we're getting too hypothetical. What's 100, or even 1,000, years of death and suffering compared to 10,000 years of utopia? If stalling or slowing technological progress for 1,000 years took P(Doom) from 50% to 1%, I would definitely take it. Unless of course you think utopia is going to be some short-lived thing, but I seriously doubt that.
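To show why the delay wins under my framing (extinction forecloses all future utility), here's a toy expected-value sketch. The durations and per-year values are placeholders I'm assuming for illustration, not claims:

```python
# Toy expected-value comparison under the "extinction ends all future
# utility" framing. All numbers are assumptions for illustration.

utopia_years = 10_000            # assumed duration of the good future
delay_years = 1_000              # years of continued death and suffering
value_per_utopia_year = 1.0      # normalize one utopia-year to 1 unit
cost_per_delay_year = 0.5        # assumed disvalue of each delayed year

def expected_future_value(p_doom: float) -> float:
    """Expected future value given a probability of permanent extinction."""
    return (1 - p_doom) * utopia_years * value_per_utopia_year

no_delay = expected_future_value(0.50)
with_delay = expected_future_value(0.01) - delay_years * cost_per_delay_year

print(f"No delay:   {no_delay:.0f}")    # 5000 on these numbers
print(f"With delay: {with_delay:.0f}")  # 9400 on these numbers
```

On these (made-up) numbers the delay wins easily, and it keeps winning unless you think the post-risk future is short or not that valuable.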

You're right that we disagree, but I don't think you understand my view (and to be clear, reasonable people can disagree about this).

My wife and I are debating whether we will have more children. Having another child is desirable to us, so much so that she's willing to undergo the relatively risky process of childbirth to have another one. However, failing to have another child is significantly less bad than losing one of our existing children, IMO. I'd even say that failing to have 100 more children is significantly less bad than losing one of our existing children. The reason is that the child who never existed is not sentient and so does not experience any deprivation. They do not suffer. And my suffering over that abstract loss is not nearly as bad as the suffering I would experience losing a living child whom I know.

Now you may disagree with that and mourn all the lost utility, and that is a reasonable perspective, but it's not mine. As you can see, this is a deeper philosophical difference, not some sort of misunderstanding about expected utility.

FYI, about this sentence: "X risks aren't especially bad because of all the utility lost ... they're bad because after they happen there's never any utility again." I don't really see a difference between these two statements.

I agree with Craig here. In this sequence I've written about problems with most conceptions of utility people use, and I describe alternatives that I think better match what Craig is saying.

John Salter
60% disagree

I would be willing to delay technological innovation by up to 100 years to significantly reduce existential risk

I think the question is too imprecisely phrased to be answered precisely. When would the delay start? Over what time period would it be felt? (e.g. a 100% delay for 100 years is very different from a 1% delay over 10,000 years)

I'm thus giving a directional answer, assuming we're talking about whether seeking to dramatically reduce technological progress in exchange for safety is a feasible way to make the world a better place. I don't think it is, but I'm not sure.

My biggest gripe is that any attempt to reduce technological innovation dramatically would entail a bunch of side effects that would degrade the quality of existence (e.g. requiring authoritarianism; moving power from cooperators to defectors and toward people skilled at deception; incentivising fighting for a larger slice of the pie instead of expanding it, since expanding it is far harder without improved technology).

I can’t respond because I don’t know what “significantly reduce” means. 0.01%? 10%?

I would imagine "significantly reducing" as going from 50% to 10%, but I should have been clearer.

Technoliberal
100% agree

Wrote a post about it, but the TL;DR is that extinction is THE worst-case scenario. It is the end of all utility and completely irreversible, whereas progress can always be made at a later date.

S risks are a thing. There exist fates worse than death.

That's fair, but I imagine X risks and S risks are very heavily correlated. Especially with regard to "speed of progress": accelerationism will, in my view, obviously increase X risks (safety research takes time; the more time you have, the more research gets done, and the lower the risk) but also increase S risks (this is more personal opinion, but I don't think the current leaders of AI innovation have things like animal welfare in mind; if we just keep chugging along, the first ASI might not care about animals at all).

dan.pandori
90% agree

'significantly reduce' could mean a lot of things. I'm answering as if this reduces absolute X-risk by 20% or more over the next 10 centuries.
