Arepo


Fwiw I commented on Thorstad's linkpost for the paper when he first posted about it here. My impression is that he's broadly sympathetic to my claim about multiplanetary resilience, but either doesn't believe we'll get that far or thinks that the AI counterconsideration dominates it.

In this light, I think the claim that an annual x-risk lower than 10^-9 is 'implausible' is much too strong if it's being used to undermine EV reasoning. Like I said - if we become interstellar and no universe-ending doomsday technologies exist, then multiplicativity of risk gets you there pretty fast. If each planet has, say, a 10^-5 annual chance of extinction, then n planets have a (10^-5)^n = 10^(-5n) chance of all independently going extinct in a given year. For n=2 that's already one in ten billion.
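
A minimal sketch of that arithmetic, assuming fully independent settlements and treating the 10^-5 per-planet figure as purely illustrative:

```python
# Sketch of the independence arithmetic: the 1e-5 per-planet annual extinction
# risk and the settlement counts are illustrative assumptions, not estimates.
per_planet_annual_risk = 1e-5

for n in (1, 2, 5, 10):
    # Probability that all n independent settlements go extinct in the same year.
    p_all_extinct = per_planet_annual_risk ** n
    print(f"n={n:2d}: {p_all_extinct:.0e} annual chance that every settlement goes extinct")

# n=2 already gives 1e-10, i.e. one in ten billion.
```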

Obviously there's a) a much higher chance that they could go extinct in different years, and b) some chance that they could all go extinct in any given period from non-independent events such as war. But even so, it's hard to believe that increasing n, say to double digits, doesn't rapidly outweigh such considerations, especially given that an advanced civilisation could probably create new self-sustaining settlements in a matter of years.

I feel it is highly speculative on the difficulties of making comebacks and on the likelihood of extreme climate change

I don't understand how you think climate change is more speculative than AI risk. I think it's reasonable to have higher credence in human extinction from AI, but those scenarios are entirely speculative, whereas extreme climate change is possible if a couple of parameters turn out to have been mismeasured.

As for the probability of making comebacks, I'd like to write a post about this, but the narrative goes something like this:

  • to 'flourish' (in an Ordian sense), we need to reach a state of sufficiently low x-risk
  • per the above, by far the most mathematically plausible way of doing this is just increasing our number of self-sustaining settlements
    • you could theoretically do it with an exceptionally stable political/social system, but I'm with Thorstad that the level of political stability this requires seems implausible
  • to reach that state, we have to develop advanced technologies - well beyond what we have now. So the question about 'comebacks' is misplaced - the question is about our prospect of getting from the beginning to (a good) end of at least one time of perils without a catastrophe
  • Dating our current time of perils to 1945, it looks like we're on course, barring global catastrophes, to develop a self-sustaining civilisation within maybe 120-200 years of that date
  • Suppose there's a constant annual probability k of a catastrophe that regresses us to pre-time-of-perils technology. Then our outlook in 1945 was, approximately, a (1-k)^160 chance of getting to a multiplanetary state. Since we've made it 80 years in, we now have a substantially better ~(1-k)^80.
  • If we restart from pre-1945 levels of technology, we will do so with at most 10% of the fossil fuel energy we had, as well as many other depleted resources (fertiliser, uranium, etc). This will slow any kind of reboot substantially. See e.g. comparisons of coal to photovoltaics here.
  • There's huge uncertainty here, but when you multiply out the friction from all the different depleted resources, I think progress the second time around will be, optimistically, half the speed, and pessimistically a tenth of the speed or worse. (Based on the above link, if photovoltaics were to entirely substitute for fossil fuels, that drag alone would be around a ~30/5.5 (≈5.5x) multiplier on the cost of generating energy, which seems like it could easily slow economic development by a comparable amount.)
  • That means in a reboot, we would have, optimistically, a (1-k)^320 chance of getting to a good outcome and, pessimistically, a (1-k)^1600 chance
  • During that reboot, we can expect the new civilisation to preferentially use up the most efficient resources just as we do (it doesn't have to destroy them, just move them to much higher entropy states, as with our current practice of flushing fertiliser into the ocean) - but it will have 2x, 10x, or however much longer in which to do so.
  • That means civilisation 3 would have as much of a disadvantage relative to civilisation 2 as civilisation 2 would have relative to us, giving it optimistically a (1-k)^640 chance of a good outcome and pessimistically a (1-k)^16000 chance.

If we plug in k=0.001, which seems to be a vaguely representative estimate among x-risk experts, then in 1945 we would have had an 85% chance, today we would have a 92% chance, after one backslide we would have optimistically 73%, pessimistically 20%, and after Backslide Two we would have optimistically 53%, pessimistically basically 0.
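
For transparency, here's that plug-in as a quick sketch - the exponents just restate the assumptions in the bullets above (160 years from 1945, 2x or 10x slower per reboot), and k = 0.001 is only a stand-in estimate:

```python
# Reproduces the rough figures above. k and the year counts are the
# illustrative assumptions from the bullet list, not precise estimates.
k = 0.001   # assumed constant annual risk of a catastrophe that forces a restart

def p_no_catastrophe(years: int) -> float:
    """Chance of getting through `years` years of the time of perils catastrophe-free."""
    return (1 - k) ** years

print(f"outlook in 1945 (160 years to go):    {p_no_catastrophe(160):.0%}")    # ~85%
print(f"outlook today (80 years to go):       {p_no_catastrophe(80):.0%}")     # ~92%
print(f"after one backslide, optimistic 2x:   {p_no_catastrophe(320):.0%}")    # ~73%
print(f"after one backslide, pessimistic 10x: {p_no_catastrophe(1600):.0%}")   # ~20%
print(f"after Backslide Two, optimistic:      {p_no_catastrophe(640):.0%}")    # ~53%
print(f"after Backslide Two, pessimistic:     {p_no_catastrophe(16000):.1%}")  # ~0%
```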

We can roughly convert these to units of 'extinction' by dividing the loss of probability by our baseline prospects. So dropping to a 53% chance would mean losing 32 percentage points from the 85% baseline, which is 32/85 ≈ 38% as bad in the long term as extinction.
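
Carrying the sketch one step further, with the same illustrative numbers (the 85% being the no-backslide baseline above):

```python
# Converting a backslide into 'fractions of an extinction': divide the loss of
# probability by the baseline. Numbers are the illustrative ones from above.
baseline = 0.85               # no-backslide chance of a good outcome (1945 outlook)
after_backslide_two = 0.53    # optimistic chance after Backslide Two

loss = baseline - after_backslide_two       # 0.32
fraction_of_extinction = loss / baseline    # ~0.38
print(f"Backslide Two (optimistic) ≈ {fraction_of_extinction:.0%} as bad as extinction")
```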

This is missing a lot of nuance, obviously, which I've written about in this sequence, so we certainly shouldn't take these numbers very seriously. But I think they paint a pretty reasonable overall picture of a 'minor' catastrophe being, in long-run expectation and aside from any short-term suffering or change in human morality, perhaps in the range of 15-75% as bad as extinction. There's lots of room for discussing particulars, but this isn't something we should dismiss on the grounds that extinction is 'much worse' - and in particular, it isn't so much less bad that we can in practice afford to ignore the relative probabilities of extinction vs a lesser global catastrophe.

Thanks for the write-up. I'm broadly sympathetic to a lot of these criticisms tbh, despite not being very left-leaning. A couple of the points you relate are, I think, importantly false:
 

(Thorstad's claim that) there’s no empirical basis for believing existential risk will drop to near-zero after our current, uniquely dangerous period before achieving long-term stability.

I don't know about 'empirical', but there's a simple mathematical basis for imagining it dropping to near zero in a sufficiently advanced future where we have multiple self-sustaining and hermetically independent settlements, e.g. (though not necessarily) on different planets. Then even if you assume disasters befalling one aren't independent, you have to believe they're extremely correlated for this not to net out to extremely high civilisational resilience as you get to double-digit settlements. That level of correlation is possible if it turns out to be possible to, e.g., trigger a false vacuum decay - in which case Thorstad is right - or if a hostile AGI could wipe out everything in its path - though that probability will surely either be realised or drop close to 0 within a few centuries.
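
To make the correlation point concrete, here's a toy common-shock sketch - the 10^-7 shared-catastrophe rate and 10^-5 per-settlement rate are made-up numbers for illustration, not estimates:

```python
# Toy model: an annual probability of a 'common shock' that destroys every
# settlement at once (e.g. vacuum decay or a hostile AGI), plus an independent
# per-settlement annual risk. All numbers are illustrative assumptions.
p_common_shock = 1e-7
p_independent = 1e-5

def p_all_settlements_lost(n: int) -> float:
    """Annual chance that all n settlements are lost, by common shock or independent failures."""
    return p_common_shock + (1 - p_common_shock) * p_independent ** n

for n in (1, 2, 10):
    print(f"n={n:2d}: {p_all_settlements_lost(n):.1e}")

# From n=2 onwards the total is dominated by the common shock, so unless that
# shock is itself likely, civilisational risk bottoms out at a very low level.
```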

If you accept the concept of Existential Risk and give them any credence, it logically follows that any such risk is much worse than any other horrible, terrible, undesirable one that does not lead to human extinction.

It doesn't, and I wish the EA movement would move away from this unestablished claim. Specifically, one must have some difference in credence between achieving whatever long-term future one desires given no 'minor' catastrophe and achieving it given at least one. That credence differential, taken as a fraction of the no-catastrophe credence, is to a first approximation how much of 'one extinction' your minor catastrophe is. Assuming we're reasonably ambitious in our long-term goals (e.g., per the above, developing a multiplanetary or interstellar civilisation), it seems crazy to me to suppose that fraction should be less than 1/10. I suspect it should be substantially higher, since on a restart we would have to survive the high risk of a second time of perils while proceeding to the safe end state much more slowly, given the depletion of fossil fuels and other key resources.
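
Put as a formula - this is my gloss, with the normalisation by the no-catastrophe credence borrowed from the rough conversion in my comment above - where F is whatever long-term future one is aiming for:

$$\text{badness of a minor catastrophe, in extinctions} \;\approx\; \frac{P(F \mid \text{no minor catastrophe}) - P(F \mid \text{at least one})}{P(F \mid \text{no minor catastrophe})}$$

With the illustrative reboot numbers from that comment (85% vs 53%), this comes out around 0.38 - well above 1/10.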

If we think a restart is at least 1/10 as bad as extinction, then we have to ask serious questions about whether it's at least 10x as likely. I think it's at least defensible to claim that e.g. extreme climate change is 10x as likely as an AI destroying literally all of humanity.

Hi Eitan, you're very welcome to! No need to book or anything - you can just show up, find a suitable area and run the event if you want, as long as it's comfortably under the space capacity (currently 60) :) 

If you want map-editing privileges, or just for me to show you around and give a few pointers, feel free to DM me on here - or just log onto the Gather Town and send me a message if I'm around, which I usually am. If I'm not physically present at the time it's still probably the fastest way to reach me.

There are many ways to reduce existential risk. I don't see any good reason to think that reducing small chances of extinction events is better EV than reducing higher chances of smaller catastrophes, or even just building human capacity in a preferentially non-destructive way. The arguments that we should focus on extinction have always boiled down to 'it's simpler to think about'.

It's still in use, but it has the basic problem of EA services: unless there's something to announce, there's not really any socially acceptable way of advertising it.

I was nodding along until I got to here:
 

 Some reduce the problem to AI-not-kill-everyone-ism, which seems straightforward enough and directed and the most robust source of value here,

By any normal definition of 'robust', I think this is the opposite of true. The arguments for AI extinction are highly speculative, but the arguments that increasingly versatile AI destabilises the global economy and/or military are far more credible. Many jobs already seem to have been lost to contemporary AI, and OpenAI has already signed a deal with autonomous-weapons manufacturer Anduril.

I think it's not hard to imagine worlds where even relatively minor societal catastrophes significantly increase existential risk, as I've written about elsewhere, and AI credibly (though I don't think obviously) makes these more likely. 

So while I certainly wouldn't advocate the EA movement pivoting toward soft AI risk or even giving up on extinction risk entirely, I don't see anything virtuous in leaning too heavily into the latter.

This philosophy seems at stark odds with 80k's recent hard shift into AI safety. The arguments for the latter, at least as an extinction risk, necessarily lack good evidence. If you're still reading this I'm curious whether you disagree with that assessment, or have shifted the view you espoused in the OP?

Have you checked out the EA Gather? It's been languishing a bit for want of more input from me, but I still find it a really pleasant place for coworking, and it's had several events run or part-run on there - though you'd have to check in with the organisers to see how successful they were.

Reading the Eliezer thread, I think I agree with him that there's no obvious financial gain for you if you hard-lock the money you'd have to pay back. 

I don't follow this comment. You're saying Vasco gives you X now, with 2X to be paid back after k years. You plan to spend X/2 now and lock up X/2, but somehow borrow 3X/2 now, such that you can pay the full amount back in k years? I'm presumably misunderstanding - I don't see why you'd make the bet now if you could just borrow that much, or why anyone would be willing to lend to you on the strength of money that you were legally/technologically committed to giving away in k years.

One version that makes more sense to me is planning to pay back in installments, on the understanding that you'd be making enough money to do so at the agreed rate - though a) that comes with obviously increased counterparty risk, and b) it still doesn't make much sense if your moneymaking strategy is investing money you already have rather than selling services/labour, since, again, it seems irrational for you to have any money at the end of the k-year period.
