David Mathers🔸

Yeah, I think I recall David Thorstad complaining that Ord's estimate was far too high also.

Be careful not to conflate "existential risk" in the special Bostrom-derived definition that I think Ord, and probably Will as well, are using with "extinction risk", though. X-risk from climate *can* be far higher than extinction risk, because regressing to a pre-industrial state and then failing to reindustrialise (perhaps because easily accessible coal has been used up) counts as an existential catastrophe, even though it doesn't involve literal extinction. (Though from memory, I think Ord is quite dismissive of the possibility that there won't be enough accessible coal to reindustrialise, but I think Will is a bit more concerned about this?)

Is there actually an official IPCC position on how likely degrowth from climate impacts is? I had a vague sense that they were projecting a higher world GDP in 2100 than now, but when I tried to find evidence of this for 15 minutes or so, I couldn't actually find any. (I'm aware that even if that is the official IPCC best-guess position, it does not necessarily mean that climate experts are less worried about X-risk from climate than AI experts are about X-risk from AI.)

The global warming thing is interesting to me, because my sense is that Ord and MacAskill think of themselves as relying on expert consensus and published literature, rather than as having somehow outsmarted it. So why do they and the author differ on what that literature shows?

Interesting: the paper is older than Thorstad's blog posts, but it could still be that people are thinking of it as "the answer".

I do think one issue people may be underrating is that we might just not bother with space colonization if the distances and costs mean that no one on Earth will ever see significant material gain from it.

I think that given a few generations of expansion to different stars in all directions, it is not implausible (i.e. at least a 25% chance) that X-risk becomes extremely low (i.e. under 1 in 100,000 per century once there are, say, 60 colonies with expansion plans, and a lot less once there are 1,000 colonies). After all, we've already survived a million years, most X-risks other than AI seem to apply mainly to single-planet civilizations, and the lightspeed barrier makes it hard for a risk to reach everywhere at once. But I think I agree that thinking through this stuff is very, very hard, and I'm sympathetic to David Thorstad's claim that if we keep finding ways current estimates of the value of X-risk reduction could be wildly wrong, at some point we should just lose trust in current estimates (see here for Thorstad making the claim: https://reflectivealtruism.com/2023/11/03/mistakes-in-the-moral-mathematics-of-existential-risk-part-5-implications/), even though I am a lot less confident than Thorstad is that very low future per-year risk is an "extreme" assumption.
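(Just to make the arithmetic behind "extremely low" explicit, treating the per-century risk as constant and independent purely for the sake of illustration:

$$P(\text{survive } N \text{ centuries}) = (1 - r)^N, \quad r = 10^{-5},\ N = 10{,}000 \;\Rightarrow\; (1 - 10^{-5})^{10{,}000} \approx e^{-0.1} \approx 0.90$$

i.e. at that level of risk a civilization would have roughly a 90% chance of lasting another million years, which is why so much turns on whether the risk really can stay that low.)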

It is disturbing to me how much Thorstad's work on this stuff seems to have been ignored by leading orgs; it is very serious work criticizing key assumptions that they base their decisions on, even if I personally think he tends to push points in his favour a bit far. I assume the same is true for the Rethink report you cite, although it is long and complicated enough, unlike Thorstad's short blog posts, that I haven't read any of it. 

What are the "extreme beliefs" you have in mind? 

"More generally, I am very skeptical of arguments of the form "We must ignore X, because otherwise Y would be bad". Maybe Y is bad! What gives you the confidence that Y is good? If you have some strong argument that Y is good, why can't that argument outweigh X, rather than forcing us to simply close our eyes and pretend X doesn't exist?"

This is very difficult philosophical territory, but I guess my instinct is to draw a distinction between:

a) ignoring new evidence about what non-moral properties something has, because accepting it would overturn your prior moral evaluation of that thing.

b) deciding that well-known properties of a thing don't contribute towards it being bad enough to overturn the standard moral evaluation of it, because you are committed to that standard evaluation. (Unlike a), this doesn't involve inferring that something has particular non-moral properties from the claim that it is morally good/bad.)

a) always feels dodgy to me, but b) seems like the kind of thing that could be right, depending on how much you should trust judgements about individual cases versus judgements about abstract moral principles. And I think I was only doing b) here, not a).

Having said that, I remember a conversation I had in grad school in which a faculty member, who was probably much better at philosophy than me, claimed that even a) is only automatically bad if you assume moral anti-realism.

"Who framed it in terms of individual rights?"

Nuno did. I'm not criticizing you, or suggesting this legislation is anything other than bad.
