David Mathers🔸

Is there actually an official IPCC position on how likely degrowth from climate impacts is? I had a vague sense that they were projecting higher world GDP in 2100 than now, but when I spent 15 minutes or so trying to find evidence of this, I couldn't actually find any. (I'm aware that even if that is the official IPCC best-guess position, it does not necessarily mean that climate experts are less worried about X-risk from climate than AI experts are about X-risk from AI.)

The global warming case is interesting to me because my sense is that Ord and MacAskill think of themselves as relying on expert consensus and the published literature, rather than as having somehow outsmarted it. So why do they and the author disagree about what that literature shows?

Interesting: the paper predates Thorstad's blog posts, but it could still be that people are treating it as "the answer".

I do think one issue people may be underrating is that we might just not bother with space colonization if the distances and costs mean that no one on Earth will ever see significant material gain from it.

I think that, given a few generations of expansion to different stars in all directions, it is not implausible (i.e. at least a 25% chance) that X-risk becomes extremely low: say, under 1 in 100,000 per century once there are 60 colonies with expansion plans, and far lower once there are 1,000. After all, we've already survived a million years, most X-risks other than AI seem to apply mainly to single-planet civilizations, and the lightspeed barrier makes it hard for a risk to reach everywhere at once. But I think I agree that thinking through this stuff is very, very hard, and I'm sympathetic to David Thorstad's claim that if we keep finding ways current estimates of the value of X-risk reduction could be wildly wrong, at some point we should just lose trust in those estimates (see here for Thorstad making the claim: https://reflectivealtruism.com/2023/11/03/mistakes-in-the-moral-mathematics-of-existential-risk-part-5-implications/), though I am much less confident than Thorstad that very low per-year future risk is an "extreme" assumption.

It is disturbing to me how much Thorstad's work on this stuff seems to have been ignored by leading orgs; it is very serious work criticizing key assumptions on which they base their decisions, even if I personally think he tends to push points in his favour a bit far. I assume the same neglect applies to the Rethink report you cite, although, unlike Thorstad's short blog posts, it is long and complicated enough that I haven't read any of it.

What are the "extreme beliefs" you have in mind? 

"More generally, I am very skeptical of arguments of the form "We must ignore X, because otherwise Y would be bad". Maybe Y is bad! What gives you the confidence that Y is good? If you have some strong argument that Y is good, why can't that argument outweigh X, rather than forcing us to simply close our eyes and pretend X doesn't exist?"

This is very difficult philosophical territory, but I guess my instinct is to draw a distinction between:

a) Ignoring new evidence about what properties something has, because that would overturn your prior moral evaluation of that thing.

b) Deciding that well-known properties of a thing don't contribute towards it being bad enough to overturn the standard evaluation of it, because you are committed to the standard moral evaluation. (Unlike a), this doesn't involve inferring that something has particular non-moral properties from the claim that it is morally good/bad.)

a) always feels dodgy to me, but b) seems like the kind of thing that could be right, depending on how much you should trust judgments about individual cases versus judgments about abstract moral principles. And I think I was only doing b) here, not a).

Having said that, I remember a conversation I had in grad school with a faculty member, probably much better at philosophy than me, who claimed that even a) is only automatically bad if you assume moral anti-realism.

"Who framed it in terms of individual rights?"

Nuno did. I'm not criticizing you or suggesting this legislation is anything other than bad.

One reason to be suspicious of taking lost potential lives into account here is that if you always do so, it looks like you might get a general argument that development is bad. Rich countries have low fertility compared to poor countries, so anything that helps poor countries develop is likely to prevent some people from being born. But it seems pretty strange to think we should wait until we find out how much development reduces fertility before we can decide whether it is good or bad.
