EuanMcLean

Bio

Ex-particle physicist and AI safety guy.

Now I run the Integral Altruism Network: integralaltruism.com/

Sequences (1)

Big Picture AI Safety

Comments (4)

Topic contributions (2)

Obviously a line-by-line response to your line-by-line response to my line-by-line response to your article would be somewhat over the top. So I'll refrain!

Yes, we could waste our lives falling down a hole here.

> But then you also respond to most (though not all) of my points without actually giving a counter-argument, just claiming that I'm clearly mistaken.

Huh. I must have messed up the tone of my last message because that wasn't the intention at all. For some of my responses I thought I was basically agreeing with you, and others I was clarifying what I (or rather Diego) was trying to say rather than saying you are wrong.

[This comment is no longer endorsed by its author]

Thanks for your comment, Thom.

In my response to this, I really want to avoid the vibe of “you just don’t understand” or “you need to wake up” and all that kind of thing. I know how annoying and unproductive that kind of response can be. In what I write below I’m not trying to assert that my position is obviously more right, if only you could see through the matrix. I’m interested in clarifying the metacrisis position and how metacrisis folk think; maybe if it becomes clear enough, it will no longer seem obviously stupid to you!

That being said, the standard metacrisis-y response to most of what you’ve said here would be to try to pick apart the (modernist-y) assumptions that might lie underneath it. Being contextualizing isn’t just about awareness of the context of the system you’re studying, but also the context of your own mind/rationality/beliefs – all the quite deep assumptions driving those beliefs that work well in many contexts (like building bridges, training LLMs, or saving kids from malaria) but might not generalize as well as you think to claims about a rapidly evolving and highly interconnected world.

I’ll pick out a couple of examples from your comment:

> Modernity has led to the mental health crisis: I’m just not sure this empirically stacks up. It is really hard to measure mental health over time, given that its measurement is so culturally contingent.

I want to pick on this because I think it’s a good example of what the vibey crowd like to call “scientism”, which in my head means something like the view that “the only way to know something is true is if it’s been published in a peer-reviewed journal”. That is obviously one of the most reliable ways to know something is true, but relying on it exclusively limits your toolkit for making sense of the world.

In the case of modernity leading to a mental health crisis: yes, you’re right that it’s a very hard thing to measure, which is why studies haven’t found a clear signal. But when you include lived experience… I don’t know, man, it just feels so true, at least in some sense, from my own life and the lives of so many around me. For example, there’s no study signal showing that social media can mess up teenagers’ mental health, but this exchange we’re having in the comments of the EA Forum is making my muscles tense and my mind race, and when I extrapolate this to being a teenager dealing with high-stakes things like how I look or who fancies me, that’s pretty good evidence for the causation in my book.

Sometimes I think of EA as the “analytic philosophy of changemaking” and metacrisis as the “continental philosophy of changemaking”: in analytic philosophy something is true because you can prove it to be true, while in continental philosophy something is true because it feels true. We need both.

> Growth has historically been the single biggest driver of human wellbeing.

So… how are you defining human wellbeing here? In terms of stuff you can measure (life expectancy, economic prosperity, etc.), yeah, no argument, you’re right. But what about all the other things that contribute to a broader definition of wellbeing – community, meaning, connection to nature, and so on? Wellbeing is a complex beast. I don’t have the arguments or the data to say you’re wrong, but you’re stating this without any argument, as if it’s obviously right.

You may well be right even in the broader sense of the word wellbeing, but a metacrisis person might also say that once you’ve optimised hard enough for economic growth, Goodhart’s law bites you in the arse and growth starts to decorrelate from wellbeing. Some might argue that this is already starting (see, for example, the Scandinavian countries scoring best on various happiness metrics while having smaller economies than, say, the US).

To reiterate something I said in the post, I’m aware that if you’re constantly questioning everything and constantly holding the ambiguity of every term you use in your attention, you’ll never get anything done. Getting things done is good. But if you commit to some metric and never revisit the assumptions behind it, you might find yourself getting the wrong things done once your optimisation pushes the world out of the regime in which your metric makes sense.

> The idea that rivalry (caused by human nature) is a background assumption and not necessarily the case: the point here surely is that, yes, of course humans can be more or less cooperative at different times and given different cultural assumptions, but this kind of game theory describes dynamics that are independent of how most people behave.

This feels like a statement about the strength and generality of game theory as applied to humans at various scales. The metacrisis nerds would probably poke at how much confidence you can have that game theory supports the claim that rivalry will always arise. That kind of background acceptance of game theory has a modernist vibe.

Anyway, I don’t expect I’ve changed your mind on any of this, which is fine! Even if we don’t agree, it’s good to understand each other’s positions more deeply. Ok bye

I just finished this qualitative survey of AI safety experts; I think it might be a useful resource for people just starting their careers in AI safety! https://www.lesswrong.com/s/xCmj2w2ZrcwxdH9z3