MountainPath

Yes, this was the biggest reason why I was considering exiting AI safety. I grappled with this question for multiple months. Complex cluelessness triggered a small identity crisis for me, haha.

"If you can't predict the second and third order effects of your actions, what is the point of trying to do good in the first place?" Open Phil funding OpenAI is a classical example here.

But here is why I am still going:

I'm doing no one a favour by concluding that the risk it's simply not tractable is too high and therefore not doing it at all. AGI is still going to happen. It will still be shaped by a relatively small number of people, who will, on average, both care less about humanity and have thought less rigorously about what's most tractable. So I'm not really doing anyone a favour by dropping out.

More concretely:

Even if object-level actions are not tractable, the EV of doing meta-research still seems to significantly outweigh that of other cause areas. Positively steering the singularity remains, for me, the most important challenge of our time (assuming one subscribes to longtermism and acknowledges both the vast potential of the future and the severity of s-risks).

Even if we live in a world where there is a 99% chance of being entirely clueless about effective actions and only a 1% chance of identifying a few robust strategies, it is still highly worthwhile to focus on meta-research aimed at discovering those strategies.
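To make that back-of-envelope explicit (the 1% and the value V below are purely illustrative placeholders, not estimates): if meta-research has roughly a 1% chance of uncovering a robust strategy worth V, and contributes roughly nothing otherwise, its expected value is

$$\mathbb{E}[\text{meta-research}] \approx 0.99 \cdot 0 + 0.01 \cdot V = 0.01\,V,$$

which can still dominate more tractable cause areas whenever V is astronomically large, as longtermist estimates of the future's value suggest.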

The strongest reason for pausing and for AI safety that I can think of: in order to build a truth-seeking superintelligence, one that does not merely maximise paperclips but actually tries to understand the nature of the universe, you need to align it to that goal. We have not yet accomplished this, or figured out how to do so. Hence, regardless of whether you believe in the inherent value of humanity, AI safety is still important, and pausing probably is too. Otherwise we won’t be able to create a truth-seeking ASI.