
freest one

11 karma · Joined

Comments (5)

Thank you! I thought I had read fairly carefully, but I missed that footnote.

This is a great post on a highly important topic, perhaps the most important one.

One blind spot I have to point out (and I'm not trying to be edgy here or anything) is that a longtermist perspective itself clearly has the potential to spawn ideological fanaticism. This is not because any longtermists today are fanatics. It's because anyone with a certain disposition towards dogmatic moral and epistemological certainty will do the math and see that virtually infinite long-term ends justify any means now. There are arguably already cases of this, which will be known to anyone in the community, and there is debate over whether those involved were really longtermists or really rationalists, etc.

Going forward, longtermism will need to deal very carefully with its abuse by fanatics, for many of the reasons catalogued in this post.

Thanks, I appreciate the comment. I hadn’t seen your piece; it’s great. The difficulty of gene/brain alignment is a good analogy for how unlikely human/AI alignment is on a first try. And I share your scepticism about humans having some general utility function.

Thanks for the comment. Yes, I’m still uncertain about the mechanism of self-reproduction in future AIs… with humans, it’s certainly possible to decouple sex from reproduction, but if a large enough proportion of people do that, then we will assuredly start to disappear.

I think Tegmark is still too optimistic. The arguments against nuclear war happening are typically very weak (variations of "it hasn't happened yet", "people believe in MAD", and "leaders are rational"). And even when pundits have considered the risks higher (e.g., during the Cuban missile crisis), their actions have not reflected this at all. We should take this as a signal of massive status quo bias and denial.
