
Nick K.

92 karma · Joined Apr 2023

Comments (20)

You don't need to be an extreme longtermist to be sceptical about AI; it suffices to care about the next generation and not to want extreme levels of change. I think looking too much into differing morals is the wrong lens here.

The most obvious explanation for the difference between Altman and people more concerned about AI safety (not specifically EAs) seems to be their estimates of how likely AI risk is compared to other risks.

That being said, the point that it's disingenuous to ascribe cognitive bias to Altman for holding whatever opinion he has is a fair one, and in view of general discourse norms one shouldn't go too far with it. Still, given Altman's exceptional capability for unilateral action due to his position, it's reasonable to be at least somewhat concerned.

I realize that my question sounded rhetorical, but I'm actually interested in your sources or reasons for your impression. I certainly don't have a good idea of the general opinion, and the media I consume is biased towards what I consider reasonable takes. That being said, I haven't encountered the position you're concerned about very much and would be interested to hear where you did. Regarding this forum, I imagine one could read that into some answers, but overall I don't get the impression that the AI CEOs are seen as big safety proponents.

Who is considering Altman and Hassabis thought leaders in AI safety? I wouldn't even consider Altman a thought leader in AI - his extraordinary skill seems mostly social and organizational. There's maybe an argument for Amodei, as Anthropic is currently the only one of the companies whose commitment to safety over scaling is at least reasonably plausible.

Noted! The key point I was trying to make is that I'd find it helpful for the discourse to separate 1) how one would act within a given frame and 2) why one thinks each frame is more or less likely (which is more contentious and easily gets a bit political). Since your post aims at the former, and the latter has been discussed at more length elsewhere, it would make sense to further de-emphasize the latter.

May I ask what your feelings on a pause were beforehand?

I like your proposed third frame as a somewhat hopeful vision for the future. Instead of pointing out why you think the other frames are poor, I think it would be helpful to maintain a more neutral approach: elaborate on which assumptions each frame makes, and link to your discussion of those assumptions in a sidenote.

I'm just noting that you are assuming that we have many robustly aligned AIs, in which case I agree that takeover seems less likely.

Absent this assumption, I don't think that "AIs will form a natural, unified coalition" is the necessary outcome, but it seems plausible that the other outcomes would look functionally the same for us.

Again, this is just one salient example, but: do you find it unrealistic that top human-level persuasion skills (think Mao, Sam Altman, or FDR, depending on the audience) together with a million times ordinary communication bandwidth (i.e. carrying on that many conversations at once) would enable you to take over the world? Or would you argue that AI is never going to get to that level?

I agree that this would be interesting to explore, but I strongly disagree that having a detailed answer to it substantially influences the prediction of x-risk.

That's fair enough, and levels of background understanding vary (I don't have a relevant PhD either), but then the criticism should be about this point being easily misunderstood, rather than making a big deal of the strawman position being factually wrong. Framed that way, it would also be much more constructive than adversarial criticism.
