
Nate Sharpe

Mechanical Engineer

Comments (5)

I agree that it seems like a valuable framing; thanks, Matthew.

Thanks for the comments, Holly! Two follow-ups:

  1. The PauseAI website says "Individual countries can and should implement this measure right now." Doesn't that mean they're advocating for unilateral pausing, regardless of other actors' choices, even if the high-level/ideal goal is a global pause?
  2. Even if all of the important decision makers (globally) agreed on the premise that powerful AI/AGI/ASI is too risky, I think there would still be a discussion around aligning on how close we are, how close we should be when we pause, how to enforce a pause, and when it would be safe to un-pause. But to even get to that point, you need to convince those people of the premise, so it seems premature to me to focus the messaging on the pause itself when the underlying reasons for it aren't agreed upon. The message would be something more like "We should pause global AI development at some point/soon/now, but only if everyone stops at once, because if we get AGI right now we're probably doomed, because [insert arguments here]". But it sounds like you think that messaging is perhaps too complex, and that it's beneficial to simplify it to just "Pause NOW"?

Thanks for the thoughtful feedback, Carl; I appreciate it. This is one of my first posts here, so I'm unsure of the norms - is it acceptable/preferred for me to edit the post to add that point to the bulleted list in that section (and if so, do I add an "edited to add" tag or similar), or should I just leave the clarification to the comments?

I hope the bulk of the post made it clear that I agree with what you're saying - a pause is only useful if it's universal, and so what we need to do first is get universal agreement among the players that matter on why, when, and how to pause.

Great point; thanks, Matthew. Upon reflection, I agree that the section you quoted isn't quite accurate. I would restate it as something like: "This confluence of factors creates incredibly powerful incentives not to think too hard about the potential downsides of this new technology we're racing towards. Motivated reasoning is much more likely when the motivations are so strong and the risks, while large, are diffuse, distant (perhaps), and uncertain." Does that make more sense?

I do still think there are aspects of the situation I would call a coordination problem, though. Imagine two actors who both agree that pushing a button has a 10% chance of killing them both; if it doesn't kill them, the button-pusher gets $1 million, and each push raises the probability that the next push kills them both by 10 percentage points. They agree on all the facts of the situation, but there is still an incentive to defect first if you believe the other actor might defect, right? See this o3 analysis of the situation for more math than I can summon at this time of night 😅
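To make that defection incentive concrete, here's a minimal single-round sketch (my own illustration, not the o3 analysis; the $10M value-of-life figure and the choice to ignore later pushes are both hypothetical):

```python
# Minimal single-round sketch of the two-actor button game above.
# Hypothetical assumptions: each actor values their own life at
# V_LIFE dollars, and we only compare "push now" vs. "wait now",
# ignoring what happens on any subsequent pushes.

V_LIFE = 10_000_000  # assumed dollar value an actor places on their own life
PRIZE = 1_000_000    # payout to whoever pushes, if nobody dies
P_KILL = 0.10        # chance the next push kills both actors

def ev_push(p_kill: float) -> float:
    """Expected value of pushing yourself: you bear the death risk
    but capture the prize if you survive."""
    return (1 - p_kill) * PRIZE - p_kill * V_LIFE

def ev_wait(p_other_pushes: float, p_kill: float) -> float:
    """Expected value of waiting: no prize, but you still bear the
    death risk whenever the other actor pushes."""
    return -p_other_pushes * p_kill * V_LIFE

for belief in (0.0, 0.5, 1.0):
    print(f"P(other pushes)={belief:.1f}: "
          f"EV(push)={ev_push(P_KILL):>10,.0f}  "
          f"EV(wait)={ev_wait(belief, P_KILL):>10,.0f}")
```

With these made-up numbers, waiting dominates when you trust the other actor to hold off (EV 0 vs. -$100k), but once you expect them to push anyway, pushing first becomes the better of two bad options (-$100k vs. -$1M), even though both actors agree the button is dangerous. That conditional incentive is the coordination-problem flavor I had in mind.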

Thanks, Michael! I hadn't seen Wei's comments before, and that was quite a fruitful rabbit hole 😅