Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
Nothing short of a global non-proliferation treaty on ASI (or a Pause, for short) is going to save us. So we have to make it realistic. We have to keep bringing comms back to that.
In terms of explaining the problem to a public audience, lethalintelligence.ai is great.
I don't think it's the discount rate (especially given short timelines); I think it's more that people haven't really thought about why their p(doom|ASI) is low. But people seem remarkably resistant to actually tackling the cruxes of the object-level arguments, or fully extrapolating the implications of what they do agree on. When they do, they invariably come up short.
"EA getting swamped by normies with high inferential distances"
This seems like completely the wrong focus! We need huge numbers of normies involved to get the political pressure necessary to act on AI x-risk before it's too late. We've already tried the "EA's lobbying behind closed doors" approach, and it has failed (/been co-opted by the big AGI companies).
Of course, if we do somehow survive all this, people will accuse me and others like me of crying wolf. But 1/10 outcomes aren't that uncommon! I'm willing to take the reputation hit though, whether justified or not.
I think in general a big problem with AI x-risk discourse is that there are a lot of innumerate people around who just don't understand what probability means (or at least act like they don't, and treat everything as a confident statement even when it's appropriately hedged).
Those doing so should caveat that they are designed to mitigate the possibility (not the certainty) of catastrophic outcomes. This should be obvious, but given that people will be waiting in the wings to weaponise anything that could be called a regulatory overreaction, I think it's worth doing.
I think to a lot of people, it matters just how much of a possibility there is. From what I've seen, many people are (irrationally, imo!) willing to bite the bullet on yolo-ing ASI if there is "only a 10%" chance of extinction. For this reason I counter with my actual assessment: doom is the default outcome of AGI/ASI (~90% likely). Very few people are willing to bite that bullet! (Much more common is for people to fall back on dismissing the risk as "low" - e.g. experts saying "only" 1-25%.)
"Beyond capacity building, it's not completely clear to me that there are robustly good interventions in AI safety, and I think more work is needed to prioritize interventions."
I think it's pretty clear[1] that stopping further AI development (or Pausing) is a robustly good intervention in AI Safety (reducing AI x-risk).
However, what happens if these tendencies resurface when “shit hits the fan”?
I don't think this could be pinned on PauseAI, given that at no point has PauseAI advocated or condoned violence. Many (basically all?) political campaigns attract radical fringes. Non-violent moderates aren't responsible for them.
After nearly 7 years, I intend to step down soon as Executive Director of CEEALAR, which I founded as the EA Hotel in 2018. I will remain a Trustee, but take more of a back-seat role. This is in order to focus more of my efforts on slowing down/pausing/stopping AGI/ASI, which for some time now I've thought of as the most important, neglected and urgent cause.
We are hiring for my replacement. Please apply if you think you'd be good in the role! Or pass this on to others you'd like to see in the role. I'm hoping that we find someone who is highly passionate about CEEALAR and able to take it to the next level (possibly even franchising the model to other locations? That has been talked about a lot for various locations, but has yet to happen.)