Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
o1 is further evidence that we are living in a short-timelines world and that p(doom) is high: a global stop to frontier AI development, until there is consensus on x-safety, is our only reasonable hope.
One high-leverage thing people could do right now is encourage letter writing to California's Governor Newsom requesting that he sign SB 1047. This would set a much-needed precedent for enabling US federal legislation and, in turn, global regulation.
I mean, in terms of signalling, it's not great to bet against people (or people from a community) who are basically on your side, i.e. who think AI x-risk is a problem, just not that big a problem; as opposed to betting against people who think the whole thing is nonsense and are actively hostile to you and dismissive of your concerns.
I've been getting a few such offers from EAs recently. I might accept some. What I'd really like to do, though, is bet against an e/acc.