Thanks Caleb, very useful. @ConnorA I'm interested in your thoughts on how to balance comms on catastrophic/existential risks against things like deepfakes. (I don't know about the particular past efforts Caleb mentioned, and I'm more open to comms on deepfakes being useful for building a broader coalition, even though deepfakes are a tiny fraction of what I care about wrt AI.)
Have you applied to LTFF? This seems like the sort of thing they would/should fund. @Linch @calebp, if you have already evaluated this project, I would be interested in your thoughts, as would others, I imagine! (Of course, if you decided not to fund it, I'm not saying the rest of us should defer to you, but it would be interesting to know and take into account.)
I think I directionally agree!
One example of timelines feeling very decision-relevant: for people looking to specialise in partisan influence, the larger your credence in TAI/ASI arriving by Jan 2029, the more you might want to specialise in Republicans. On longer timelines, by contrast, Democrats have a ~50% chance on priors of controlling the presidency from 2029, so specialising in Dem political comms could make more sense. A toy sketch of this calculation follows below.
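As a toy illustration of that trade-off, here's a minimal expected-relevance sketch in Python. All the numbers and the simple two-scenario framing are my made-up assumptions for illustration, not anyone's actual estimates; it just assumes the pre-2029 administration is Republican and the post-2028 presidency is roughly a coin flip.

```python
# Toy sketch: how credence in TAI/ASI by Jan 2029 shifts the expected
# "relevance" of specialising in comms aimed at each party.
# All probabilities here are made-up illustrative assumptions.

def value_of_specialising(p_tai_by_2029: float,
                          p_dem_presidency_post_2028: float = 0.5) -> dict:
    """Rough expected relevance of specialising in each party's comms.

    If TAI/ASI arrives by Jan 2029, the current (Republican) administration
    is the one that matters; otherwise what matters is the post-2028
    presidency, taken here as ~50/50 on priors.
    """
    p_long_timelines = 1 - p_tai_by_2029
    return {
        "republican": p_tai_by_2029
                      + p_long_timelines * (1 - p_dem_presidency_post_2028),
        "democrat": p_long_timelines * p_dem_presidency_post_2028,
    }

# Short timelines: Republican-focused comms dominate.
print(value_of_specialising(p_tai_by_2029=0.8))
# -> {'republican': 0.9, 'democrat': 0.1}

# Long timelines: the two specialisations look much closer.
print(value_of_specialising(p_tai_by_2029=0.2))
# -> {'republican': 0.6, 'democrat': 0.4}
```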
Of course criticism only partially overlaps with advice, but this post reminded me a bit of this take on giving and receiving criticism.
It seems somewhat epistemically toxic to give in to a populist backlash against AI art when I don't buy the arguments that it's bad myself.