This seems right to me - personally I am more likely to read a post if it is by someone I know (in person or by reputation). Selfishly I think this is the right choice, as those posts are more likely to be interesting/valuable to me. But it is also perhaps a bad norm, as we want new writers to have an easy route in, even if no-one recognises their name. So I try not to index too heavily on whether I know the person.
Thanks Caleb, very useful. @ConnorA I'm interested in your thoughts re how to balance comms on catastrophic/existential risks against things like deepfakes. (I don't know about the particular past efforts Caleb mentioned, and I am more open to comms on deepfakes being useful for developing a broader coalition, even though deepfakes are a tiny fraction of what I care about wrt AI.)
Have you applied to LTFF? Seems like the sort of thing they would/should fund. @Linch @calebp if you have actually already evaluated this project I would be interested in your thoughts as would others I imagine! (Of course, if you decided not to fund it, I'm not saying the rest of us should defer to you, but it would be interesting to know and take into account.)
Very interesting, and props in particular for assembling the cosmic threats dataset - that does seem like a lot of work!
I tend to agree with you and Joseph that there isn't anything on the object level to be done about these things yet, beyond just trying to ensure we get a long reflection before interstellar colonisation, as you suggest.
On hot take 2, this relies on the risks to each star system being roughly independent, so breaking that assumption seems like a good solution. But then strong correlation between star systems maybe seems bad for liberalism and diversity of forms of flourishing and so forth. Perhaps some amount of regularity and conformity is the price we need to pay for galactic security.
Acausal trade/cooperation may also end up being crucial here, once civilisation is spread across distances where it is hard or impossible to interact normally.