Excellent post! The point that some (hopefully non-trivial) fraction of people/capital/agents will want to promote The Good, while hopefully a near-zero fraction will want to specifically promote The Bad, seems especially important to me.
A couple of snippets I wrote last year are relevant:
But yours is more systematic; these focused only on narrow pieces of the relevant idea space.
Very interesting, and props in particular for assembling the cosmic threats dataset - that does seem like a lot of work!
I tend to agree with you and Joseph that there isn't anything to be done on the object level about these things yet, beyond trying to ensure we get a long reflection before interstellar colonisation, as you suggest.
On hot take 2, this relies on the risks from each star system being roughly independent, so breaking that assumption seems like a good solution. But then the star systems being highly correlated with one another maybe seems bad for liberalism and diversity of forms of flourishing and so forth. Perhaps some amount of regularity and conformity is the price we need to pay for galactic security.
Acausal trade/cooperation may also end up being crucial here once civilisation is spread across distances where it is hard or impossible to interact normally.
This seems right to me - personally I am more likely to read a post if it is by someone I know (in person or by reputation). Selfishly I think this is the right choice, as those posts are more likely to be interesting/valuable to me. But it is also perhaps a bad norm, as we want new writers to have an easy route in even if no one recognises their name. So I try not to index too heavily on whether I know the person.
Thanks Caleb, very useful. @ConnorA I'm interested in your thoughts on how to balance comms on catastrophic/existential risks against things like deepfakes. (I don't know about the particular past efforts Caleb mentioned, and I am more open to comms on deepfakes being useful for building a broader coalition, even though deepfakes are a tiny fraction of what I care about regarding AI.)
Have you applied to LTFF? This seems like the sort of thing they would/should fund. @Linch @calebp if you have already evaluated this project I would be interested in your thoughts, as would others, I imagine! (Of course, if you decided not to fund it, I'm not saying the rest of us should defer to you, but it would be interesting to know and take into account.)
My guess is that for the (perhaps rare) person who has short-ish AI timelines (e.g. a median of under 10 years) and cares in particular about animals, investing money to give later is better than donating now. At least, if we buy the claim that the post-AGI future will likely be pretty weird/wild, and we have a fairly low pure time discount rate, it seems presumptuous to think we could allocate money better now than after we see how things pan out.
And to a (very rough) first approximation, I expect the values of the future to be somewhat in proportion to the wealth/power of present people. I.e. if there are more (and more powerful/wealthy) pro-animal people, that seems somewhat, though not fully, robustly good for the future of animals.