Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
I think one could argue that creating an index across causes makes sense, because it allows for exposure to causes that are hard to compare (e.g., we are not perfect utilitarians). In practical terms, it would give EAs a reference point and an easy way to donate across causes. For example, one could create a fund indexed to the elicited preferences of the EA community, or to those of some group of experts. The closest thing I am aware of is what Giving What We Can does.
I don’t know about other folks, but I think this is my first criticism of them for as long as I can remember, both online and offline. In general I think they have been fairly responsible with AI safety, or as responsible as I would expect a company to be. But even if I did criticise them a lot, I think it would still be a valid criticism. After all, as a non-American I feel quite uneasy about this, even if they are arguably not the main actor. In any case, I think liberal democracies should oppose mass surveillance in general.
Worth noting that the mass surveillance friction point only covers domestic mass surveillance. Does Anthropic believe mass surveillance of non-Americans is just fine?
I’m pretty confident the EA community is under-discussing how to prevent a global AGI-powered autocracy, especially if US democracy implodes under AGI pressure. There are two key questions here: (i) how to make US democracy more resilient, and (ii) how to make the world less dependent on the resilience of US democracy.
AGI could, in principle, find solutions to the key problems animals face, but I would argue the main issue is that it won't automatically enlighten humans.