If you believe AGI (or superintelligence) will be created in the future, then you should also acknowledge its enormous potential for addressing EA problems like global health and development, pandemics, animal welfare, and cause prioritization decision-making.
If you don't believe superintelligence is possible, then you can continue pursuing other EA problems. But if you do believe superintelligence is coming, why are you spending time and money on issues that will likely all be solved by AI, assuming superintelligence arrives aligned with human values?
I've identified a few potential reasons why people continue to devote their time and money to non-AI-related EA causes:
- You aren't aware of the potential capabilities of superintelligence.
- You don't think that superintelligence will arrive for a long time, or you remain uncertain about a timeline.
- You're passionate about a particular cause, and superintelligence doesn't interest you.
- You believe that present suffering matters intrinsically, and that the suffering occurring now has a moral weight that can't be dismissed.
- You might even think that superintelligence won't be able to address particular problems.
It's widely believed (at least in the AI safety community) that the development of sufficiently advanced AI could lead to major catastrophes, a global totalitarian regime, or human extinction. These risks seem to me more pressing and critical than any of the above reasons for focusing on other EA issues. I'm posting this because I'd like to see more time and money allocated to AI safety, particularly to solving the alignment problem through automated AI labor (since I don't believe human labor can solve it anytime soon, but that's beyond the scope of this post).
So, do any of the reasons presented above apply to you? Or do you have different reasons for not focusing on AI risks?
Ah okay, I didn't state this, but I'm operating under the definition of superintelligence as inherently uncontrollable, and thus not a tool. For now, AI is being used as a tool, but in order to gain more power, states and corporations will develop it to the point where it has its own agency, as described by Bostrom and others. I don't see any power-seeking entity reaching a point in its AI's capabilities where it is satisfied and stops developing it, since a competitor could continue development and gain a power/capabilities advantage. Moreover, a sufficiently advanced AI would be motivated to improve its own cognitive abilities to further its goals.
It may be possible that states/corporations could align a superintelligence just to themselves if they can figure out which values to specify and how to home in on them, but the superintelligence would still be acting of its own accord, outside their control in terms of how it accomplishes its goals. This doesn't seem likely to me if superintelligence is built via automated self-improvement, though, as there are real possibilities of value drift, instrumental goals that broaden its moral scope to include more humans, emergent properties that produce unexpected behavior, or competing superintelligences designed to align with all of humanity. All of these possibilities, with the exception of the last one, are problems for aligning superintelligence with all of humanity too.