Suppose you believe AGI (or superintelligence) will be created in the future. In that case, you should also acknowledge its potentially superhuman capability to address EA problems like global health and development, pandemics, animal welfare, and cause prioritization decision-making.
If you don't believe superintelligence is possible, you can continue pursuing other EA problems. But if you do believe superintelligence is coming, why are you spending time and money on issues that will likely all be solved by AI, assuming superintelligence arrives aligned with human values?
I've identified a few potential reasons why people continue to devote their time and money to non-AI-related EA causes:
- You aren't aware of the potential capabilities of superintelligence.
- You don't think that superintelligence will arrive for a long time, or you remain uncertain about a timeline.
- You're passionate about a particular cause, and superintelligence doesn't interest you.
- You believe that present suffering matters intrinsically, and that the suffering occurring now has a moral weight that can't be dismissed.
- You might even think that superintelligence won't be able to address particular problems.
It's widely believed (at least in the AI safety community) that the development of sufficiently advanced AI could lead to major catastrophes, a global totalitarian regime, or human extinction, risks that seem to me more pressing and critical than any of the above reasons for focusing on other EA issues. I post this because I'd like to see more time and money allocated to AI safety, particularly to solving the alignment problem through automated AI labor (since I don't believe human labor can solve it anytime soon, but that's beyond the scope of this post).
So, do any of the reasons presented above apply to you? Or do you have different reasons for not focusing on AI risks?
The reason is that AI is, at best, a tool that could be used for good or ill, and at worst something intrinsically misaligned with any human interest.
Or alternatively, AI just isn't going to solve any of our problems, because it will merely be an extension of the power of states and corporations. Whether moral problems get solved by AI is then up to the whim of corporate or state interests. AI is already being used to conquer: the obvious military applications have been explored in science fiction for decades, and AI is reducing the cost of deploying literal killer robots.
For an obvious example, look at how the profit motive is transforming OpenAI right now. Or look at how AI is "solving" nefarious actors' problem of creating fake news and faked media.
There is no theory guaranteeing that our glorious AI overlords will be effective altruists, or Buddhists, or Kantians, or utilitarians, or anything else. As far as I'm aware, AI may just as likely become a raging kill-all-humans fascist.
There is a distinction between "control" and "alignment."
The control problem addresses our fundamental capacity to constrain AI systems, preventing undesired behaviors or capabilities from manifesting, regardless of the system's goals. Control mechanisms encompass technical safeguards that maintain human authority over increasingly autonomous systems, such as containment protocols, capability limitations, and intervention mechanisms.
The alignment problem, conversely, focuses on ensuring AI systems pursue goals compatible with human values.