As will be very clear from my post, I'm not a computer scientist. However, I am reasonably intelligent and would like to improve my understanding of AI risk.
As I understand it (please do let me know if I've got this wrong), the risk is that:
- an AGI could rapidly become many times more intelligent and capable than a human: so intelligent that its relation to us would be analogous to our own relation to ants.
- such an AGI would not necessarily prioritise human wellbeing, and could, for example, decide that its objectives were best served by the extermination of humanity.
And the mitigation is:
- working to ensure that any such AGI is "aligned," that is, is functioning within parameters that prioritise human safety and flourishing.
What I don't understand is why we (the ants in this scenario) think our efforts have any hope of being successful. If the AGI is so intelligent and powerful that it represents an existential risk to humanity, surely it is definitionally impossible for us to rein it in? And if so, surely the best approach would be either to prevent work on developing AI at all (honestly this seems like a nonstarter to me; I can't see e.g. Meta or Google agreeing to it), or to accept that our limited resources would be better applied to more tractable problems?
Any thoughts very welcome, I am highly open to the possibility that I'm simply getting this wrong in a fundamental way.
Epistemic status: bewitched, bothered and bewildered.
Thank you, that is helpful. I think I still don't see why we expect an AGI to be incapable of assessing its own values and potentially altering them, if it's intelligent enough to pose an existential risk to humanity. Or are we hoping that the result of any such assessment would be "the values humans instilled in me seem optimal"? Is that it? Because then my question is which values exactly we're attempting to instill. At the risk of being downvoted to hell, I will share that the thought of a superpowerful AI that shares the value system of e.g. LessWrong is slightly terrifying to me. Relatedly(?), I studied a humanities subject :)
Thank you again!