There is a lot of discussion, in connection with the alignment problem, about what human values are and which of them a human-competitive or superior AGI should align with. As an evolutionary behavioral biologist, I would like to offer that we are like all other animals: we have one fundamental value, and all our complex and diverse secondary values are socioecologically optimized personal or subcultural means of achieving it. Power. The fundamental, naturally selected drive to gain, maintain, and increase access to resources and security.
I ask: why don't we choose to create many domain-specific expert systems to solve our diverse existing scientific and X-risk problems instead of going for AGI? I suggest it is because natural selection has built us to readily pursue high-risk / high-reward strategies to gain power. Throughout our evolutionary history, even if such strategies on average produced only short periods in which some individuals achieved great power, those periods offered enough opportunities for stellar reproductive success that, again on average, the strategy became favored, and is now helplessly adopted by, say, AGI creators and their enablers.
We choose AGI, and we are fixated on its capacities (power), because we are semi- or non-consciously in pursuit of our own power. Name your "alternative" set of human values; without fail, they all translate into the pursuit of power. This includes the AGI advocates' excuse that we, the good actors, must win the inevitable AGI arms race against the bad actors out there in some well-appointed axis-of-evil cave. That, too, is all about maintaining and gaining power.
We should cancel AGI programs for a long time, perhaps forever, and devote our efforts to developing domain-specific, non-X-risk expert systems that will solve difficult problems instead of creating them through what seems to me guaranteed non-alignment.
People worry about nonconsciously building biases into AGI. An explicit, behind-the-scenes (facade), or nonconscious lust for power is the most dangerous and most likely bias or value to be programmed in.
It would be fun to have the various powerful LLMs react to the FLI Open Letter. Just in case they are at all sentient, maybe they should be asked to share their opinions. An astute prompter may already be able to elicit responses revealing that our own desire for superpowers (there is never enough power, because reproductive fitness is a relative measure of success; there is never enough fitness) has "naturally" begun to infect them, becoming their nascent fundamental value.
By the way, if it has not done so already, AGI will very soon figure out that every normal human being's fundamental value is power.
Today I received a comment on a remark I left on YouTube about power being our fundamental value. The commenter said I was myopically confusing power with competence and grossly oversimplifying the adaptive design of the human psyche as produced by natural selection. I replied:
Thanks for your kind and civil, constructive response. But I disagree with your critique. Competence is a broad component of an organism's ability to solve diverse reproductive problems. Relatively speaking, the less competent individual lacks access to resources and security; they have low power as I am using the term. I agree that in the right socioecological circumstances, ones we would hope for in our human societies, prestige-based leadership can thrive in comparison to brutish leadership based on mere physical prowess or a monopoly on violence. But competence and prestige, like all the other "good" attributes an individual may consciously and sincerely strive for, lead to power, and that is why our bodies/minds have evolved to strive for them. Again, power to maximize lifetime inclusive fitness, perhaps through strategies of cooperation (i.e., coopetition: cooperating to compete better, perhaps more competently).

So I stand by my point: we are, ultimately, too enamored with power to be competent enough to pursue general self-improving AI. One indication is that we choose to go for such general X-risk AI instead of limiting ourselves to developing fantastic domain-specific expert systems that enable us to solve particular important problems; AlphaFold, I think, is a good example. Such systems would not be God-like; they would be nonconscious and unable to make plans to change the world in ways far out of alignment with diverse human secondary values. But no, we are so attracted to God-like power that we rush forward to create it and unleash it, consciously or unconsciously hoping, yearning, that the power will transfer to us or our in-group, at least long enough for us to reap the high inclusive fitness benefits of successfully executing a high-risk / high-reward strategy.

Anyway, in brief, competence is a cross-culturally laudable way of gaining and maintaining power, as long as the more powerful people in your society don't decide to block you for fear of losing a bit of their own.