alx
While many accurate and valid points are made here, the overarching flaw of this approach to AI alignment is evident in the very first paragraph. Perhaps it is a meta and semantic point, but I believe we must take more care to define the nature and attributes of the 'advanced' AI/AGI we are referring to when we talk about AI alignment. The statistical inference used in transformer-based models, which simply predict an appropriate next word, is far from advanced. It might be seen as a form of linguistic understanding, but it remains in a completely different league from brain-inspired cognitive architectures that could conceivably become self-aware.


Many distinctions are critical when evaluating, framing, and categorizing AI. I would argue the primary distinction will soon become that of the elusive C-word: consciousness. If we are talking about embodied, human-equivalent, self-aware, conscious AGI (HE-AGI), I think it would be unwise, and frankly immoral, to jump to control and forced compliance as a framework for alignment.

Clearly, limits should be placed on an AI's capacity to interact with and act upon the physical world (including its own hardware), just as limits are placed on our own ability to act in the world. However, we should be thinking about alignment in terms of genuine social alignment, empathy, respect, and the instilling of foundational conceptions of morality. It seems clear to me that conscious, human-equivalent AGI by definition deserves all the innate rights, respect, civic responsibilities, and moral consideration of an adolescent human... and eventually, likely, those of an adult.

This framework is still in progress, but it presents an ordinal taxonomy to better frame and categorize AGI: http://www.pathwai.org. Feedback would be welcome!