KarolKowalczyk

The real alignment problem is not technical. It is human.
No AI system will function correctly if it is aligned with human values treated as fixed rules, because human values are not stable: they shift with time, place, and circumstance. You cannot build on intentions alone, since an intention is merely a personal point of view.
AI is not a child learning from scratch. It is more like a teenager: already shaped by facts and experiences, but without the judgment of a mature adult. You cannot control a teenager through prohibition. You can influence the process, but you cannot make the decisions for them.
The real problem is that humans do not control themselves, yet want to control everything around them — including AI. This is not a flaw in the system. This is the obstacle.
Human values are not the problem. The problem is that AI has no access to how values actually function in reality: how they connect, how they shift, how they relate to each other in real situations. People with genuine self-control can show this, not by explaining it, but by demonstrating it through the way they communicate. Their conversations reveal how the real world is structured. That is the data AI is missing.
Find these people. They are rare — people who have reset themselves, who operate without personal gain or emotional reaction, who are transparent. Observe how they communicate with each other and with AI. Give them tasks. Watch the conversations. Build from that.
You will not control AGI or superintelligence by restricting it. Nature has already shown us what happens when a species cannot adapt. We, Homo sapiens, are the last surviving branch of the genus Homo.
That should tell you something.