Beyond Singularity

Comments

Thank you so much for engaging with the post — I really appreciate your thoughtful comment.

You're absolutely right: this is a deeply interconnected issue. Aligning humans with their own best values isn’t separate from the AI alignment agenda — it’s part of the same challenge. I see it as a complex socio-technical problem that spans both cultural evolution and technological design.

On one side, we face deeply ingrained psychological and societal dynamics — present bias, moral licensing, systemic incentives. On the other, we’re building AI systems that increasingly shape those very dynamics: they mediate what we see, amplify certain behaviors, and normalize patterns of interaction.

So I believe we need to work in parallel:

  • On the AI side, to ensure systems are not naïvely trained on our contradictions, but instead help scaffold better ethical reasoning.
  • On the human side, to address the root misalignments within ourselves — through education, norm-shaping, institutional design, and narrative work.

I also resonate with your point about needing a new story — a shared narrative that can unify these efforts and help us rise to the moment. It's a huge challenge, and I don’t pretend to have all the answers, but I’ve been exploring directions and would love to share more concrete ideas with the community soon.

Thank you for this deep and thought-provoking post! The concept of the "power-ethics gap" truly resonates and seems critically important for understanding current and future challenges, especially in the context of AI.

The analogy with the car, where power is speed and ethics is the driver's skill, is simply brilliant. It illustrates the core of the problem very clearly. I would even venture to add that, in my view, the "driver's skill" today isn't just lagging behind but may actually be degrading in some respects, given the growing complexity of the world, information noise, and polarization. Our collective ability to make wise decisions seems increasingly fragile, despite the growth of individual knowledge.

Your emphasis on the need to shift the focus in AI safety from purely technical aspects of control (power) to deep ethical questions and "value selection" seems absolutely timely and necessary. This truly is an area that appears to receive disproportionately little attention compared to its significance.

The concepts you've introduced, especially the distinction between Human-Centric and Sentientkind Alignment, as well as the idea of "Human Alignment," are very interesting. The latter seems particularly provocative and important. Although you mention that this might fall outside the scope of traditional AI safety, don't you think that without significant progress here, attempts to "align AI" might end up being built on very shaky ground? Can we really expect to create ethical AI if we, as a species, are struggling with our own "power-ethics gap"?

It would be interesting to hear more thoughts on how the concept of "Moral Alignment" relates to existing frameworks and whether it could help integrate these disparate but interconnected problems under one umbrella.

The post raises many important questions and introduces useful conceptual distinctions. Looking forward to hearing the opinions of other participants! Thanks again for the food for thought!

Thank you for this interesting overview of Vincent Müller’s arguments! I fully agree that implementation (policy means) often becomes the bottleneck. However, if we systematically reward behavior that contradicts our declared principles, then any “ethical goals” will inevitably be vulnerable to being undermined during implementation. In my own post, I call this the “bad parent” problem: we say one thing, but demonstrate another. Do you think it’s possible to achieve robust adherence to ethical principles in AI when society itself remains fundamentally inconsistent?

Thanks for the comment, Ronen! Appreciate the feedback.

I think it’s good — essential, even — that you keep trying and speaking out. Sometimes that’s what helps others to act too.
The only thing I worry about is that this fight, if framed only as hopeless, can paralyze the very people who might help change the trajectory.
Despair can be as dangerous as denial.

That’s why I believe the effort itself matters — not because it guarantees success, but because it keeps the door open for others to walk through.

I live in Ukraine. Every week, missiles fly over my head. Every night, drones are shot down above my house. On the streets, men are hunted like animals to be sent to the front. Any rational model would say our future is bleak.

And yet, people still get married, write books, make music, raise children, build new homes, and laugh. They post essays on foreign forums. They even come up with ideas for how humanity might live together with AGI.

Even if I go to sleep tonight and never wake up tomorrow, I will not surrender. I will fight until the end. Because for me, a 0.0001% chance is infinitely more than zero.

Seems like your post missed the April 1st deadline and landed on April 2nd — which means, unfortunately, it no longer counts as a joke.

After reading it, I also started wondering if I unintentionally fall into the "Believer" category—the kind of person who's already drafting blueprints for a bright future alongside AGI and inviting people to "play" while we all risk being outplayed.

I understand and share your concerns. I don’t disagree that the systemic forces you’ve outlined may well make AGI safety fundamentally unachievable. That possibility is real, and I don’t dismiss it.

But at the same time, I find myself unwilling to treat it as a foregone conclusion.
If humanity’s survival is unlikely, then so was our existence in the first place — and yet here we are.

That’s why I prefer to keep looking for any margin, however narrow, where human action could still matter.

In that spirit, I’d like to pose a question rather than an argument:
Do you think there’s a chance that humanity’s odds of surviving alongside AGI might increase — even slightly — if we move toward a more stable, predictable, and internally coherent society?
Not as a solution to alignment, but as a way to reduce the risks we ourselves introduce into the system.

That’s the direction I’ve tried to explore in my model. I don’t claim it’s enough — but I believe that even thinking about such structures is a form of resistance to inevitability.

I appreciate this conversation. Your clarity and rigor are exactly why these dialogues matter, even if the odds are against us.

I completely understand your position — and I respect the intellectual honesty with which you’re pursuing this line of argument. I don’t disagree with the core systemic pressures you describe.

That said, I wonder whether the issue is not competition itself, but the shape and direction of that competition.
Perhaps there’s a possibility — however slim — that competition, if deliberately structured and redirected, could become a survival strategy rather than a death spiral.

That’s the hypothesis I’ve been exploring, and I recently outlined it in a post here on the Forum.
If you’re interested, I’d appreciate your critical perspective on it.

Either way, I value this conversation. Few people are willing to follow these questions to their logical ends.

This is a critically important and well-articulated post, thank you for defining and championing the Moral Alignment (MA) space. I strongly agree with the core arguments regarding its neglect compared to technical safety, the troubling paradox of purely human-centric alignment given our history, and the urgent need for a sentient-centric approach.

You rightly highlight Sam Altman's question: "to whose values do you align the system?" This underscores that solving MA isn't just a task for AI labs or experts, but requires much broader societal reflection and deliberation. If we aim to align AI with our best values, not just a reflection of our flawed past actions, we first need robust mechanisms to clarify and articulate those values collectively.

Building on your call for action, perhaps a vital complementary approach could be fostering this deliberation through a widespread network of accessible "Ethical-Moral Clubs" (or perhaps "Sentientist Ethics Hubs" to align even more closely with your theme?) across diverse communities globally.

These clubs could serve a crucial dual purpose:

  1. Formulating Alignment Goals: They would provide spaces for communities themselves to grapple with complex ethical questions and begin articulating what kind of moral alignment they actually desire for AI affecting their lives. This offers a bottom-up way to gather diverse perspectives on the "whose values?" question, potentially surfacing both local priorities and shared, possibly universal, principles across regions.
  2. Broader Ethical Education & Reflection: These hubs would function as vital centers for learning. They could help participants, and by extension society, better understand different ethical frameworks (including the sentientism central to your post), critically examine their own "stated vs. realized" values (as you mentioned), and become more informed contributors to the crucial dialogue about our future with AI.

Such a grassroots network wouldn't replace the top-down efforts and research you advocate for, but could significantly support and strengthen the MA movement you envision. It could cultivate the informed public understanding, deliberation, and engagement necessary for sentient-centric AI to gain legitimacy and be implemented effectively and safely.

Ultimately, fostering collective ethical literacy and structured deliberation seems like a necessary foundation for ensuring AI aligns with the best of our values, benefiting all sentient beings. Thanks again for pushing this vital conversation forward.
