Really enjoyed this post! It made me realize something important: if we’re serious about creating good long-term futures, maybe we should actively search for scenarios that do more than just help humanity survive. Scenarios that actually make life deeply meaningful.
Recently, an idea popped into my head that I keep coming back to. At first, I dismissed it as a bit naive or unrealistic, but the more I think about it, the more I feel it genuinely might work. It seems to address several serious problems at once: destructive competition, the loss of meaning in a highly automated future, and the erosion of community. Every time I return to it, I am struck by how naturally it all fits together.
I'm really curious now. Has anyone else here had that kind of experience—where an idea that initially seemed strange turned out to feel surprisingly robust? And from your perspective, what are the absolute "must-have" features any serious vision of the future should include?
Thanks, I agree — positive experiences matter morally too. I didn’t emphasize it explicitly, but the text defines valence as “how good or bad an experience feels”, so both sides are included.
Since there’s no consensus on how to weigh suffering against happiness, I think the relative weight between them should itself come from feedback, for example through pairwise comparisons and aggregation of moral intuitions across perspectives.
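To make that concrete, here is a minimal sketch (my own toy illustration, with made-up judgment data, not anything from the original post): each judgment offers a choice between preventing one unit of suffering and creating r units of happiness, and the aggregated weight is the exchange rate that disagrees with the fewest judgments.

```python
import numpy as np

# Hypothetical pairwise judgments: (offered ratio r, True if the respondent
# preferred preventing 1 unit of suffering over creating r units of happiness)
judgments = [
    (0.5, True), (1.0, True), (1.5, True), (2.0, True),
    (2.0, False), (3.0, True), (3.0, False), (4.0, False),
    (5.0, False), (6.0, False),
]

def aggregate_weight(judgments, candidates=None):
    """Pick the weight w that disagrees with the fewest pairwise judgments.

    A respondent who chose suffering-prevention at offered ratio r implicitly
    holds w > r; one who chose happiness-creation implicitly holds w < r.
    """
    if candidates is None:
        candidates = np.linspace(0.1, 10.0, 991)
    def disagreements(w):
        return sum(
            (chose_suffering and w <= r) or (not chose_suffering and w >= r)
            for r, chose_suffering in judgments
        )
    scores = np.array([disagreements(w) for w in candidates])
    best = candidates[scores == scores.min()]
    return float(np.median(best))  # midpoint of the equally good range

print(f"Aggregated weight: preventing 1 unit of suffering is worth roughly "
      f"{aggregate_weight(judgments):.2f} units of happiness")
```

In a real pipeline one would of course use a proper preference-aggregation model and far more diverse respondents; the point is only that the suffering/happiness exchange rate can be estimated from feedback rather than fixed by fiat.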
Thank you.
You're absolutely right: an artificial mind trained on our contradictory behavior could easily infer that our moral declarations lack credibility or consistency — and that would be a dangerous misinterpretation.
That’s why I believe it's essential to explicitly model this gap — not to excuse it, but to teach systems to expect it, interpret it correctly, and even assist in gradually reducing it.
I fully agree that moral evolution is a central part of the solution. But perhaps the gap itself isn’t just a flaw — it may be part of the mechanism. It seems likely that human ethics will continue to evolve like a staircase: once our real moral weights catch up to the current ideal, we move the ideal further. The tension remains — but so does the direction of progress.
In that sense, alignment isn't just about closing the gap; it's about keeping the staircase intact, so that both humanity and AI can keep climbing.
Thank you for this thoughtful exchange—it’s helped clarify important nuances. I genuinely admire your commitment to ethical transformation. You’re right: the future will need not just technological solutions, but new forms of human solidarity rooted in wisdom and compassion.
While our methodologies differ, your ideas inspire deeper thinking about holistic approaches. To keep this thread focused, I suggest we continue this conversation via private messages—particularly if you’d like to explore:
For other readers: This discussion vividly illustrates how the Time × Scope framework operates in practice—‘high-moral’ ideals (long-term δ, wide-scope w) must demonstrate implementability (↑ρ) before becoming foundational norms. I’d love to hear: What examples of such moral transitions do you see emerging today?
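For readers who prefer something runnable, here is a purely illustrative toy rendering of that point; the actual Time × Scope definitions in my upcoming post may differ, and all parameter values below are assumptions of mine.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    name: str
    benefit_per_period: float  # hypothetical value per period if the norm is followed
    delta: float               # per-period time discount (long-term delta), 0 < delta < 1
    scope: float               # fraction of moral patients covered (wide-scope w), 0..1
    rho: float                 # demonstrated implementability (rho), 0..1

    def score(self, horizon: int = 100) -> float:
        # Discounted, scope-weighted benefit, gated by implementability.
        discounted = sum(self.benefit_per_period * self.delta ** t for t in range(horizon))
        return self.scope * self.rho * discounted

ideal = Norm("wide-scope long-term ideal", 1.0, delta=0.99, scope=1.0, rho=0.1)
modest = Norm("narrow but proven norm",    1.0, delta=0.95, scope=0.4, rho=0.9)

for n in (ideal, modest):
    print(f"{n.name}: score = {n.score():.1f}")
# With these made-up numbers the proven norm still wins; only raising rho
# (e.g. via successful pilots) lets the high ideal overtake it, which is the
# "implementability before foundational status" point in words above.
```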
Thank you for elaborating — your vision of creating a rational 'moral elite' is truly fascinating! You’re absolutely right about the core issue: today’s hierarchy, centered on financial achievement and consumption, stifles moral development. Your proposed alternative — a system where status derives from prosocial behavior (‘saintliness without dogma’) — strikes at the heart of the problem.
However, I see two practical challenges:
This is where EA’s evolutionary approach (and your own work) shines:
A timely synthesis: I’m currently drafting a post applying Time × Scope to AI alignment. It explores how a technologically mediated moral hierarchy (not sermons or propaganda) could act as a sociotechnical solution by:
Your insights are invaluable here! If you’d like to deepen this discussion:
Perhaps the ‘lighthouse’ we need isn’t a utopian ideology, but a practical, scalable approach — anchored in evidence, open to all, and built step by step. Would love your thoughts!
I'm not an expert on moral weights research itself, but approaching this rationally, I’m strongly in favour of commissioning an independent, methodologically distinct reassessment of moral weights—precisely because a single, highly-cited study can become an invisible “gravity well” for the whole field.
Two design suggestions that echo robustness principles in other scientific domains:
The result doesn’t have to dethrone RP; even showing that key conclusions are insensitive to modelling choices (or, conversely, highly sensitive) would be valuable decision information for funders.
In other words: additional estimates may not be “better” in isolation, but they give us a better-calibrated picture of our collective uncertainty—and for something as consequential as cross-species moral weights, that’s well worth the cost.
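As a minimal sketch of what such a robustness check could look like (all numbers below are hypothetical placeholders I made up, not Rethink Priorities estimates), one can simply re-run a funding comparison under several methodologically distinct moral-weight models and ask whether the ranking of interventions flips:

```python
# Hypothetical moral weights for chickens relative to humans under different models
weight_models = {
    "model_A (behavioral proxies)": 0.33,
    "model_B (neuron-count prior)": 0.002,
    "model_C (hedonic-capacity survey)": 0.10,
}

# Hypothetical cost-effectiveness figures (welfare units gained per $1000)
human_intervention_effect = 5.0      # human welfare units per $1000
chicken_intervention_effect = 300.0  # chicken welfare units per $1000

for name, w in weight_models.items():
    chicken_in_human_units = chicken_intervention_effect * w
    better = "chicken" if chicken_in_human_units > human_intervention_effect else "human"
    print(f"{name}: chicken intervention = {chicken_in_human_units:.2f} "
          f"human-equivalent units per $1000 -> {better} intervention wins")

# If the winner is the same under every model, the conclusion is robust to the
# modelling choice; if it flips (as it does here under model_B), that sensitivity
# is itself valuable decision information for funders.
```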
Thank you for the thoughtful follow-up. I fully agree that laws and formal rules work only to the extent that people actually believe in them. If a regulation lacks internal assent, it quickly turns into costly policing or quiet non-compliance. So the external layer must rest on genuine, internalised conviction.
Regarding the prospect of a new “behavior-first” ideology: I don’t dismiss the idea at all, but I think such an ideology would need to meet three demanding criteria to avoid repeating the over-promising grand narratives of the past:
You mentioned the possibility of viable formulas that have never been tried. I would be very interested to hear your ideas: what practical steps or pilot designs do you think could meet the inclusivity, transparency, and truthfulness tests outlined above?
Thanks — I’ll DM you an address; I’d love to read the full book.
And I really like the cookie example: it perfectly illustrates how self-prediction turns a small temptation into a long-run coordination problem with our future selves. That mechanism scales up neatly to the dam scenario: when a society “eats the cookie” today, it teaches its future selves to discount tomorrow’s costs as well.
Those two Ainslie strategies — self-prediction and early pre-commitment — map nicely onto Time × Scope: they effectively raise the future’s weight (δ) without changing the math. I’m keen to plug his hyperbolic curve into the model and see how it reshapes optimal commitment devices for individuals and, eventually, AI systems.
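Concretely, something like this quick sketch is what I have in mind (standard Mazur form V = A / (1 + kD); the reward sizes and delays are illustrative values of mine, not Ainslie's):

```python
def hyperbolic_value(amount: float, delay: float, k: float = 1.0) -> float:
    """Present value of a reward `amount` available after `delay` periods."""
    return amount / (1.0 + k * delay)

small_sooner = (1.0, 0.0)   # e.g. the cookie: 1 unit, available immediately
larger_later = (3.0, 5.0)   # 3 units, available 5 periods later

for lead_time in (0.0, 10.0):   # how far in advance the choice is evaluated
    v_small = hyperbolic_value(small_sooner[0], small_sooner[1] + lead_time)
    v_large = hyperbolic_value(larger_later[0], larger_later[1] + lead_time)
    choice = "small-sooner" if v_small > v_large else "larger-later"
    print(f"evaluated {lead_time:.0f} periods ahead: "
          f"V(small)={v_small:.2f}, V(large)={v_large:.2f} -> prefers {choice}")
```

Evaluated at the last moment the cookie wins, but evaluated ten periods in advance the larger-later reward wins, which is exactly why early pre-commitment works: it locks in the choice while the hyperbolic curve still favours the patient option.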
Thanks again for offering the file and for the clear, memorable examples!
I agree that we should shift our focus from pure survival to prosperity. But I disagree with the dichotomy the author seems to be proposing: survival and prosperity are not opposed goals, since long-term prosperity is impossible under a high risk of extinction.
Perhaps a more productive formulation would be the following: “When choosing between two strategies, both of which increase the chances of survival, we should give priority to the one that may increase them slightly less, but at the same time provides a huge leap in the potential for prosperity.”
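To make that trade-off concrete, here is a toy expected-value comparison; every number below is made up purely for illustration, not a real estimate.

```python
strategies = {
    # (probability of long-term survival, value of the future conditional on survival)
    "A (maximises survival only)": (0.90, 10.0),
    "B (slightly less survival, far more prosperity)": (0.88, 100.0),
}

for name, (p_survival, value_if_survive) in strategies.items():
    expected_value = p_survival * value_if_survive
    print(f"{name}: expected long-term value = {expected_value:.1f}")

# Here a 2-point drop in survival probability is dwarfed by a 10x gain in
# conditional prosperity, which is the intuition behind the reformulation above.
```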
However, I believe that the strongest scenarios are those that eliminate the need for such a compromise altogether. These are options that simultaneously increase survival and ensure prosperity, creating a synergistic effect. It is precisely the search for such scenarios that we should focus on.
In fact, I am working on developing one such idea. It is a model of society that simultaneously reduces the risks associated with AI and destructive competition and provides meaning in a world of post-labor abundance, while remaining open and inclusive. This is precisely in the spirit of the “viatopias” that the author talks about.
If this idea of the synergy of survival and prosperity resonates with you, I would be happy to discuss it further.