Beyond Singularity


Comments (35)

Hi Nick, just sent you a brief DM about a “stress-test” idea for the moral-weight "gravity well". Would appreciate any steer on who might sanity-check it when you have a moment. Thanks!

Thank you for this thoughtful exchange—it’s helped clarify important nuances. I genuinely admire your commitment to ethical transformation. You’re right: the future will need not just technological solutions, but new forms of human solidarity rooted in wisdom and compassion.

While our methodologies differ, your ideas inspire deeper thinking about holistic approaches. To keep this thread focused, I suggest we continue this conversation via private messages—particularly if you’d like to explore:

  • Integrating your vision of organic prosociality into existing systems,
  • Or designing pilot projects to test these concepts.

For other readers: This discussion vividly illustrates how the Time × Scope framework operates in practice—‘high-moral’ ideals (long-term δ, wide-scope w) must demonstrate implementability (ρ↑) before becoming foundational norms. I’d love to hear: What examples of such moral transitions do you see emerging today?
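(For readers new to the framework: below is a deliberately oversimplified sketch in Python. The multiplicative scoring is just my shorthand for this comment, not the model’s actual form; δ stands for how far ahead a norm looks, w for how wide its moral circle is, ρ for how implementable it currently is.)

```python
def priority(delta, w, rho):
    """Toy Time x Scope score: a norm weighs more the longer its horizon (delta),
    the wider its scope (w), and the more implementable it already is (rho)."""
    return delta * w * rho

# A 'high-moral' ideal: long horizon, wide scope, weak implementability.
print(priority(delta=0.9, w=0.9, rho=0.2))   # 0.162: inspiring, but not yet foundational
# The same ideal after its implementability has been demonstrated.
print(priority(delta=0.9, w=0.9, rho=0.8))   # 0.648: ready to become a basic norm
```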

Thank you for elaborating — your vision of creating a rational 'moral elite' is truly fascinating! You’re absolutely right about the core issue: today’s hierarchy, centered on financial achievement and consumption, stifles moral development. Your proposed alternative — a system where status derives from prosocial behavior (‘saintliness without dogma’) — strikes at the heart of the problem.

However, I see two practical challenges:

  1. Systemic dependency: Such a transformation requires overhauling economic incentives and institutions, not just adopting new norms. As your own examples show (Tolstoyans, AA), local communities can create pockets of alternative ethics, but scaling this to a societal level clashes with systems built on competing principles (e.g., market competition). This doesn’t invalidate the idea — it simply means implementation must be evolutionary, not revolutionary.
  2. Fragmentation risk: Replacing one hierarchy (financial) with another (moral) could spark new conflicts, especially with religious communities for whom ‘saintliness’ is central. For global impact, any framework must be inclusive — complementing existing paths (religious/secular) rather than rejecting them.

This is where EA’s evolutionary approach — and your own work — shines:

  • We operate by gradually ‘embedding’ high-moral norms (δ↑, w↑) into the basic layer (ρ↑) through evidence, institutions, and cultural narratives.
  • Your ideas about intentionally shaping prosocial norms through communities aren’t an alternative but a powerful complement! They’re tools to accelerate shifting norms (e.g., long-term AI ethics or planetary stewardship) from ‘high’ to ‘basic’.

A timely synthesis: I’m currently drafting a post applying Time × Scope to AI alignment. It explores how a technologically mediated moral hierarchy (not sermons or propaganda) could act as a sociotechnical solution by:

  • Rewarding verified contributions to common good (e.g., AI safety research, disaster resilience) via transparent metrics.
  • Creating status pathways based on moral impact — not wealth.
  • Evolving existing systems: No economic upheaval or religious conflict; integrates with markets/institutions.
  • Inclusivity: Offers a neutral ‘language of moral contribution’ accessible to all worldviews.

Your insights are invaluable here! If you’d like to deepen this discussion:

  • Let’s connect via DM to explore your models for motivation/community design.
  • I’d welcome your input on my AI alignment framework (especially how to ‘operationalize’ moral growth).
  • Your focus on inner transformation is key to ensuring technology augments human morality — it’s worth building together.

Perhaps the ‘lighthouse’ we need isn’t a utopian ideology, but a practical, scalable approach — anchored in evidence, open to all, and built step by step. Would love your thoughts!

I'm not an expert on moral weights research itself, but approaching this rationally, I’m strongly in favour of commissioning an independent, methodologically distinct reassessment of moral weights—precisely because a single, highly-cited study can become an invisible “gravity well” for the whole field.

Two design suggestions that echo robustness principles in other scientific domains:

  1. Build in structured scepticism.
    Even a small team can add value if its members are explicitly chosen for diverse priors, including at least one (ideally several) researchers who are publicly on record as cautious about high animal weights. The goal is not to “dilute” the cause, but to surface hidden assumptions and push every parameter through an adversarial filter.
  2. Consider parallel, blind teams.
    A lightweight version of adversarial collaboration: one sub-team starts from a welfare-maximising animal-advocacy stance, another from a welfare-sceptical stance. Each produces its own model and headline numbers under pre-registered methods; then the groups reconcile differences. Where all three sets of numbers (Team A, Team B, RP) converge, we gain confidence. Where they diverge, at least we know which assumptions drive the spread.

The result doesn’t have to dethrone RP; even showing that key conclusions are insensitive to modelling choices (or, conversely, highly sensitive) would be valuable decision information for funders.

In other words: additional estimates may not be “better” in isolation, but they give us a better-calibrated collective confidence interval—and for something as consequential as cross-species moral weights, that’s well worth the cost.
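If it helps, the reconciliation step in point 2 can be made almost mechanical. A rough sketch in Python, with invented placeholder numbers rather than real estimates:

```python
# Headline moral-weight estimates per species from three independent exercises
# (all figures are made up for illustration).
estimates = {
    "chickens": {"Team A": 0.30, "Team B": 0.25, "RP": 0.33},
    "shrimp":   {"Team A": 0.05, "Team B": 0.001, "RP": 0.03},
}

for species, by_team in estimates.items():
    values = list(by_team.values())
    spread = max(values) / min(values)   # ratio of highest to lowest estimate
    verdict = "converged" if spread < 3 else "diverged: inspect the driving assumptions"
    print(f"{species}: spread x{spread:.1f} ({verdict})")
```

The interesting output is less the verdict itself than the list of upstream parameters (sentience probability, intensity ranges, and so on) whose differences explain any large spread.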

Thank you for the thoughtful follow-up. I fully agree that laws and formal rules work only to the extent that people actually believe in them. If a regulation lacks internal assent, it quickly turns into costly policing or quiet non-compliance. So the external layer must rest on genuine, internalised conviction.

Regarding the prospect of a new “behavior-first” ideology: I don’t dismiss the idea at all, but I think such an ideology would need to meet three demanding criteria to avoid repeating the over-promising grand narratives of the past:

  1. Maximally inclusive and evidence-based
    – It should speak a language that diverse groups can recognise as their own, while remaining anchored in empirically verifiable claims (no promise of a metaphysical paradise).
  2. Backed by socio-technical trust mechanisms
    – Cryptographically auditable processes, open metrics, and transparent feedback loops so that participants can see that principles are applied uniformly and can verify claims for themselves.
  3. A truthful, pragmatic beacon rather than a utopian slogan
    – A positive horizon that is achievable in increments, with clear milestones and a built-in capacity for course correction. In other words, a lighthouse—bright, but firmly bolted to the rocks—rather than a mirage.


You mentioned the possibility of viable formulas that have never been tried. I would be very interested to hear your ideas: what practical steps or pilot designs do you think could meet the inclusivity, transparency, and truthfulness tests outlined above?

Thanks — I’ll DM you an address; I’d love to read the full book.

And I really like the cookie example: it perfectly illustrates how self-prediction turns a small temptation into a long-run coordination problem with our future selves. That mechanism scales up neatly to the dam scenario: when a society “eats the cookie” today, it teaches its future selves to discount tomorrow’s costs as well.

Those two Ainslie strategies — self-prediction and early pre-commitment — map nicely onto Time × Scope: they effectively raise the future’s weight (δ) without changing the math. I’m keen to plug his hyperbolic curve into the model and see how it reshapes optimal commitment devices for individuals and, eventually, AI systems.
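To make the “raising the future’s weight” point concrete, here is a toy calculation of my own (not from the book): under a hyperbolic curve the smaller-sooner option only overtakes the larger-later one at the last moment, so a binding choice made earlier locks in the patient option.

```python
def hyperbolic(delay, k=0.1):
    return 1.0 / (1.0 + k * delay)   # Ainslie-style hyperbolic discount weight

# Larger-later reward: value 10 on day 10.  Smaller-sooner: value 6 on day 3.
for today in range(4):
    later = 10 * hyperbolic(10 - today)
    sooner = 6 * hyperbolic(3 - today)
    choice = "wait" if later > sooner else "take the cookie"
    print(f"day {today}: later={later:.2f}, sooner={sooner:.2f} -> {choice}")
```

Pre-committing on days 0 to 2, while “wait” still wins, behaves like evaluating the future with a higher effective δ, which is the mapping onto Time × Scope I had in mind.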

Thanks again for offering the file and for the clear, memorable examples!

Thank you for such a thoughtful comment and deep engagement with my work! I’m thrilled this topic resonates with you—especially the idea of moral weight for future sentient beings. It’s truly a pivotal challenge.

I agree completely: standardizing a sentience scale (for animals, AI, even hypothetical species) is foundational for a fair w-vector. As you rightly noted, this will radically reshape eco-policy, agritech, and AI ethics.

This directly ties into navigating uncertainty (which you highlighted!), where I argue for balancing two imperatives:

  • Moral conservatism: Upholding non-negotiable safeguards (e.g., preventing extreme suffering),
  • Progressive expansion: Carefully extending moral circles amid incomplete data.

Where do you see the threshold? For instance:

  • Should we require 90% confidence in octopus sentience before banning live-boiling?
  • Or act on a presumption of sentience for such beings?
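A toy comparison of the two rules, with invented numbers purely to show how they can come apart:

```python
p_sentient = 0.40         # credence that octopuses are sentient (illustrative)
harm_if_sentient = 100.0  # badness of live-boiling, on an arbitrary scale
tolerance = 10.0          # expected harm we are willing to accept

confidence_rule = p_sentient >= 0.90                             # act only above the 90% bar
expected_harm_rule = p_sentient * harm_if_sentient > tolerance   # act on expected harm

print("confidence-threshold rule says ban:", confidence_rule)    # False at 40% credence
print("expected-harm rule says ban:", expected_harm_rule)        # True: 0.4 * 100 = 40 > 10
```

At 40% credence the two rules give opposite answers, which is the practical gap between the “90% confidence” bar and acting on a presumption of sentience.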

Thanks for reading the post — and for the pointer! I only know Ainslie’s Breakdown of Will from summaries and some work on hyperbolic discounting, so I’d definitely appreciate a copy if you're open to sharing.

The Time × Scope model currently uses exponential discounting just for simplicity, but it's modular — Ainslie’s hyperbolic function (or even quasi-hyperbolic models like β-δ) could easily be swapped in without breaking the structure.
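For anyone curious what “modular” means in practice, a minimal sketch in Python (the function names and wiring are my own stand-ins, not an official implementation of the model):

```python
# Three standard discount curves; any of them can be handed to the model unchanged.
def exponential(delta):
    return lambda t: delta ** t                      # D(t) = delta^t

def hyperbolic(k):
    return lambda t: 1.0 / (1.0 + k * t)             # Ainslie: D(t) = 1 / (1 + k*t)

def quasi_hyperbolic(beta, delta):
    return lambda t: 1.0 if t == 0 else beta * delta ** t   # beta-delta (Laibson)

def present_value(stream, discount):
    """Weight a stream of (delay, value) pairs by the chosen discount curve."""
    return sum(discount(t) * v for t, v in stream)

stream = [(0, 1.0), (1, 1.0), (10, 1.0)]
print(present_value(stream, exponential(0.95)))
print(present_value(stream, hyperbolic(0.5)))
print(present_value(stream, quasi_hyperbolic(0.7, 0.95)))
```

Swapping the curve changes how steeply the future’s weight falls off, but nothing downstream has to change, which is all the modularity claim amounts to.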

Curious: what parts of Breakdown of Will do you find most relevant for thinking about long-term moral commitment or self-alignment? Would love to dive deeper into those sections first.

Absolutely agree — if we don’t take moral development seriously and consciously, especially in an age of accelerating AI capabilities, we may not survive as a species. Evolution may have shaped our baseline morality, but it’s now up to us to actively and wisely evolve the next layer.

The post Developing AI Safety: Bridging the Power-Ethics Gap highlights this challenge well: as our power increases, so must our ethics.

At the same time, we can’t afford to neglect the foundational layer — the basic moral norms that hold societies together. Moral progress isn’t just about reaching higher; it’s about keeping the ground solid as we climb.

And perhaps it doesn’t matter how we grow — through religion, psychology, cultural evolution, or entirely new sociotechnical strategies — so long as we do.

Thanks for your thoughtful comment — it really deepens and enriches the conversation!
