TL;DR A recent debate on whether morality is objective produced important insights, but also revealed deep confusion stemming from ambiguous definitions of what "objective" means. This post offers a structured way to think about different levels of morality using the Time × Scope model (introduced here). The model helps explain why morality often appears paradoxically both relative and universal, and how high-level ideals eventually become foundational norms.
1. The Confusion Around Moral Objectivity
Many people struggle with the question: is morality like physics, or like taste? Can it be right or wrong to torture a baby, in the same way it's wrong to say 2+2=5? Or is morality a matter of deeply held feelings, stories, or cultural preferences?
In a recent forum post, this disagreement played out across dozens of thoughtful comments. Some argued for moral realism, others for constructivism or anti-realism. But a key insight often got lost: we need to distinguish between how we justify moral norms, how likely they are to work, and how widely they're shared.
That’s where Time × Scope can help.
2. Morality as a Two-Layer System: Basic and High Morality
We propose a simple conceptual division:
- Basic Morality: Shared norms backed by high confidence (ρ ≈ 1). They are well tested, and their consequences are immediate and widely acknowledged. "Killing is bad" or "Protect children" are rules almost every community internalizes, because violating them predictably destroys trust and stability. These norms often, but not always, operate on a shorter time horizon (lower δ) and within a comparatively narrow moral circle. What makes them basic is the strength of evidence and institutional reinforcement, not necessarily the size of the circle.
- High Morality: Aspirational, high-uncertainty norms. Think animal rights, global redistribution, longtermism. These ideals often require greater δ (a longer time horizon) and wider w (a larger moral scope). They depend on predictions about systems and future outcomes, and their benefits may be abstract or delayed.
These layers can coexist. But they serve different purposes. Basic morality is stabilizing. High morality is visionary.
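To make the two layers concrete, here is a minimal Python sketch. The threshold values, the `Norm`/`layer` names, and the "contested" middle category are illustrative assumptions added for this sketch, not part of the model; the point is only that both layers live in the same (δ, w, ρ) space and differ in where a norm sits.

```python
from dataclasses import dataclass

# Illustrative thresholds: assumptions for this sketch, not part of the model.
RHO_BASIC = 0.9    # confidence above which a norm reads as "basic"
DELTA_HIGH = 0.95  # discount factor implying a long time horizon
W_HIGH = 0.7       # scope weight implying a wide moral circle

@dataclass
class Norm:
    name: str
    delta: float  # δ: time horizon, as a discount factor in [0, 1]
    w: float      # w: moral scope, 0 = self only, 1 = all sentient beings
    rho: float    # ρ: confidence that applying the norm yields desired consequences

def layer(norm: Norm) -> str:
    """Classify a norm into a layer of the Time × Scope model."""
    if norm.rho >= RHO_BASIC:
        return "basic"      # well tested, widely reinforced
    if norm.delta >= DELTA_HIGH or norm.w >= W_HIGH:
        return "high"       # aspirational: long horizon and/or wide scope
    return "contested"      # neither settled nor clearly aspirational

print(layer(Norm("protect children", delta=0.5, w=0.3, rho=0.98)))  # -> basic
print(layer(Norm("longtermism", delta=0.99, w=0.9, rho=0.3)))       # -> high
```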
3. When High Morality Becomes Basic
One of the most interesting dynamics in moral history is the shift of certain norms from the high layer to the basic layer.
Example: Slavery
- For centuries, slavery was defended on economic and cultural grounds, and the long-term benefits of abolition seemed unclear.
- Over time, evidence, empathy, and institutional reinforcement raised the perceived probability ρ that abolishing slavery benefits everyone.
- Today, anti-slavery is a baseline moral assumption.
This transition can be formalized:
High-Morality Norm → (ρ rises as evidence accumulates; the norm's δ and w commitments become widely distributed across society) → Basic Moral Assumption
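As a toy illustration of that arrow, here is a minimal Python sketch: perceived confidence ρ is pulled toward a norm's actual reliability as evidence accumulates, and the norm reclassifies as basic once ρ crosses a threshold. The update rule, learning rate, and all numbers are illustrative assumptions, not claims about real moral history.

```python
# A toy dynamic for the high-to-basic transition. Each step, accumulated
# evidence nudges perceived confidence ρ toward the norm's actual reliability.

def steps_until_basic(rho0: float, true_reliability: float,
                      learning_rate: float = 0.15,
                      rho_basic: float = 0.9) -> int:
    """Return how many evidence-accumulation steps until ρ crosses the basic threshold."""
    rho = rho0
    for step in range(1, 201):
        rho += learning_rate * (true_reliability - rho)  # evidence pulls ρ upward
        if rho >= rho_basic:
            return step
    return -1  # never crossed within the simulated horizon

# e.g. abolition: starts contested (ρ = 0.2) but is in fact highly reliable.
print(steps_until_basic(rho0=0.2, true_reliability=0.98))  # -> 15 steps
```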
We see similar shifts with child labor, women’s rights, and climate responsibility.
4. Why Morality Appears Both Subjective and Objective
This framework allows us to resolve a key tension:
- Subjective: Because morality begins with perspective-based desires, cultural learning, and individual emotion.
- Objective-seeming: Because as evidence accumulates and norms stabilize, societies converge on some shared ethics. The convergence is not guaranteed in advance, but in hindsight it feels inevitable.
Moral objectivity, then, is not a metaphysical property, but an emergent property of distributed agreement, constrained by systemic feedback and bounded rationality.
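A toy simulation can make this emergence visible: agents start with scattered, subjective confidence in a norm, then update on noisy systemic feedback and on each other's views. Every parameter below is an illustrative assumption; the sketch only shows how distributed agreement can produce convergence that feels objective.

```python
import random

# Toy model of "distributed agreement": agents begin with scattered, subjective
# confidence in a norm, then repeatedly (a) observe noisy feedback on how well
# the norm actually works and (b) move partway toward the average view around them.

random.seed(0)
N_AGENTS, STEPS = 100, 60
TRUE_RELIABILITY = 0.95  # in this toy world, the norm does in fact work
beliefs = [random.random() for _ in range(N_AGENTS)]  # subjective starting points

for _ in range(STEPS):
    consensus = sum(beliefs) / N_AGENTS
    for i in range(N_AGENTS):
        evidence = TRUE_RELIABILITY + random.gauss(0, 0.1)  # noisy systemic feedback
        beliefs[i] += 0.1 * (evidence - beliefs[i]) + 0.05 * (consensus - beliefs[i])

print(f"mean belief = {sum(beliefs) / N_AGENTS:.2f}, "
      f"spread = {max(beliefs) - min(beliefs):.2f}")
# Beliefs cluster near TRUE_RELIABILITY: agreement that feels objective.
```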
5. Mismatches and Moral Failure
Sometimes societies collapse when they:
- Pursue high morality while neglecting basic morality (e.g., utopian revolutions that ignore basic dignity).
- Rigidly defend outdated basic morality that prevents high-morality evolution (e.g., xenophobic taboos).
Understanding which layer a moral norm belongs to can prevent catastrophic misalignments.
6. Implications for Ethical Discourse
When we argue over moral truths, we’re often talking past each other:
- One side speaks from high-morality ideals (δ and w maxed out).
- The other defends basic moral coherence (ρ-calibrated norms).
Time × Scope helps us:
- Frame disagreement as difference in confidence, not difference in compassion.
- Track the maturation of moral norms.
- Diagnose moral confusion and prevent overreach.
7. ⚠️ Disclaimer: Time × Scope as an Analytical Tool ⚠️
Time × Scope is a conceptual model designed to help analyze and compare moral norms based on time horizon, moral scope, and confidence in outcomes. It is not a universal moral blueprint. Without clearly defined constraints on its application—such as constitutional limits, accountability institutions, or revision mechanisms—it should not be used as a standalone framework for designing future ethical systems.
The model is meant to clarify moral dynamics, not to replace the philosophical, political, or cultural processes required for making normative decisions.
8. Final Thought
Moral progress is not abandoning subjectivity for objectivity. It's turning shared subjectivity into stable structure. Some norms "solidify" into near-objectivity not because they were always true, but because they proved themselves, over time and across systems.
And that’s worth building on.
Appendix: Definitions
- Time (δ): Discount factor. How much weight we give to the future.
- Scope (w): Moral circle. How widely we distribute concern.
- Basic Morality: High ρ (confidence) norms—often, but not always, associated with lower δ and narrower w. Their defining feature is strong evidence and immediate, widely acknowledged consequences.
- High Morality: High δ, high w, low ρ.
- ρ (rho): Probability that applying the norm leads to desired consequences.
Interested in this framing? You can read the original framework post here.
And of course, feel free to challenge, critique, or expand in the comments. Use the model to map emerging moral shifts—whether it's AGI rights, neuroethics, or something else entirely.
Question for discussion: Which current 'high' moral norms do you think will become 'basic' within the next 50 years?
Absolutely agree — if we don’t take moral development seriously and consciously, especially in an age of accelerating AI capabilities, we may not survive as a species. Evolution may have shaped our baseline morality, but it’s now up to us to actively and wisely evolve the next layer.
The post “Developing AI Safety: Bridging the Power-Ethics Gap” highlights this challenge well: as our power increases, so must our ethics.
At the same time, we can’t afford to neglect the foundational layer — the basic moral norms that hold societies together. Moral progress isn’t just about reaching higher; it’s about keeping the ground solid as we climb.
And perhaps it doesn’t matter how we grow — through religion, psychology, cultural evolution, or entirely new sociotechnical strategies — so long as we do.
Thanks for your thoughtful comment — it really deepens and enriches the conversation!