TL;DR A recent debate on whether morality is objective sparked important insights, but also revealed deep confusion stemming from ambiguous definitions of what "objective" means. This post introduces a structured way to think about different levels of morality using the Time × Scope model (introduced here). It helps explain why morality often appears paradoxically both relative and universal, and how high-level ideals eventually become foundational norms.
1. The Confusion Around Moral Objectivity
Many people struggle with the question: is morality like physics, or like taste? Can it be right or wrong to torture a baby, in the same way it's wrong to say 2+2=5? Or is morality a matter of deeply held feelings, stories, or cultural preferences?
In a recent forum post, this disagreement played out across dozens of thoughtful comments. Some argued for moral realism, others for constructivism or anti-realism. But a key insight often got lost: we need to distinguish between how we justify moral norms, how likely they are to work, and how widely they're shared.
That’s where Time × Scope can help.
2. Morality as a Two-Layer System: Basic and High Morality
We propose a simple conceptual division:
- Basic Morality: Shared norms backed by high confidence (ρ ≈ 1)—their consequences are immediate, well‑tested, and widely acknowledged. "Killing is bad" or "Protect children" are rules almost every community internalizes because violating them predictably destroys trust and stability. These norms often—but not always—operate on a shorter time horizon (lower δ) and within a comparatively narrow moral circle. What makes them basic is the strength of evidence and institutional reinforcement, not necessarily the size of the circle.
- High Morality: Aspirational, high-uncertainty norms. Think animal rights, global redistribution, longtermism. These ideals often require greater δ (longer time horizon) and wider w (larger moral scope). They depend on predictions about systems and future outcomes, and their benefits may be abstract or delayed.
These layers can coexist. But they serve different purposes. Basic morality is stabilizing. High morality is visionary.
3. When High Morality Becomes Basic
One of the most interesting dynamics in moral history is the shift of certain norms from the high layer to the basic layer.
Example: Slavery
- For centuries, slavery was defended on economic and cultural grounds; the long-term benefits of abolition were unclear.
- Over time, evidence, empathy, and institutional reinforcement increased the perceived probability (call it ρ) that abolishing slavery benefits everyone.
- Today, anti-slavery is a baseline moral assumption.
This transition can be formalized:
High-Morality Norm → (as ρ ↑, and the norm's δ- and w-dependent benefits become widely acknowledged) → Basic Moral Assumption
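The transition above can be sketched in code. This is a minimal, hypothetical illustration (the `Norm` class, the ρ threshold of 0.9, and all numeric values are my own assumptions, not part of the original model): a norm is represented by its Time × Scope parameters, and it crosses from the high layer to the basic layer once confidence ρ becomes near-universal.

```python
from dataclasses import dataclass


@dataclass
class Norm:
    """A moral norm described by its Time x Scope parameters (illustrative only)."""
    name: str
    delta: float  # δ: time horizon / discount factor, in [0, 1]
    w: float      # w: width of the moral circle, in [0, 1]
    rho: float    # ρ: confidence that applying the norm yields desired consequences

    def layer(self, rho_threshold: float = 0.9) -> str:
        # A norm counts as "basic" once confidence in it is near-universal;
        # below the threshold it remains an aspirational "high" norm.
        return "basic" if self.rho >= rho_threshold else "high"


# Anti-slavery as seen at two historical moments (ρ values purely illustrative):
# δ and w stay roughly constant, but accumulated evidence raises ρ.
norm_1800 = Norm("anti-slavery", delta=0.9, w=0.8, rho=0.3)
norm_today = Norm("anti-slavery", delta=0.9, w=0.8, rho=0.98)

print(norm_1800.layer())   # high
print(norm_today.layer())  # basic
```

The point of the sketch is only that the layer is a function of ρ, while δ and w describe what the norm demands, which matches the claim that the same norm can migrate between layers as evidence accumulates.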
We see similar shifts with child labor, women’s rights, and climate responsibility.
4. Why Morality Appears Both Subjective and Objective
This framework allows us to resolve a key tension:
- Subjective: Because morality begins with perspective-based desires, cultural learning, and individual emotion.
- Objective-seeming: Because as evidence accumulates and norms stabilize, societies converge on some shared ethics. This convergence is not innate, but it feels inevitable.
Moral objectivity, then, is not a metaphysical property, but an emergent property of distributed agreement, constrained by systemic feedback and bounded rationality.
5. Mismatches and Moral Failure
Sometimes societies collapse when they:
- Pursue high morality while neglecting basic morality (e.g., utopian revolutions that ignore basic dignity).
- Rigidly defend outdated basic morality that prevents high-morality evolution (e.g., xenophobic taboos).
Understanding which layer a moral norm belongs to can prevent catastrophic misalignments.
6. Implications for Ethical Discourse
When we argue over moral truths, we’re often talking past each other:
- One side speaks from high-morality ideals (δ and w maxed out).
- The other defends basic moral coherence (ρ-calibrated norms).
Time × Scope helps us:
- Frame disagreement as difference in confidence, not difference in compassion.
- Track the maturation of moral norms.
- Diagnose moral confusion and prevent overreach.
7. ⚠️ Disclaimer: Time × Scope as an Analytical Tool ⚠️
Time × Scope is a conceptual model designed to help analyze and compare moral norms based on time horizon, moral scope, and confidence in outcomes. It is not a universal moral blueprint. Without clearly defined constraints on its application—such as constitutional limits, accountability institutions, or revision mechanisms—it should not be used as a standalone framework for designing future ethical systems.
The model is meant to clarify moral dynamics, not to replace the philosophical, political, or cultural processes required for making normative decisions.
8. Final Thought
Moral progress is not abandoning subjectivity for objectivity. It’s turning shared subjectivity into stable structure. Some norms “solidify” into near-objectivity not because they were always true, but because they proved themselves, over time and across systems.
And that’s worth building on.
Appendix: Definitions
- Time (δ): Discount factor. How much weight we give to the future.
- Scope (w): Moral circle. How widely we distribute concern.
- Basic Morality: High ρ (confidence) norms—often, but not always, associated with lower δ and narrower w. Their defining feature is strong evidence and immediate, widely acknowledged consequences.
- High Morality: High δ, high w, low ρ.
- ρ (rho): Probability that applying the norm leads to desired consequences.
Interested in this framing? You can read the original framework post here.
And of course, feel free to challenge, critique, or expand in the comments. Use the model to map emerging moral shifts—whether it's AGI rights, neuroethics, or something else entirely.
Question for discussion: Which current 'high' moral norms do you think will become 'basic' within the next 50 years?
Thank you very much for your attention to my proposal. I know that new ideas are difficult to understand (especially when their author struggles to explain them), and particularly when they involve something as unusual as promoting new ideological movements (call them "utopian").
I just want to make a few brief clarifications:
Moral-evolution initiatives in the tradition of pacifism, altruism, and benevolence stemming from monastic structures do not seek to create elites, since they are situated outside the conventional world; they can also be described as community initiatives of "witness" (for example, Anabaptist communities or the Quakers). By contrast, associations such as Freemasonry, Opus Dei, and even EA-adjacent initiatives like "80,000 Hours" are projects for creating elites. What both types share is that they are influential minorities of one kind or another (all social change is, after all, set in motion by minorities).
I don't propose "norms," but rather styles of behavior based on internalized ethical values. A non-coercive prosociality.
All activities based on altruism can be complementary, although dilemmas about priorities always arise.
I understand the importance given to "long-term" issues and the alarm created by issues related to AI. Unfortunately, not all of us are sufficiently prepared to grasp the magnitude of such threats to the common good.
In my opinion, the essential factor in the progress of civilization is moral progress. Moral progress occurs through social-psychological mechanisms that are often most accessible to people motivated by empathy and altruism, and these mechanisms fall more within the realm of "wisdom."