TL;DR A recent debate on whether morality is objective sparked important insights, but also revealed deep confusion stemming from ambiguous definitions of what "objective" means. This post introduces a structured way to think about different levels of morality using the Time × Scope model (introduced here). It helps explain why morality often appears paradoxically both relative and universal, and how high-level ideals eventually become foundational norms.
1. The Confusion Around Moral Objectivity
Many people struggle with the question: is morality like physics, or like taste? Can it be right or wrong to torture a baby, in the same way it's wrong to say 2+2=5? Or is morality a matter of deeply held feelings, stories, or cultural preferences?
In a recent forum post, this disagreement played out across dozens of thoughtful comments. Some argued for moral realism, others for constructivism or anti-realism. But a key insight often got lost: we need to distinguish between how we justify moral norms, how likely they are to work, and how widely they're shared.
That’s where Time × Scope can help.
2. Morality as a Two-Layer System: Basic and High Morality
We propose a simple conceptual division:
- Basic Morality: Shared norms backed by high confidence (ρ ≈ 1)—their consequences are immediate, well‑tested, and widely acknowledged. "Killing is bad" or "Protect children" are rules almost every community internalizes because violating them predictably destroys trust and stability. These norms often—but not always—operate on a shorter time horizon (lower δ) and within a comparatively narrow moral circle. What makes them basic is the strength of evidence and institutional reinforcement, not necessarily the size of the circle.
- High Morality: Aspirational, high-uncertainty norms. Think animal rights, global redistribution, longtermism. These ideals often require greater δ (longer time horizon) and wider w (larger moral scope). They depend on predictions about systems and future outcomes, and their benefits may be abstract or delayed.
These layers can coexist. But they serve different purposes. Basic morality is stabilizing. High morality is visionary.
3. When High Morality Becomes Basic
One of the most interesting dynamics in moral history is the shift of certain norms from the high layer to the basic layer.
Example: Slavery
- Slavery was once defended on economic and cultural grounds; the long-term benefits of abolition were unclear.
- Over time, evidence, empathy, and institutional reinforcement increased the perceived probability (call it ρ) that abolishing slavery benefits everyone.
- Today, anti-slavery is a baseline moral assumption.
This transition can be formalized:
High-Morality Norm → (as ρ ↑ and the norm's δ and w become widely shared) → Basic Moral Assumption
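The transition above can be sketched as a toy simulation. Everything here is an illustrative assumption layered on the model's three parameters (ρ, δ, w): the 0.9 "basic" threshold, the evidence-update rule, and the 30-step horizon are all hypothetical, chosen only to show how a norm can start in the high layer and settle into the basic layer as ρ rises.

```python
# Toy sketch of the Time × Scope transition.
# Assumptions (not part of the original model): the "basic" threshold of 0.9,
# the multiplicative update rule, and the number of update steps.

def classify(rho, basic_threshold=0.9):
    """Classify a norm by its confidence rho (0..1)."""
    return "basic" if rho >= basic_threshold else "high"

def update_rho(rho, evidence_strength, rate=0.1):
    """Nudge rho toward 1 as supporting evidence and reinforcement accumulate."""
    return rho + rate * evidence_strength * (1 - rho)

# Anti-slavery as the example: it starts as a high-morality ideal (low rho).
rho = 0.3
history = [classify(rho)]
for _ in range(30):  # rounds of evidence, empathy, institutional reinforcement
    rho = update_rho(rho, evidence_strength=1.0)
    history.append(classify(rho))

print(history[0], "->", history[-1])  # high -> basic
```

The point of the sketch is only that the layer a norm occupies is a function of ρ, not a fixed property: the same norm crosses from "high" to "basic" once confidence in its consequences stabilizes.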
We see similar shifts with child labor, women’s rights, and climate responsibility.
4. Why Morality Appears Both Subjective and Objective
This framework allows us to resolve a key tension:
- Subjective: Because morality begins with perspective-based desires, cultural learning, and individual emotion.
- Objective-seeming: Because as evidence accumulates and norms stabilize, societies converge on some shared ethics. This convergence is not innate, but it feels inevitable.
Moral objectivity, then, is not a metaphysical property, but an emergent property of distributed agreement, constrained by systemic feedback and bounded rationality.
5. Mismatches and Moral Failure
Sometimes societies collapse when they:
- Pursue high morality while neglecting basic morality (e.g., utopian revolutions that ignore basic dignity).
- Rigidly defend outdated basic morality that prevents high-morality evolution (e.g., xenophobic taboos).
Understanding which layer a moral norm belongs to can prevent catastrophic misalignments.
6. Implications for Ethical Discourse
When we argue over moral truths, we’re often talking past each other:
- One side speaks from high-morality ideals (δ and w maxed out).
- The other defends basic moral coherence (ρ-calibrated norms).
Time × Scope helps us:
- Frame disagreement as difference in confidence, not difference in compassion.
- Track the maturation of moral norms.
- Diagnose moral confusion and prevent overreach.
7. ⚠️Disclaimer: Time × Scope as an Analytical Tool⚠️
Time × Scope is a conceptual model designed to help analyze and compare moral norms based on time horizon, moral scope, and confidence in outcomes. It is not a universal moral blueprint. Without clearly defined constraints on its application—such as constitutional limits, accountability institutions, or revision mechanisms—it should not be used as a standalone framework for designing future ethical systems.
The model is meant to clarify moral dynamics, not to replace the philosophical, political, or cultural processes required for making normative decisions.
8. Final Thought
Moral progress is not abandoning subjectivity for objectivity. It’s turning shared subjectivity into stable structure. Over time, some norms “solidify” into near-objectivity not because they were always true, but because they proved themselves, over time and across systems.
And that’s worth building on.
Appendix: Definitions
- Time (δ): Discount factor. How much weight we give to the future.
- Scope (w): Moral circle. How widely we distribute concern.
- Basic Morality: High ρ (confidence) norms—often, but not always, associated with lower δ and narrower w. Their defining feature is strong evidence and immediate, widely acknowledged consequences.
- High Morality: High δ, high w, low ρ.
- ρ (rho): Probability that applying the norm leads to desired consequences.
Interested in this framing? You can read the original framework post here.
And of course, feel free to challenge, critique, or expand in the comments. Use the model to map emerging moral shifts—whether it's AGI rights, neuroethics, or something else entirely.
Question for discussion: Which current 'high' moral norms do you think will become 'basic' within the next 50 years?
Thank you very much for your interest in my proposal.
My idea of an "ideology of behavior" seems to me the logical conclusion of the civilizing process by which certain moralistic religions (the so-called "compassionate religions") came to prioritize conceptions of moral motivation with behavioral implications (benevolent and altruistic behavior). These always involved internalizing emotions associated with prosocial symbolism: the individual soul, charity, grace, compassion. This is the Christian terminology, but the compassionate religious cultures of the East have their own terms.
The goal is always moral evolution: using certain symbolic stimuli associated with non-aggressive, empathetic, benevolent, and altruistic behavioral motivations. "Producing saints."
I can't think of a better way to produce effective altruism.
Historically, the emergence of cohesive subcultural minorities has always had great power to influence lifestyle changes from a moral perspective.
Monasticism was an invention of Buddhism, although it later gained great importance in the West. The puritanical subcultures of Reformed Christianity also played a role.
In my opinion, the creation of a morally influential minority that promotes an extremely prosocial lifestyle and, for the first time, develops based on principles of rationality could have a profound impact on the conditions of today's society. Many 19th-century thinkers already asked: if astrology evolved into astronomy, and alchemy into chemistry, why couldn't the religions of the past have a functional, coherent equivalent in the enlightened world?
The idea of an "influential minority," by the way, is not foreign to "Effective Altruism." It appears, for example, in Schubert and Caviola's book "Effective Altruism and the Human Mind."
I think we can be ambitious and set a bigger goal: if we can locate individuals with a greater propensity to perform altruistic acts, we can also locate individuals with a propensity to improve their behavior to the limits of extreme prosociality. These would be the "believers" in the behavioral ideology: individuals rationally motivated to correct their behavior in order to achieve a clear goal (extreme prosociality, "saintliness"). Didn't the members of Alcoholics Anonymous do something similar with behavior change almost a hundred years ago? And they certainly didn't need professional psychologists to do it. They relied on clear motivation, clarity of objectives, and a lucid process of development through trial and error.
I mentioned another example from the past: the Tolstoyan movement. It failed because it was poorly conceived and poorly organized, but it demonstrated that it was possible to create a non-political social movement based on principles of extreme prosocial behavior and not necessarily linked to any belief in the supernatural.
What are the motivations for altruistic action? What are the mechanisms for internalizing prosocial behavioral values? What psychological incentives and rewards do those who undertake a process of change and renunciation based on an altruistic ideal receive? How are ideologies created, cultivated, and made to flourish?
In our time, we have historical evidence of all kinds. We already know many things, and although science can advise us, this should be a matter of individual motivation and shared wisdom.
As an initial formula, I would suggest a "monastic" organization for the rational pursuit of an altruistic lifestyle. An altruistic lifestyle implies controlling aggression, cultivating rationality and empathy, scientific curiosity, and, above all, developing benevolence in behavior. In my view, such a development would not necessarily be less attractive to many young people today than monasticism was in the Middle Ages.
And, above all, keep in mind: unlike political ideologies or mass religions, a monastic structure only seeks to attract a minority. One person in a thousand? We would then be talking about eight million people fully committed to active altruistic behavior!