Author's note: This post serves as my report of potential directions I envision for my own PhD dissertation work. I would be very grateful for any comments, feedback, suggestions on research directions, useful readings, and anything else a reader might find worth mentioning. Moreover, if you are a researcher working on (or interested in) any of these directions, please reach out! I am very eager to expand my collaborative work. I am grateful for prior feedback and comments by my advisor, Dr @Falk Lieder, as well as my committee members from the UCLA Department of Psychology, Dr Ian Krajbich and Dr Patricia Cheng.
“Moral learning [...] can go beyond the acquisition of known moral concepts or internalization of prevailing social norms, and can extend to the formation of novel moral concepts and evaluations, resulting in dramatic personal and social change even within one lifetime” – Railton (2017; p. 172)
1) Introduction
Should you throw away left-over food? Is it okay to hunt and kill animals for pleasure? Should you water your garden when your region is under a severe drought? Should you donate money to your local soup kitchen instead of to children facing life-threatening diseases in sub-Saharan Africa? These are but a few scenarios that illustrate the basic idea of social dilemmas: what should people do in situations where private and collective interests are seemingly at odds with one another? More generally, these kinds of dilemmas involve conflicts between rules and obligations, and cost-benefit reasoning (Maier et al., 2024). As in many other decisions, people may rely on an intuitive system of decision-making to guide their morality and arrive at a solution (Greene, 2014; Cushman et al., 2017), typically manifested through heuristics and biases. While philosophers have devoted a lot of time to discussing and reconciling theories of morality that propose solutions to moral dilemmas (e.g., Ord, 2009; Parfit, 1984), comparatively little effort has been invested in applying these models of morality to inform an answer to the question of exactly how humans learn to make moral judgments (Cushman et al., 2017).
Moreover, humans appear to collectively change their morality over time (c.f. Pinker, 2011; Buchanan, 2020; MacAskill, 2022), through deliberate societal changes (e.g., universal suffrage, same-sex marriage, abolition of slavery) that expand our circle of moral concern (Sauer et al., 2021). In other words, whereas two hundred years ago it was permissible to purchase and trade human beings for profit and labor, in modern times, this is not only illegal and impermissible, but morally repugnant. This suggests that at some point, humanity began to learn to assign moral weight to the lives of all humans (and some non-humans), irrespective of social group membership, leading to a gradual expansion of individuals’ moral circle of concern (Sauer et al., 2021; Ayars, 2016). This raises the question: how did people learn to overcome the bottlenecks that prevented them from acknowledging this universal moral weight in the first place?
One possible answer is motivated by utilitarian analysis (Baron, 2024). More specifically, it is motivated by the idea that we ought to decide in a way that pays attention to, and considers, the perceptible and imperceptible consequences of our decisions on others (Parfit, 1984). So, how do people learn whose welfare to pay attention to when they make moral decisions? Relatedly, is the development of moral expansiveness constrained by attentional biases in what consequences people learn from? Can attentional biases in which consequences people learn from cause their circle of moral concern to contract? While some have approached similar questions from a philosophical perspective, I explore the role of attention in exercising a consequentialist approach to moral decision-making by drawing on work that explains people’s attention regulation (Lieder et al., 2018), and attention-induced value (e.g., Pleskac et al., 2023; Krajbich et al., 2010).
This paper is a preliminary literature review around this central theme of moral learning, with particular emphasis on how attention contributes to acquiring decision strategies that determine and guide a person’s moral decision-making. Section 2 will shed light on the current philosophical and psychological foundations of candidate normative models of moral judgment and decision-making. Section 3 connects this theoretical background with the current literature on moral learning, specifically, in terms of how people learn decision strategies that guide their morality, with a special focus on reinforcement learning. Section 4 ties these two strands of literature with one of the core topics of cognitive psychology: attention. Section 5 concludes and synthesizes the current state of knowledge by providing a set of open questions that can inform a research agenda.
2) Moral decision-making: philosophical foundations and psychological theories
Research on judgment and decision making has uncovered a variety of heuristics and biases that lead judgments to deviate from normative standards of decision making, such as expected utility theory (Baron, 2024). Ordinary judgments involve an assessment of two or more alternatives based on evidence, individual preferences and goals, which collectively motivate making a decision. On the other hand, moral judgments are about what someone should decide to do in a particular situation, irrespective of people’s tastes or preferences. In this way, moral judgments are both universal and impersonal (Baron, 2024). With these distinctions, morality in psychology can be defined as a “system of beliefs or set of values relating to right conduct, against which behavior is judged to be acceptable or unacceptable” (APA, n.d.). Any judgment of right conduct, and the acceptability or unacceptability of behavior, requires us to think about what others should do (c.f. Baron, 2024).
Against this backdrop, the current section begins by exploring a set of candidate normative models for moral judgment and decision-making. Specifically, I explore three moral theories from philosophy and discuss some of their applications to moral psychology. Following this discussion, I assume and justify using utilitarianism as a normative model for moral judgment and decision-making. Then, I discuss the psychological mechanisms underpinning utilitarianism and the individual differences shaping the moral consideration that people extend to different entities.
Normative models of moral decision making
A central question to moral judgment and decision making is: which normative model is most appropriate for evaluating moral decisions? While the answer to this question is far from obvious, three models stand out. Utilitarianism often emerges as a key contender (Greene, 2014; Baron, 2024; Baron et al., 2012; Cohen & Ahn, 2016), emphasizing the maximization of overall welfare. A frequently cited alternative is deontology (Holyoak & Powell, 2016; Bennis et al., 2010), according to which the acceptability of acts is based on their conformity to rules, rights, duties, and the “universalizability” of rules (Kant, 1785). A third normative view, contractualism, broadly suggests that moral norms are principles that no one can reasonably reject since they are drawn from the principle of mutual agreement (Levine, Chater, Tenenbaum, & Cushman, 2024). I will briefly look at each of these in turn.
Baron (2024) argues that utilitarianism can be used as a normative model in moral judgment and decision making in the same way that expected utility theory has been used as a normative model in judgment and decision making and economic theory. Formally, a normative model of utilitarianism in morality is built upon the following axioms (Chappell, Meissner & MacAskill, 2024; p.10):
- Consequentialism: one ought to create outcomes that promote overall value;
- Welfarism: the value of an outcome is determined by the total well-being of all individuals affected by it;
- Impartiality: the well-being of everyone counts equally, no matter who experiences it;
- Aggregationism: the value of an outcome is given by the sum of the value of all the lives it contains.
To illustrate the four axioms of utilitarianism, let’s suppose that a country of 100 people faces a one in five chance of catching a natural disease without getting vaccinated. If they do get vaccinated, the vaccination causes an equally serious disease with a one in 20 chance. The government has to decide whether or not to institute a vaccination recommendation for all, on the assumption that, if instituted, the recommendation is taken up. Starting from the consequentialism axiom, the decision should be based on the outcome that promotes overall value, focusing on the consequences of vaccinating or not. Without vaccination, each person has a 20% chance of catching the natural disease, so 20 people are expected to get sick. With vaccination, the risk of disease drops to a 5% chance, so only 5 people are expected to get sick. Thus, the vaccination minimizes the expected number of sick people in the country, thereby promoting overall value. The axiom of welfarism suggests that the value of an outcome should be judged by the total well-being of the population. The natural disease presumably has negative effects on people’s well-being, and reducing the number of people who get sick from 20 to 5 substantially reduces these negative effects. Therefore, welfarism and consequentialism both prescribe the vaccination recommendation. The axiom of impartiality requires that every person’s well-being counts equally, irrespective of individual characteristics. The prescribed vaccine recommendation does not change based on individual differences; all citizens are equally likely to benefit from the reduced risk of getting sick. This means that impartiality supports vaccination. In the case of the fourth axiom, aggregationism, the decision of whether or not to institute a vaccination recommendation should be based on the “value of the outcome [...] given by the sum value of the lives it contains” (Chappell et al., 2024; p.15). When fewer people suffer from the disease, aggregate well-being is higher: 15 fewer people are expected to get sick under the vaccination recommendation. Here, the fourth axiom also favors vaccination, because it substantially reduces the total number of people suffering from illness, thereby increasing the sum of well-being. In this simplified scenario, utilitarianism predicts that, in the face of the natural disease, the government should institute a vaccine recommendation for all.
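To make the arithmetic explicit, the expected number of sick people under each policy (given full uptake, as assumed above) is

$$\mathbb{E}[\text{sick} \mid \text{no recommendation}] = 100 \times \tfrac{1}{5} = 20, \qquad \mathbb{E}[\text{sick} \mid \text{recommendation}] = 100 \times \tfrac{1}{20} = 5,$$

so the recommendation leaves 15 fewer people sick in expectation, which is exactly what the welfarist and aggregationist axioms count in its favor.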
In sum, utilitarianism points toward outcomes, acts, decisions, and motives that aim to maximize welfare, irrespective of who brings them about or who experiences them. In line with the impartiality axiom, utilitarians are agent-neutral (Hare, 1981). On the other hand, deontology judges actions and their acceptability based on a set of rules (such as you must not kill or you must not harm some to help others), irrespective of their consequences. In this way, deontology is said to be agent-relative; what someone is morally obligated to do can differ depending on their specific connection to others (Holyoak & Powell, 2016). This creates a unique set of moral reasons for each individual agent, rather than a universal set of rules applicable to everyone equally. Unlike utilitarianism, deontology accounts for widely-shared moral intuitions about what is right and wrong. Commonly-held moral intuitions, such as an objection to killing, or to making some people sick, are immediately accounted for in deontology. For deontologists, killing is a morally reprehensible act because it violates certain principles, such as the first formulation of Kant’s categorical imperative: act only on maxims that one could will to become universal laws.
An influential account of deontology in moral psychology lies in Kohlberg’s (1981, 1984) theory of moral development. Building upon some of Piaget’s work, Kohlberg (1981, 1984) outlines a progression through six stages where moral reasoning matures from a preconventional level to a more principled approach grounded in universal ethical principles. In the preconventional level, stages one and two, moral reasoning is primarily concerned with obedience to authority and immediate personal benefits. Stage one focuses on following rules to avoid punishment, while stage two shifts towards recognizing individual interests in reciprocal relationships. The conventional level, encompassing stages three and four, begins to reflect more mature deontological influences. Stage three focuses on good interpersonal relationships, where actions are judged based on societal approval and maintaining good relations. Stage four advances this by considering the necessity of maintaining social order through upholding laws and fulfilling societal duties. Finally, the post-conventional level, featuring stages five and six, epitomizes the deontological emphasis on universal principles. Stage five introduces the idea of a social contract where laws are seen as flexible tools to promote the greater good, requiring adherence as long as they contribute effectively to societal welfare. Stage six, deeply rooted in Kantian deontology, prioritizes abstract moral reasoning from universal ethical principles such as justice and rights (c.f. Rawls, 1971). Here, the morality of actions is judged against these principles, suggesting a duty to challenge unjust laws and practices. Put together, Kohlberg's theory suggests that moral development moves towards a greater appreciation of duties and rights that are recognized as valid beyond immediate or parochial contexts, aligning closely with the deontological view that moral actions transcend specific laws and are grounded in duties that are universally applicable.
The latter stages of Kohlberg's moral developmental theory connect with a third influential view on moral judgment and decision-making: contractualism. Broadly defined, a contractualist approach posits that while ideal moral judgments are those that rational agents would agree upon under perfect conditions, such perfect agreement is often unfeasible in real-world scenarios due to the complex and dynamic nature of human interactions and the cognitive costs associated with constant negotiation (Levine et al., 2024). As a result, Levine et al. (2024) suggest that individuals use a variety of heuristics and abstractions to make moral decisions efficiently, approximating the ideal outcomes that would be achieved through exhaustive bargaining; thus engaging in a resource-rational (Lieder et al., 2024) approximation of contractualism. These simplified processes are designed to balance the cognitive effort and social resources expended in making decisions against the need to achieve fair and mutually beneficial outcomes. By employing these heuristics, individuals can navigate moral landscapes in a way that is both practically feasible and aligned with the core contractualist principle of seeking agreements that all parties can accept.
To synthesize, this exploration discussed three theories of morality: utilitarianism, deontology, and contractualism. A key point that emerges is that moral judgments, distinct from ordinary judgments, are prescriptive and intended to guide behavior, suggesting what others should do in specific circumstances (Baron, 2024). In this way, Hare (1981) argues that moral judgments are universalizable, meaning that they are “meant to apply to anyone who is in certain circumstances. [They are] impersonal” (Baron, 2024; p. 382). Utilitarianism, as a normative model, inherently aligns with this perspective by dictating actions and decisions that maximize overall welfare, irrespective of who experiences that welfare. It offers a systematic framework for evaluating the consequences of actions, which is useful not only for individual decision-making but also for guiding public policies that can propose and predict concrete solutions to moral dilemmas.
Utilitarianism as a normative model of moral judgment and decision making
In this section, I sketch a justification for using utilitarianism as a normative model of moral judgment and decision making. Then, I briefly outline a few deviations from the predictions of utilitarianism evidenced by a notable account of rational altruism by Parfit (1984) and empirical studies in the literature on moral psychology (e.g., Groß et al., 2024).
A prominent justification for utilitarianism as a normative model of moral judgment and decision making lies in its parallel with the use of expected utility theory to make predictions about judgment and decision making (Baron, 2024). Deviations from expected utility theory in rational decision making informed a broader research agenda on cognitive biases. Just as the seminal work of the late Daniel Kahneman and Amos Tversky highlighted how people often diverge from theoretically rational choices, a utilitarian framework can illuminate where and how moral judgments diverge from what would be predicted under a strict utility maximization approach (Sunstein, 2005). This does not diminish the value of utilitarianism as a normative model; rather, it enriches our understanding of human moral cognition by providing a baseline from which deviations can be examined and understood. Choosing utilitarianism as this baseline model, then, should not be interpreted as a claim about which moral theory is correct, but simply as a model that makes precise predictions of decisions that would maximize welfare under given conditions. It provides a scaffold for examining how actual human decisions align with or deviate from these predictions.
Deviations from utilitarianism have been documented in philosophical texts (e.g., Parfit, 1984), and empirical studies (e.g., Groß et al., 2024; Cohen & Ahn, 2016). Typically, the empirical work conducted in moral psychology has employed experimental paradigms using realistic and/or hypothetical social and moral dilemmas that pit private and collective interests against each other (Van Lange et al., 2016; Ellemers et al., 2019). While solutions to these dilemmas can be contentious and, at times, even dramatic (Cohen & Ahn, 2016), Parfit (1984) carefully set out the philosophical groundwork for how a rational person with sufficient levels of altruism would behave such that solutions to dilemmas become obvious – while presupposing utilitarianism. By sufficient levels of altruism, Parfit (1984) means that a person demonstrates impartial concern for others, that is, an equal concern for everyone, including oneself – as suggested by the third axiom of utilitarianism (Chappell et al., 2024). According to Parfit (1984), a rational altruist would avoid the following five mistakes in her moral mathematics: the share-of-the-total view, ignoring the effects of sets of acts, ignoring small chances, ignoring small or imperceptible effects, and adhering to the view that imperceptible effects cannot be morally significant. I will briefly describe each of these five mistakes in turn.
To illustrate the share-of-the-total view mistake, I present an illustrative scenario adapted from Parfit’s (1984; p.67) “First Rescue Mission”. Suppose that a sudden and severe storm has caused extensive flooding to a small island in the Mediterranean Sea. In one area of the island, 100 tourists are stranded on a piece of land that is rapidly being submerged. The only way to rescue these 100 people is by using a large boat that needs to be manually launched into the water with the help of a winch system. The winch requires a certain amount of weight to operate effectively, enough to safely launch the rescue boat. You and three volunteers are at the site, and by standing on a large platform attached to the winch, your combined weight can activate the winch mechanism that launches the boat to rescue the 100 stranded tourists. However, at another part of the island, a group of ten hikers is trapped in a valley that is also rapidly filling with water. You have the unique skills needed to navigate through the flooded terrain to reach and rescue these hikers. There is a fifth potential volunteer at the boat site who, if you decide to help the hikers, will replace you on the platform, ensuring enough weight to launch the boat and rescue the 100 tourists.
By the share-of-the-total view, if all five volunteers help the 100 tourists, then each would be credited with saving the equivalent of twenty lives. If you instead went to the valley and saved the ten hikers, you would save only ten lives, and ten is fewer than the twenty you could claim by staying at the boat site. This is a mistake because a person doing a good action should not simply consider the absolute value of the good she helped create, but rather what would happen if she did not do that act. Going with the other four volunteers means that the ten hikers needlessly die. The other four volunteers would save the 100 tourists without your assistance, and each would then be credited with the equivalent of 25 lives. By joining the rescue mission at the boat site, you reduce each of the other volunteers’ shares by five lives (a total of twenty), while claiming a share of twenty lives yourself; the two exactly cancel out, so your presence at the boat site adds nothing, and once the loss of the ten hikers is taken into account, the world is left worse off. This is similar to the idea behind opportunity costs. More formally, a rational altruist for Parfit (1984) “should act in the way whose consequence is that most lives are saved” (p. 69).
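Laying out the arithmetic with the numbers above makes the contrast explicit. The share-of-the-total view credits you with

$$\text{your share at the boat} = \tfrac{100}{5} = 20 \;>\; 10 = \text{lives saved in the valley},$$

whereas counting only what changes because of your choice gives

$$\underbrace{100 + 10 = 110}_{\text{go to the valley}} \;>\; \underbrace{100 + 0 = 100}_{\text{stay at the boat}},$$

since the boat is launched and the 100 tourists are rescued either way: your marginal contribution at the boat site is zero, while in the valley it is ten lives.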
The second mistake is to ignore the effects of sets of acts, that is, to assume that when an act is right or wrong because of its effects, the only relevant effects are those of that particular act. For example, suppose that two people together shoot a third person and kill him, and that either shot alone would have been fatal. One might argue that neither shooter’s act made a difference to the outcome, so neither acted wrongly. This is wrong because each act of shooting is part of a set of acts that together harm the third person. The third mistake is to treat effects that have only a very small chance of occurring as morally insignificant. This is incorrect because when the stakes are very high and a large number of people could be affected, chances should not be ignored, no matter how small they are (as long as they are not zero). The fourth mistake claims that if an act has effects on other people that are imperceptible, then the act cannot be wrong because of these effects, so these effects can be ignored. Similarly, the fifth mistake claims that if an act has imperceptible effects on other people, then it cannot be morally significant since no one will ever notice the difference. To solve tragedy-of-the-commons-type dilemmas (such as the Commuter’s Dilemma), Parfit (1984) prescribes a notion of rational altruism that avoids stumbling into the five mistakes. Concretely, a rational altruist would appeal both to the effects of what each person does and to the effects of what everyone does together. Parfit (1984) seems to suggest that we ought to evaluate everything by its consequences, paving the way for Ord’s (2009) interpretation of global consequentialism.
By employing utilitarianism as a lens, we can begin to understand and navigate the complex landscape of moral judgment and decision making. It allows for a structured analysis of moral issues, from everyday dilemmas to contentious societal debates like abortion. In particular, utilitarianism’s four axioms of consequentialism, welfarism, impartiality and aggregationism (Chappell et al., 2024) provide clear criteria for evaluating and predicting moral decisions. While some aspects of morality, as noted by Bloom (2013), may come naturally to us, a utilitarian model helps elucidate and navigate those that do not, offering a method to reconcile individual and collective interests in diverse and often conflicting moral contexts.
Utilitarianism and its psychological mechanisms
Having laid out the competing normative models for moral judgment and decision making, and established a justification for using utilitarianism as a normative framework, it is now appropriate to discuss how the predictions of utilitarianism are, or fail to be, realized in human judgment and decision-making.
Applied work in moral psychology draws on stylized moral dilemmas that are deliberately constructed to pit rule-based reasoning against cost-benefit reasoning (e.g., Maier et al., 2024). According to dual-process theories of morality, moral dilemmas can be explained as a conflict between intuitive-emotional responses (System 1) and deliberate-rational processes (System 2), where utilitarian judgments typically map onto the latter due to their reliance on rational calculations of welfare maximization, and deontological judgments on the former (Greene, 2014; Cushman, 2013). These frameworks posit that in moral dilemmas, decisions are often the product of a cognitive evaluation that prioritizes – or discounts – the greater good, even at the expense – or benefit – of individual rights and considerations.
Neuroimaging studies further support this view by highlighting the neural correlates of moral decision-making, which frequently overlap with those involved in non-social, rational decision-making processes (Greene et al., 2001). These findings suggest that utilitarian decision-making may involve an integration of affective and cognitive components, rather than a simple dichotomy between emotion and rationality. This integration is evident in scenarios where individuals must balance the emotional impact of their actions against a rational evaluation of outcomes that maximize the welfare of the greatest number (c.f. Levine et al., 2024). For instance, in their Subjective Utilitarian Theory (SUT), Cohen and Ahn (2016) extend this discussion by suggesting that moral judgments are fundamentally utilitarian but modulated by personal and subjective values, which are seen as preattentive (Cohen & Ahn, 2016). Their theory, supported by experimental data on pre-validated dilemma scenarios, argues that individuals assess moral choices based on the personal value they attribute to outcomes, thereby integrating subjective value within a utilitarian framework. Crucially, though, SUT challenges the dual-process model by treating emotion not as a separate, competing decision-making process, but as an integral input into a broader rational calculus. Indeed, Cohen and Ahn (2016) claim that support for the dual-process theory of moral judgments is indirect since “although the data described [...] show the relative influence of emotion and working memory in the decision process, they do not show that two separate decision processes are present” (p.1361). Rather than seeing deontological judgments as the intuitive response to moral dilemmas which people need to override, Cohen and Ahn (2016) treat moral judgments as inherently utilitarian (Baron, 2024) and subjective, in the sense that decision-makers evaluate the options in a dilemma according to the personal value they assign to each. Put differently, the option with the greatest personal value determines the choice. Other authors have reached similarly skeptical conclusions about Greene’s dual-process model. For instance, Kahane (2012; p. 542) argues that “Greater conceptual precision, coupled with a closer reading of the existing body of evidence, should lead us to doubt Greene’s model”. While Baron et al. (2012) are less conclusive in questioning the overall validity of Greene’s model, they admit that their results “find no asymmetry of the sort that might be predicted by the two-system model” (Baron et al., 2012; p. 115).
In stark contrast to the theoretical purity of philosophical consequentialism, psychological applications of utilitarianism often acknowledge that individuals might value the welfare of those close to them more highly than that of strangers (Bloom, 2013; Bloom, 2010). This discrepancy highlights a deviation from the impartiality axiom presupposed by utilitarianism. For instance, in an experimental social dilemma, the consequences that people were experimentally manipulated to attend to had an impact on the perceived need of a financially stable ingroup versus a distant and impoverished outgroup (Spiteri & Lieder, 2024). As Baron (2012) notes, parochialist tendencies can further drive people to sacrifice their interests for members of their ingroup at the expense of an outgroup, irrespective of the negative effects that the target outgroup may incur. The concept of the “moral circle” further enriches this discussion by suggesting that moral concern is not static but expands and contracts based on social and physical distance from the decision-maker (Marshall & Wilks, 2024). Formally, the moral circle can be defined as the set of all beings and entities whose well-being a decision-maker considers when making moral decisions; loosely equivalent to a circle of altruism (c.f. Singer, 2011). This concept illustrates how utilitarian predictions are dynamically influenced by relational factors, which may dictate the extent of one's moral obligations and the weighting of different individuals' welfare in moral judgments. For instance, while the impartiality axiom of utilitarianism predicts that all human beings are equally deserving of our moral consideration, blood relatives frequently take precedence (e.g., Lee & Holyoak, 2020).
According to Bloom (2013), some aspects of our morality are innate, but others are learned through development, experience, and human interaction (c.f., Cushman et al., 2017). For instance, younger children are more likely to regard helping others as an obligation, and less likely to see physical distance as a hindrance to this obligation, than older children and adults (Marshall & Wilks, 2024). Even infants as young as six months show a preference for those who help others over those who do not, indicating early-emerging moral tendencies (Bloom 2013; Bloom 2010). Moreover, they show disapproval of those who deviate from these tendencies (Bloom, 2010) in an attempt to uphold a fairness maxim (Baron, 2024). However, as they age, this impartiality weakens, and their moral weights are updated such that they become more sensitive to physical distance (Marshall & Wilks, 2024). One might wonder, then, whether this could be explained by a dual-system account of altruistic behavior that is automatically altruistic (System 1) and rationally self-interested (System 2). It has been suggested in the experimental economics and experimental psychology literature that a dual-system framework can explain the apparent conflict between altruism and self-interest (Fromell et al., 2020). However, in their meta-analysis of 22 studies across both fields, Fromell et al. (2020) find little support for this dual-process model of altruism. Crucially, Fromell et al. (2020) hint at a framing effect that could partly explain how and why people make their moral decisions altruistically or selfishly. If this is the case, then attention can be an important moderator of utilitarian judgments, as evidenced by Spiteri and Lieder (2024).
The resource-rationality framework by Lieder et al. (2024) can inform optimal attentional manipulations that could reliably steer people towards resource-rational utilitarian solutions. By considering the cognitive demands of different decision-making strategies, we can begin to better understand why people sometimes deviate from, or align with, the normative predictions of utilitarianism, even when they may have a strong moral inclination to pursue a specific outcome. As a result, resource-rationality can provide an answer to the question of how we can make the way in which people think about their moral judgments and decisions more adaptive to the environment they are in (c.f. Lieder et al., 2024, p. 138); and their ability to perceive the environment is inextricably connected to their attention (Proctor & Vu, 2023).
Impartiality and parochialism
Both utilitarianism (Baron, 2024; Chappell et al., 2024) and Parfit’s (1984) account of rational altruism treat parochialism as a violation of the axiom of impartiality. Empirical work shows that parochial tendencies and partial altruism are evident time and again across many social dilemmas, both in the lab and in the real world (Groß et al., 2024; Baron, 2024; Van Lange et al., 2013; Ellemers et al., 2019). Parochial altruism, the tendency to be altruistic towards an ingroup (c.f. Baron, 2012), is deeply rooted in evolution, particularly in the preference for kindness to kin (Bloom, 2013): parents who care for their offspring are more likely to pass on their genes than those who abandon or harm their children.
The literature on ineffective charitable giving has frequently identified parochialism as one of the key bottlenecks to maximizing social welfare per dollar donated (Caviola et al., 2021; Baron & Szymanska, 2011). Some charities are more effective at doing good than others (Caviola et al., 2020), yet many people motivated to do good still fall short of doing the most good they can do, thereby driving ineffective well-doing (c.f. Lieder et al., 2022; c.f. Parfit’s share-of-the-total view). For instance, annual donations to charity in the United States reach half a trillion dollars, yet despite this generosity, the world still faces a grim reality with nearly 700 million people living on less than $2.15 per day (World Bank, 2024). Such ineffective actions can lead to devastating outcomes that could be easily prevented. One way to reduce parochialism is to encourage people to “think of outsiders as individuals rather than as members of an abstract group” (Baron & Szymanska, 2011; p. 220).
The deviation from the impartiality axiom can be explained by the theory of decision by sampling (DbS; Stewart, Chater, & Brown, 2006). According to DbS, people evaluate attribute values (e.g., monetary amounts) by comparing them to a sample of other values drawn from both the immediate decision context and memories of previously encountered values. These comparisons rely on simple cognitive processes, shaping perception and decision-making. More mechanistically, DbS suggests that parochialism arises because people predominantly sample outcomes that are physically and socially proximate. That is, the cognitive system prioritizes information that is more immediately available and relevant, making it easier to attend to the outcomes affecting those who are close (c.f. Lieder et al., 2024; Lieder & Griffiths, 2020). Since repeatedly sampling and considering the outcomes of a decision for a wide range of people is cognitively costly (Lieder et al., 2024, p. 8-9), a resource-rational strategy is to focus on those who are readily available in our sample—typically ingroup members. Consequently, the sample of outcomes decision-makers rely on is systematically skewed toward people they are physically or socially closer to, reinforcing parochial altruism. This cognitive mechanism provides a plausible explanation for why people feel stronger moral obligations toward close relationships than toward more distant ones. However, this prediction is yet to be tested and validated.
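To make this mechanism more concrete, the sketch below implements a toy, rank-based evaluation in the spirit of DbS, in which the comparison sample over-represents outcomes for socially proximate others. The group labels, welfare values, and sampling probabilities are illustrative assumptions of mine, not quantities taken from Stewart et al. (2006) or the other work cited here.

```python
import random

# Toy Decision-by-Sampling-style evaluation: the subjective value of an
# outcome is its rank within a sample of previously encountered outcomes.
# All numbers below are illustrative assumptions.

def sample_outcomes(memory, p_proximate, n=1000):
    """Draw a comparison sample that over-represents outcomes for
    socially/physically proximate others."""
    sample = []
    for _ in range(n):
        group = "proximate" if random.random() < p_proximate else "distant"
        sample.append(random.choice(memory[group]))
    return sample

def rank_value(x, sample):
    """Rank-based subjective value: proportion of sampled outcomes beaten by x."""
    return sum(x > v for v in sample) / len(sample)

# Hypothetical welfare gains the decision-maker has previously encountered.
memory = {"proximate": [1, 2, 3, 5, 8],        # e.g., favours for family and friends
          "distant":   [10, 20, 30, 40, 200]}  # e.g., welfare gains from effective aid

random.seed(1)
for p_proximate in (0.9, 0.5):   # proximate-heavy vs. unbiased sampling
    s = sample_outcomes(memory, p_proximate)
    # Subjective value of a small local gain (5) vs. a tenfold larger distant gain (50).
    print(p_proximate, round(rank_value(5, s), 2), round(rank_value(50, s), 2))
# The more the sample over-represents proximate outcomes, the smaller the
# rank advantage of the much larger distant gain over the small local one.
```

The point is purely directional: a resource-limited sampler whose comparison set is dominated by proximate outcomes would, on this account, underweight how much better the best distant options are, which is one route to parochial giving.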
Individual differences in moral considerations
Individual differences in the extent of moral consideration for different entities are influenced by personality traits and their relevance across distinct situational affordances. A prominent meta-analysis by Thielmann, Spadaro and Balliet (2020) identified four affordances—exploitation, reciprocity, temporal conflict, and dependence under uncertainty—that shape the impact of personality traits on prosocial behavior. These affordances (i.e., properties in the situational environment that permit an individual to engage in prosocial behavior) elucidate how personality traits guide moral and prosocial tendencies in contexts where individual and collective outcomes intersect.
People vary in the degree of moral consideration they extend to different entities, influenced by factors such as developmental stage (Bloom, 2010; Bloom, 2013; Marshall & Wilks, 2024), stress (Crockett, 2013), social value orientation (Van Lange et al., 2013), and identity (McFarland et al., 2019; Grimalda et al., 2023; Pong & Tam, 2023). For example, younger children perceive an obligation to help others, showing less sensitivity to physical distance than adults (Marshall & Wilks, 2024). In contrast, adults prioritize obligations based on social proximity and group membership, judging individuals as more obligated to help those who share similar behaviors (Marshall & Wilks, 2024; Study 1) or belong to the same group (Marshall & Wilks, 2024, Study 2; Grimalda et al., 2023).
Agreeableness, as conceptualized in both the Five-Factor Model (FFM) and HEXACO (Honesty-humility, Emotionality, eXtraversion, Agreeableness, Conscientiousness and Openness to experience) frameworks, consistently predicts prosocial behavior due to its association with cooperative tendencies and tolerance for potential exploitation (Thielmann et al., 2020). This trait supports moral consideration by prioritizing collective well-being over individual self-interest (Tuen et al., 2023). Similarly, Honesty-Humility strongly predicts prosocial behavior, particularly in situations involving exploitation, as it reflects fairness and genuine cooperation, even in the absence of retaliatory threats (Thielmann et al., 2020). Empathy, a narrower trait, is also positively associated with moral consideration by fostering emotional concern for others, especially in contexts involving reciprocity or addressing exploitation. Conscientiousness and self-control emerge as predictors of moral behavior in situations requiring the resolution of temporal conflicts, such as balancing immediate personal gains against long-term collective benefits.
Social value orientation (SVO) is an additional factor that distinguishes individuals in terms of their consideration for different entities (Van Lange et al., 2013). Prosocially oriented individuals aim to maximize equality and collective prosperity, often emphasizing solidarity and egalitarianism in moral and political contexts. In stark contrast, individuals with individualistic or competitive orientations tend to focus less on collective welfare, prioritizing self-interest and relative advantage, respectively. These orientations influence how people approach social dilemmas (Van Lange et al., 2013; Baron, 2024), and how they allocate moral weight across entities. Marshall & Wilks (2024) highlight that ethnic and racial backgrounds of both participants and stimuli (in eliciting moral circle boundaries) are likely significant contributors to perceived obligations to different entities (see also Bobba et al., 2024). Traits that emphasize self-interest, like Machiavellianism, narcissism, and psychopathy, negatively correlate with prosocial tendencies (Thielmann et al., 2020; Grimalda et al., 2023). Conversely, individuals with an expansive sense of belonging, such as identification with humanity or feelings of world citizenship, show stronger prosocial tendencies (Grimalda et al., 2023) and tend to be more satisfied with their lives (Spiteri, Kim, & Lieder, 2024). Other traits, such as guilt-proneness and integrity, further highlight moral responsibility and an inclination to correct personal wrongdoing, promoting consistent prosocial behavior (Thielmann et al., 2020).
Situational differences can also contribute to individual differences across moral judgments and decisions. For instance, according to Crockett (2013), stress contributes to shifts from model-based to model-free systems, suggesting that stress could promote deontological judgments. Relatedly, Baron et al. (2012) highlight individual differences in response times to moral dilemmas: dilemmas designed to invoke System 1 emotional responses were not necessarily answered quickly, and dilemmas designed to invoke System 2 deliberation were not necessarily answered slowly. In other words, some individuals struggle to decide quickly even on dilemmas perceived as “easy”, where most people choose the utilitarian response; others face difficulty and delay in “hard” dilemmas, where the most common response is not clearly utilitarian or deontological. Moreover, in line with other findings on framing effects (e.g., Fromell et al., 2020), Baron et al. (2012) suggest that individual and group differences in response times and moral judgments may be contingent on the selection of dilemmas and on the particular sample of subjects recruited.
Summary and next directions
This section started to address key questions about normative models in moral judgments, psychological processes, individual differences, and the dynamics between impartiality and parochialism. The findings of the reviewed work reveal that moral judgment is shaped by a complex interplay of evolutionary predispositions, cultural norms, cognitive systems, and individual differences. Normative theories such as utilitarianism, deontology and contractualism serve as useful frameworks for predicting and evaluating moral decisions. While utilitarianism emphasizes welfare maximization, deontology focuses on rule-based judgments, and contractualism embraces the idea that moral norms result from mutual agreement. Yet, as highlighted in this section, these normative approaches do not always align with how individuals actually make moral decisions. Empirical studies demonstrate that moral cognition involves nuanced interactions between intuitive and deliberative processes, often influenced by context and learned reinforcement histories.
The current state of research suggests a systematic deviation from the impartiality axiom of utilitarianism. This deviation can be formalized through the contrast between impartially altruistic and parochial decisions, shedding light on the evolutionary and cognitive underpinnings of moral behavior. Parochial altruism, deeply rooted in evolutionary advantages for kin and group survival, prioritizes proximity and shared identity, whereas impartial altruism seeks fairness and utility across all entities. These tendencies are further shaped by attentional mechanisms, which influence how people navigate complex moral dilemmas. Individual differences, such as developmental stage, stress levels, and social value orientation, add another layer of complexity. For instance, younger children demonstrate more impartial moral considerations, while adults show greater sensitivity to physical and social distance (Marshall & Wilks, 2024). Stress can shift decision-making from model-based to model-free processes (e.g., Crockett, 2013), promoting deontological judgments. Social value orientation further distinguishes individuals, with prosocials prioritizing equality and collective welfare, while individualists and competitors focus on self-interest.
3) Learning strategies for moral judgment and decision-making
A key question which emerges from the foregoing discussion of utilitarianism as a normative model of moral judgment and decision-making, and of potential deviations from its predicted outcomes, is: how do people learn and acquire their morality? This section discusses the ways in which morality is shaped by learning: how people learn cognitive strategies for moral decision-making, how they learn when to rely on which decision strategy, and how they learn the moral weights that drive moral circle expansion and/or contraction. Similar to Crockett (2013) and Cushman (2013), I believe that the reinforcement learning framework is very useful for studying moral learning.
Moral learning
A concrete theory of moral learning “naturally suggests mechanisms both for innovation in moral thought and for practical ways of bringing about moral changes” (Cushman et al., 2017; p.2). A cursory look at human history will immediately identify a number of key societal changes such as universal suffrage, banning slavery, and legalizing same-sex marriage, that could be attributed to innovations in moral thought (c.f., Pinker, 2011; Sauer et al., 2021). In what follows, I first describe a moral exemplar who brought about innovation in moral thought that eventually led to women’s suffrage. Then, I briefly describe a theoretical account of moral learning, after which I outline the application of reinforcement learning in morality.
Predating universal suffrage was an active feminist movement that drew its intellectual roots from John Stuart Mill’s influential text, The Subjection of Women (Meany, 2021). Indeed, Mill was the first British Parliamentarian to propose granting women the right to vote, only to be met with ridicule and mockery at the time. Such was the extent of this denigration that Vanity Fair chose to publish a cartoon depicting him as a “Feminine Philosopher” (Spy, 1873; Meany, 2021). Despite initially being met with resistance, Mill is frequently associated with the dawn of the feminist movement that eventually inspired other changemakers to advocate for women’s rights (Meany, 2021). On the specific issue of women’s rights, Mill can be seen as a moral exemplar (c.f., Reynante et al., 2024), whose efforts only came to fruition forty years after his passing (Meany, 2021). It is plausible that Mill himself engaged in moral learning that drew him to the conclusion that women were as deserving of the vote as men, despite the considerable non-material costs that a man would have to incur at a time when society was deeply patriarchal. According to Reynante and colleagues (2024), character education and moral interventions can prompt individuals to engage with moral exemplars, reflect on virtues, and internalize prosocial norms. Empathy-building programs and norm-nudging approaches demonstrate the power of experiential learning in fostering moral growth, enabling individuals to adapt their moral strategies to new and complex dilemmas (c.f., Pinker, 2011; Reynante et al., 2024).
One theoretical account of moral learning posits that individuals gradually construct and refine the frameworks that guide their moral judgments and decisions (e.g., Rhodes & Wellman, 2017). This constructivist perspective highlights the active role individuals play in shaping their moral schemas through the integration of new evidence and experiences. The iterative nature of this process allows individuals to develop increasingly nuanced strategies for evaluating morally salient scenarios, enabling them to navigate complex moral dilemmas effectively. A separate influential account depicts moral cognition as a process of development, analogous to cognitive development, by which individuals progress from rule-based reasoning to universal ethical principles, driven by a growing ability to handle complexity in moral thought and an increasing capacity for empathy (Kohlberg & Hersh, 1977).
Recent empirical research underscores the significance of reinforcement learning mechanisms in shaping how people learn to make moral judgments (Maier et al., 2024). Stimulus-reinforcement and response-outcome learning processes are central to forming the valence-based valuations that underpin moral cognition (Blair, 2017). These mechanisms facilitate associations between actions and their consequences, informing the moral significance of behaviors through experiences of reward and punishment (c.f. Sutton & Barto, 2018; Chapter 1). For instance, stimulus-reinforcement learning links specific actions to emotional responses, such as empathy or guilt, fostering a deeper understanding of the moral significance of one's actions (Blair, 2017). Indeed, experimental evidence from studies on both animals and humans demonstrates that dopamine manipulation affects both learning and choice behaviors, confirming dopamine’s causal role in learning action values to maximize reward (Daw & Tobler, 2014). Furthermore, moral learning is influenced by a combination of rational and emotional factors, often constrained by competing forces (Graham et al., 2017). These forces can simultaneously influence the moral circle; either by encouraging parochial concern for close others (centripetal forces) or pushing for broader inclusivity within the moral circle (centrifugal forces), shaping individual moral trajectories (Graham et al., 2017). Such dynamics illustrate the complexity of moral learning as individuals negotiate between immediate social attachments and broader, egalitarian principles.
Moral (reinforcement) learning and the moral circle
The moral circle represents the boundaries of moral concern, encompassing those deemed worthy of moral consideration (Crimston et al., 2018). Put differently, a person’s moral circle is the set of all beings whose wellbeing the person considers when making a moral decision. The capacity to learn moral weights—the relative significance ascribed to entities within and beyond one's immediate circle—has profound implications for the moral circle. Rhodes and Wellman (2017) argue that group membership plays a crucial role in the structure of our moral judgments (Cushman et al., 2017). Indeed, the way we think about groups of people is directly connected to how much we care about the wellbeing of others (Graham et al., 2017; Rhodes & Wellman, 2017). In other words, our concern for others’ wellbeing determines the moral weight we assign to them within our moral circle.
Reinforcement learning potentially plays a key role in determining the moral weights assigned to the various entities within the circle. Upon making a moral decision, decision makers observe the outcomes of their decisions for a subset of those affected by it (e.g., Maier et al., 2024; Spiteri & Lieder, 2024). Decision-makers can then make a judgment of the outcome, and learn from their judgment (c.f. Crockett, 2013). Through iterative experiences of moral success and failure, individuals can recalibrate their weights, aligning their actions with broader moral principles. For example, observational learning in game-theoretic social interactions has been shown to encourage altruistic behavior, highlighting the potential for learned strategies to promote moral inclusivity (Seymour, 2009).
The expansion of the moral circle is a nonlinear process, shaped by competing forces that pull individuals in opposite directions. On one hand, centripetal forces (pulling towards the center of the circle) emphasize loyalty and concern for close social groups, such as family and friends, while centrifugal forces (pushing outwards from the center of the circle) push for broader inclusivity, advocating for impartial moral concern that encompasses distant others (Graham et al., 2017). For instance, interventions that cultivate empathy and perspective-taking can facilitate the expansion of moral concern from narrow ingroups to distant others (Reed & Aquino, 2003), thereby creating centrifugal forces. These centripetal and centrifugal forces create intrapersonal tensions, as individuals grapple with conflicting intuitions about how to prioritize the needs of proximate versus distant entities. Navigating this tension requires learning strategies that balance parochial concerns with the demands of moral inclusivity (c.f. Grimalda et al., 2023), facilitating moral progress over time.
Reinforcement learning provides a powerful framework for understanding how individuals acquire and refine strategies to navigate these competing forces (Sutton & Barto, 2018, p. 1-25). Model-free learning, driven by associations between actions and historical rewards or punishments, explains intuitive, rule-based judgments that often favor local social groups (Ayars, 2016). In contrast, model-based learning, which involves constructing causal models of the world, supports deliberate, far-sighted decision-making that aligns more closely with utilitarian principles and seeks to maximize collective welfare (Blair, 2017). Importantly, reinforcement learning extends beyond individual experiences, incorporating insights gained through observational learning and social modeling. By observing the actions and consequences experienced by others, individuals can broaden their moral insights and accelerate the development of effective decision-making strategies (Seymour, 2009). Attentional mechanisms can also play a critical role in this learning process. For instance, experimental research by Spiteri and Lieder (2024) demonstrates that attentional highlighting can effectively guide individuals in updating the moral weights assigned to entities, such as distant and impoverished children. This suggests that attention can serve as a key input within a reinforcement learning model of morality, influencing how individuals allocate moral weights and recalibrate their moral circles.
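As a concrete, if highly simplified, illustration of this last point, the sketch below implements a delta-rule (model-free) update of group-level moral weights in which attention gates the learning signal. The groups, outcome values, attention probabilities, and learning rate are my own illustrative assumptions, not a model reported in Spiteri and Lieder (2024) or the other work cited above.

```python
# A minimal sketch of one way attention could gate a model-free (delta-rule)
# update of the moral weights assigned to different groups. All values below
# are illustrative assumptions.

def update_moral_weights(weights, outcomes, attention, alpha=0.1):
    """One learning step: weights move toward observed outcomes, but only
    in proportion to how much the outcome for each group is attended to."""
    new_weights = dict(weights)
    for group, outcome in outcomes.items():
        prediction_error = outcome - weights[group]
        new_weights[group] += alpha * attention[group] * prediction_error
    return new_weights

initial_weights = {"ingroup": 1.0, "distant_poor": 0.1}

# Suppose the decision-maker's choices in fact produce much larger welfare
# gains for distant, impoverished others than for the ingroup.
outcomes = {"ingroup": 1.0, "distant_poor": 5.0}

attention_conditions = {
    "ingroup-biased attention": {"ingroup": 0.9, "distant_poor": 0.05},
    "highlighted distant consequences": {"ingroup": 0.5, "distant_poor": 0.9},
}

for label, attention in attention_conditions.items():
    w = dict(initial_weights)
    for _ in range(50):                  # repeated decisions with feedback
        w = update_moral_weights(w, outcomes, attention)
    print(label, {g: round(v, 2) for g, v in w.items()})
# Under ingroup-biased attention, the weight on distant others rises only part
# of the way toward the welfare actually produced; when attention is highlighted
# toward distant consequences, it converges much closer to that value.
```

On this toy account, attentional highlighting matters not because it changes the outcomes themselves, but because it determines which prediction errors ever get the chance to revise the weights.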
The iterative nature of reinforcement learning allows individuals to refine how they make their moral decisions based on accumulated reinforcement histories. These histories inform the boundaries of moral concern by integrating characteristics of the entities involved, the personal attributes of the decision-maker, and the interactions between the two (Jaeger & Wilks, 2023), all of which serve as inputs into the function of the individual’s assigned moral weights (c.f. Crimston et al., 2018). As individuals navigate the complex interplay of evolutionary predispositions (Buchanan, 2020), cultural influences, and cognitive processes, they develop adaptive moral frameworks that balance local allegiances with broader ethical commitments. This adaptability underscores the potential for reinforcement learning to align moral cognition with utilitarian ideals, advancing inclusivity and promoting moral progress.
Synthesis
Given the interplay between learning mechanisms and moral cognition, a number of learning strategies emerge as potentially pushing towards either expansion or contraction of individuals’ circle of moral concern. Interventions targeting empathy (Reynante et al., 2024; c.f. Pinker, 2011), perspective-taking (Reed & Aquino, 2003; Kohlberg & Hersh, 1977), and moral identity (e.g., Reed & Aquino, 2003) not only enhance moral expansiveness but also align individual decision-making with the utilitarian axioms of consequentialism, welfarism, impartiality, and aggregationism (Crimston et al., 2016). These strategies work by fostering inclusivity and encouraging individuals to prioritize broader well-being, particularly through structured environments that provide iterative feedback on outcomes of moral decisions. For example, norm-based interventions can nudge individuals toward prosocial behavior, gradually recalibrating moral weights to favor fairness and inclusivity (Reynante et al., 2024).
Attention can also play a role in moral learning. As demonstrated by Spiteri and Lieder (2024), attentional highlighting can guide individuals in updating the moral weights assigned to distant and impoverished entities, emphasizing the critical role that focused attention could play in reinforcement learning models of morality. Moreover, guided reflection and engagement with moral exemplars reinforce the salience of altruistic principles, encouraging individuals to expand their moral circle and adopt strategies that prioritize long-term benefits (Reynante et al., 2024; Watson & Watson, 2019). Observational learning and social modeling further enhance these processes, allowing individuals to integrate insights from others’ actions and consequences into their own moral frameworks. The integration of reinforcement learning principles with normative ethical frameworks provides a compelling pathway for cultivating moral progress. By aligning moral cognition with utilitarian ideals, these strategies not only promote moral circle expansion but also enable individuals to balance parochial concerns with broader ethical commitments, advancing inclusivity and collective well-being over time (Crockett, 2013; Sauer et al., 2021).
4) The role of attention in moral psychology
This section ties together the foregoing sections with the topic of attention. As in other domains of judgment and decision-making, moral judgments are the result of information processing. The ways in which attention is allocated during this processing can introduce biases that influence outcomes, often deviating from normative models of decision-making, such as, but not limited to, utilitarianism in the case of morality. This section explores the kinds of attentional biases that influence moral judgments and decisions and their interaction with the mechanisms behind moral learning.
Attentional biases in moral judgment
Attention serves as a selective filter, determining which information receives cognitive processing and which is ignored (Fiedler & Glöckner, 2015). The process of selecting which information to attend to and which to ignore can be habitual and predictable (Jiang & Sisk, 2019; Lieder & Griffiths, 2020, Section 4.1). This predictability is frequently viewed as an attentional bias (c.f. Jiang & Sisk, 2019). In the context of moral decision-making, an attentional bias can significantly shape judgments by directing focus toward particular aspects of a moral (or social) dilemma (Fiedler & Glöckner, 2015).
Attentional biases can be driven by individual differences and motivations. For instance, according to Mennen, Norman and Turk-Browne (2019; p. 267) “depressed individuals show increased internal attention to negative representations of the past” which, unless treated, can persist over a long timespan and manifest in a negative attentional bias. Attention can also be selectively allocated to self-serving or prosocial outcomes in strategic games (Fiedler & Glöckner, 2015) and realistic social dilemmas (Spiteri & Lieder, 2024). For instance, Fiedler and Glöckner (2015) claim that people tend to allocate attention in a self-serving manner, often guided by pre-existing motives and goals. Self-maximizing individuals prioritize information about their own outcomes during decision-making, while prosocial individuals direct their attention toward others’ outcomes and the consequences of their decisions (Fiedler & Glöckner, 2015; c.f. Spiteri & Lieder, 2024). Moreover, Decety et al. (2012) demonstrated that individuals confronted with moral violations (such as harm caused to others) directed more attention toward the victim rather than the perpetrator. This empathic focus highlights the importance of attentional allocation in amplifying the emotional responses that contribute to moral judgments. In other similar scenarios involving moral trade-offs, eye-tracking studies showed that people spend more time fixating on the potential consequences of their decisions for others, which can reflect their underlying moral preferences (Fiedler & Glöckner, 2015). This selective attention reflects an amplification effect, where the weight assigned to specific options is heightened by attentional focus (Krajbich, 2019).
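One way to picture this amplification effect is through the attentional drift-diffusion framework associated with Krajbich and colleagues, in which evidence for the currently fixated option accumulates faster than evidence for the unattended one. The sketch below is a heavily simplified simulation of that idea; the parameter values and the random fixation-switching rule are illustrative assumptions on my part, not the fitted model.

```python
import random

def addm_choice(value_a, value_b, theta=0.3, drift=0.002, noise=0.02,
                threshold=1.0, p_look_a=0.5):
    """Simulate one trial of a simplified attentional drift-diffusion process.
    While an item is fixated, evidence drifts toward it in proportion to its value,
    with the unattended item's value discounted by theta (cf. Krajbich et al., 2010)."""
    evidence = 0.0
    looking_at_a = random.random() < p_look_a
    while abs(evidence) < threshold:
        if looking_at_a:
            evidence += drift * (value_a - theta * value_b)
        else:
            evidence -= drift * (value_b - theta * value_a)
        evidence += random.gauss(0, noise)
        if random.random() < 0.01:                       # fixation occasionally switches;
            looking_at_a = random.random() < p_look_a    # gaze is biased toward item A
    return "A" if evidence > 0 else "B"

# Two equally valued options: the one that receives more gaze wins more often.
choices = [addm_choice(5, 5, p_look_a=0.8) for _ in range(1000)]
print(choices.count("A") / len(choices))  # well above 0.5, reflecting attentional amplification
```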
Biases in attention can also be driven by decision task difficulty and complexity. Esterman and Rothlein (2019) discuss a set of neurocognitive models that treat sustained attention as a critical cognitive ability that counteracts the brain’s natural tendency towards mind wandering. In particular, their attentional allocation model shows that intentional and unintentional mind wandering are differentially influenced by task difficulty – “intentional mind wandering decreases with task difficulty, while unintentional mind wandering shows the opposite pattern” (Esterman & Rothlein, 2019; p. 176). More recently, Callaway et al. (2021) refined these observations in terms of eye fixations. Specifically, more difficult decisions, indexed by how close together the value estimates of the highest- and lowest-rated items in a choice set are, correspond with longer fixation times (refer to Fig. 3C in Callaway et al., 2021). Similarly, Fiedler and Glöckner (2015) highlighted that longer fixation times typically correspond with tasks requiring higher levels of cognitive processing, such as deliberate calculations, whereas shorter fixation times typically correspond with lower levels of processing.
Interestingly, Teoh et al. (2020) found that time pressure decreased their participants’ overall generosity, but that the size of this effect varied with participants’ underlying social preferences. Specifically, their findings illustrated that variations in time pressure amplify specific attentional biases aligned with individual social preferences: selfish individuals become less generous under time pressure because they prioritize self-focused information, while prosocial individuals exhibit reduced generosity when time pressure limits their ability to fully process other-focused information (Teoh et al., 2020; p. 3). Eye-tracking data further revealed that time pressure altered attentional allocation patterns. Participants exhibited early gaze biases toward their own outcomes, but these biases were modulated by individual differences. Selfish participants maintained strong self-focused attention throughout the task, while prosocial participants shifted their attention toward others as the trial progressed. The strength of early gaze toward self-focused information predicted lower generosity, highlighting the role of attentional dynamics in altruistic choice. Building on this work, Teoh and Hutcherson (2022) found evidence that social context drives divergent effects of time pressure on prosocial behavior by altering informational priorities. For instance, dictator games incentivize selfish behavior under time constraints, while the ultimatum game prompts greater attention to others’ outcomes.
The interaction of attention and moral learning
Attention and learning processes are deeply intertwined in shaping moral judgments. Attention is not static; it is shaped by learning processes that guide how individuals allocate their cognitive resources (Lieder et al., 2018; Jiang & Sisk, 2019). Reinforcement learning, in particular, offers a compelling framework for understanding how attention evolves in response to past experiences (c.f. Learned Value of Control model in Lieder et al., 2018). For example, people learn to focus on features or locations associated with rewards (Lieder et al., 2018; Becker et al., 2023; Teoh et al., 2020; Daw & Tobler, 2014), even when these features are no longer task-relevant (see Lieder & Griffiths, 2020, Section 4.1). This value-driven attention mechanism demonstrates how reinforcement histories can create habits of attentional allocation that persist over time.
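A toy example may help convey how such value-driven attentional habits could be learned. The sketch below is only loosely inspired by this idea (it is not the Learned Value of Control model itself): an agent learns, by trial and error, which feature of a dilemma is worth attending to, and the reward structure shown is an arbitrary assumption.

```python
import math
import random

# Hypothetical sketch: an agent learns which feature of moral dilemmas is worth
# attending to, through reinforcement of its past attention choices.
features = ["own_payoff", "others_welfare"]
attention_value = {f: 0.0 for f in features}  # learned value of attending to each feature
alpha, temperature = 0.1, 0.5

def choose_feature():
    """Softmax choice over the learned values of attending to each feature."""
    weights = [math.exp(attention_value[f] / temperature) for f in features]
    return random.choices(features, weights=weights)[0]

for _ in range(200):
    attended = choose_feature()
    # Assume the environment rewards attending to others' welfare slightly more.
    reward = 1.0 if attended == "others_welfare" else 0.6
    attention_value[attended] += alpha * (reward - attention_value[attended])

print({f: round(v, 2) for f, v in attention_value.items()})  # habit of attending to others' welfare emerges
```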
The allocation of attention influences which outcomes individuals learn from and what lessons they derive from their experiences. For instance, Parr and Friston (2019) propose that attentional processes optimize the weighting of sensory data to prioritize information that is most informative about the causes of observed outcomes. In moral contexts, this means that individuals may focus on aspects of a situation that align with their prior beliefs or expectations, reinforcing pre-existing moral schemas. Spiteri and Lieder (2024) highlight the dynamic interaction between attention and learning in moral decision-making. In their study, attentional highlighting was used to direct participants’ focus toward distant and impoverished children, which significantly increased the moral weight participants assigned to them. Similarly, attentional highlighting that directed participants’ focus toward their ingroup significantly increased the assigned moral weight to members of their ingroup. This suggests that attention could serve as a key input in reinforcement learning models of morality, helping individuals recalibrate their moral weights based on the outcomes and consequences they observe. Additionally, the iterative nature of reinforcement learning allows individuals to refine their attentional strategies, prioritizing stimuli that have consistently yielded favorable outcomes while de-emphasizing those that have not. Jiang and Sisk (2019) describe how selection history—the cumulative record of which stimuli have been attended to in the past—creates biases that influence future attentional allocation.
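To illustrate how attention might enter a reinforcement-learning model of moral weights, the following hypothetical sketch scales the size of each weight update by the attention paid to that entity’s outcomes. The gating rule and all numbers are assumptions made for illustration; this is not the design or analysis used by Spiteri and Lieder (2024).

```python
# Hypothetical sketch: the degree to which an entity's moral weight is updated
# scales with how much attention its outcomes received.
def attention_gated_update(weights, attention, observed_outcomes, base_lr=0.2):
    """Update each entity's moral weight in proportion to the attention paid to it."""
    for entity, outcome in observed_outcomes.items():
        effective_lr = base_lr * attention.get(entity, 0.0)
        weights[entity] += effective_lr * (outcome - weights[entity])
    return weights

weights = {"ingroup_member": 0.8, "distant_child": 0.2}
outcomes = {"ingroup_member": 0.9, "distant_child": 0.9}  # both benefit equally from a decision

# Without highlighting, attention is mostly on the ingroup member.
print(attention_gated_update(dict(weights), {"ingroup_member": 0.9, "distant_child": 0.1}, outcomes))
# With attentional highlighting of the distant child, the same outcome moves that weight much more.
print(attention_gated_update(dict(weights), {"ingroup_member": 0.2, "distant_child": 0.9}, outcomes))
```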
Indeed, attentional biases can create blind spots in moral learning. When individuals systematically overlook certain aspects of a moral dilemma (Groß et al., 2024), such as the needs of out-group members, they are less likely to update their moral weights in a way that promotes inclusivity (Fiedler & Glöckner, 2015). Eye-tracking studies have provided valuable insights into these processes by revealing how attention fluctuates during moral decision-making. Of note, Teoh et al. (2020) showed a natural tendency for people to gaze towards their own outcomes when under time pressure, manifested in an “early-gaze bias”. However, they also noted that underlying differences in people’s social preferences influence this attentional bias. To this end, Pärnamets et al. (2015) demonstrated that gaze direction not only reflects developing preferences but also causally influences moral choices. By manipulating the timing of decision prompts, Pärnamets et al. (2015) were able to bias participants toward certain choices based on where their attention was directed. Their findings underscore the reciprocal relationship between attention and moral learning, where attention shapes the lessons individuals derive from their experiences and, in turn, is influenced by what they have learned.
Attention is a limited cognitive resource that is strategically deployed to information deemed important by the individual (Lieder & Griffiths, 2020). To this end, Lieder and Griffiths (2020) view attention allocation as a decision problem, where the cognitive system weighs the costs and benefits of focusing on specific aspects of a situation. However, the allocation of attention can be biased and habitual (Jiang & Sisk, 2019). Attentional biases can lead to overemphasis on certain features, such as immediate rewards, at the expense of long-term considerations (Jiang & Sisk, 2019). These biases are reinforced by habitual patterns of attention that develop over time, typically triggered by reward-associated stimuli that have gained attentional priority through their reward histories (Anderson, 2016).
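Framing attention allocation as a cost-benefit decision can itself be sketched computationally. The toy rule below decides whether another “look” at the options is worthwhile by comparing an assumed benefit of resolving a close call against a fixed attentional cost; it is a caricature of the resource-rational idea under made-up numbers, not an implementation of any published model.

```python
import statistics

# Illustrative sketch of attention as a cost-benefit decision: take another look at an
# option only if the expected gain in decision quality exceeds the cost of attending.
def worth_another_look(samples_a, samples_b, cost_per_look=0.05):
    """Crude myopic rule: a further look is worthwhile while the options' estimated
    values are close relative to our uncertainty about them."""
    mean_a, mean_b = statistics.mean(samples_a), statistics.mean(samples_b)
    sd = statistics.pstdev(samples_a + samples_b) or 1.0
    closeness = 1.0 - min(abs(mean_a - mean_b) / (2 * sd), 1.0)  # 1 = hard call, 0 = easy call
    expected_benefit = 0.2 * closeness  # assumed value of resolving a close call
    return expected_benefit > cost_per_look

print(worth_another_look([3.1, 2.9, 3.0], [3.0, 3.2, 2.8]))  # close call -> True
print(worth_another_look([5.0, 5.1, 4.9], [1.0, 1.2, 0.9]))  # easy call -> False
```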
Synthesis
Attention plays a central role in shaping moral judgments by acting as a selective filter that determines which information is processed and prioritized during decision-making. Attentional biases, often driven by pre-existing motivations and individual differences, influence moral outcomes by amplifying the salience of certain aspects of moral dilemmas while neglecting others (c.f. Pleskac et al., 2023). For instance, attention directed toward victims of harm over perpetrators highlights empathy-driven moral judgments, while self-focused attention under time pressure reduces generosity (Fiedler & Glöckner, 2015; Teoh et al., 2020). These biases are not static but evolve through learning processes, where reinforcement histories and attentional highlighting shape how cognitive resources are allocated. However, attention can also entrench blind spots in moral learning, as habitual patterns of focusing on self-relevant information or ingroup members often limit moral concern for outgroups. Eye-tracking studies reveal that attention not only reflects underlying moral preferences but also causally influences choices, with directed gaze biases steering decision-making in predictable ways (Pärnamets et al., 2015).
Crucially, the role of attention within the growing field of moral learning is understudied, and there appears to be significant ground for insightful work. By understanding attention as a limited yet dynamically allocated resource, future work can highlight its dual role in amplifying existing biases and serving as a lever for fostering moral growth through interventions that strategically guide attentional focus. This interplay marks attention as both a constraint on and a tool for advancing moral cognition: understanding it can inform strategies to mitigate biases and foster moral progress, while also clarifying how these same biases can contribute to moral regress.
5) Open questions
This section provides a synthesis of the reviewed work on moral learning via philosophical and psychological theories of moral decision-making, its mechanisms, and the role of attention in selecting between competing internal and external stimuli when making decisions that correspond with one’s learned morality. I then outline a number of open questions for future work, with particular attention to moral circle expansion and/or contraction.
Moral learning and decision-making mechanisms
Moral decision-making is influenced by a combination of cognitive, affective, and social learning processes. While normative theories such as utilitarianism, deontology, and contractualism offer structured frameworks for evaluating moral choices, empirical evidence suggests that moral judgments often deviate from these theoretical models due to the influence of cognitive biases, reinforcement histories, and attentional mechanisms (Baron, 2024; Cushman et al., 2017). Individuals learn how to make their moral decisions through a range of learning mechanisms, including direct reinforcement learning, observational learning, and cognitive reflection. These learning mechanisms help shape the moral weights individuals assign to different entities, influencing the scope of their moral circle (Spiteri & Lieder, 2024; Graham et al., 2017).
Reinforcement learning, in particular, provides a compelling model for understanding how individuals acquire moral intuitions, moral values, and intuitive moral rules. Model-free reinforcement learning explains intuitive, habitual responses to moral scenarios, where actions are guided by past rewards and punishments rather than explicit reasoning (Sutton & Barto, 2018; c.f. Crockett, 2013). In contrast, model-based learning allows individuals to simulate potential consequences of moral choices, integrating causal reasoning into their decision-making processes (Blair, 2017; c.f. Crockett, 2013). Importantly, these learning mechanisms interact with attentional biases, as individuals selectively focus on the outcomes of previous decisions and actions in different moral dilemmas (e.g., Maier et al., 2024; Spiteri & Lieder, 2024). This selective attention can amplify existing moral biases, reinforcing parochial moral concern while limiting moral inclusion for distant others (Marshall & Wilks, 2024).
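The contrast between the two learning systems can be illustrated with a small, hypothetical example: a cached (model-free) action value learned from past outcomes versus a (model-based) evaluation that explicitly simulates who would be affected by each action. All values and the outcome model below are invented for illustration only.

```python
# Hypothetical contrast between model-free and model-based evaluation of the
# same moral choice (e.g., "lie" vs "tell the truth").

# Model-free: a cached action value updated from past rewards and punishments,
# with no representation of how the action produced its outcomes.
q = {"lie": 0.0, "truth": 0.0}
alpha = 0.1
history = [("lie", -1.0), ("truth", 0.5), ("lie", -0.8), ("truth", 0.6)]
for action, outcome in history:
    q[action] += alpha * (outcome - q[action])

# Model-based: simulate the consequences of each action using an explicit
# (assumed) model of who is affected, by how much, and with what probability.
outcome_model = {
    "lie":   [("other_person", -0.9, 0.8), ("self", +0.3, 1.0)],  # (who, effect, probability)
    "truth": [("other_person", +0.4, 0.9), ("self", -0.1, 1.0)],
}
def model_based_value(action):
    return sum(effect * prob for _, effect, prob in outcome_model[action])

print("model-free:", {a: round(v, 2) for a, v in q.items()})
print("model-based:", {a: round(model_based_value(a), 2) for a in outcome_model})
```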
Selecting between competing internal and external stimuli in moral learning
As outlined in the previous section, attention acts as a selective filter that determines which information is processed and prioritized during moral decision-making. Attentional biases, often driven by pre-existing motivations and individual differences, are not static but evolve through learning processes in which reinforcement histories and attentional highlighting shape how cognitive resources are allocated (e.g., Spiteri & Lieder, 2024). At the same time, attention can entrench blind spots in moral learning, as habitual patterns of focusing on self-relevant information or ingroup members often limit moral concern for outgroups, and directed gaze can causally steer moral choices in predictable ways (Pärnamets et al., 2015). This suggests that attention serves as a crucial moderator of moral circle expansion and contraction, shaping how individuals weigh different moral considerations over time.
Open questions for future work
Despite significant advances in understanding moral learning, several critical questions remain unanswered. One pressing issue concerns the stability of the moral circle: Are shifts in the moral circle stable across time, and do they fluctuate based on situational factors? While some research suggests that moral inclusivity can be learned and reinforced through repeated exposure to diverse perspectives (Rhodes & Wellman, 2017), other studies indicate that moral circle expansion is highly context-dependent and could be susceptible to regression under stress or threat (Crockett, 2013; Buchanan, 2020).
Another key question involves the role of moral learning from consequences and social learning in moral circle expansion. To what extent do individuals learn to extend their moral concern by observing moral exemplars or engaging with moral narratives? Moreover, people engage in considerable social learning from peers, who most of the time are not extraordinary changemakers or exemplars. In this sense, the moral lessons they learn through social learning can be ordinary and not profound (c.f. Railton, 2017). Research on norm-nudging and empathy interventions suggests that exposure to prosocial role models can promote broader moral concern (Reynante et al., 2024). However, the mechanisms underlying these effects remain poorly understood. It is possible that attention shapes moral learning by acting as a selective filter determining which consequences are processed and prioritized during information processing. At the same time, it is also possible that there are learning-induced changes in people’s decision strategies that change the cognitive mechanisms of moral decision-making from one decision strategy to another.
Additionally, the intersection of moral learning and cognitive resource allocation warrants further investigation. The resource-rationality framework suggests that individuals optimize their moral decision-making strategies based on cognitive constraints (Lieder et al., 2024). If making moral decisions in a way that takes their effects on distant others into account requires significant cognitive effort, is this cognitive cost high enough to create a bottleneck that limits willingness to expand the moral circle? In addition, if the learning process that increases the moral weights assigned to distant others also requires significant cognitive effort, is this equally prohibitive for an expanding circle of moral concern? Understanding how cognitive limitations shape moral learning trajectories could provide valuable insights into designing interventions that facilitate lasting moral inclusivity. Moreover, the decision by sampling (DbS; Stewart et al., 2006) account of decision-making predicts that parochialism could be a result of systematically biased samples from memory and the immediate environment. The utility-weighted sampling (UWS) model by Lieder, Griffiths and Hsu (2018) makes a similar prediction. While DbS and the UWS model have been influential in judgment and decision-making, there is a dearth of work on their applications to moral psychology.
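To illustrate the intuition behind the DbS prediction, the toy example below computes the subjective value of a fixed benefit to a distant other as its rank within a sample of comparison cases. The samples are invented; the point is only that a sample biased toward nearby, larger-looming stakes can make the same objective benefit appear negligible.

```python
# Minimal sketch of decision by sampling (Stewart et al., 2006): the subjective value
# of an outcome is its relative rank within a sample of comparison values retrieved
# from memory and the immediate environment.
def dbs_subjective_value(outcome, comparison_sample):
    return sum(1 for c in comparison_sample if outcome >= c) / len(comparison_sample)

benefit_to_distant_other = 50  # e.g., units of benefit produced by a donation
local_comparisons = [200, 150, 300, 120, 250]   # sample dominated by nearby, larger-looming stakes
broader_comparisons = [5, 10, 20, 200, 30]      # sample that also includes many small everyday stakes

print(dbs_subjective_value(benefit_to_distant_other, local_comparisons))    # 0.0: looks negligible
print(dbs_subjective_value(benefit_to_distant_other, broader_comparisons))  # 0.8: looks substantial
```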
Lastly, the role of cultural and historical context in moral learning remains an open question. Historical psychology research suggests that moral values evolve over time, influenced by social, economic, and technological changes (Pinker, 2011). Examining the historical dynamics of moral circle expansion may help elucidate the conditions under which societies became more inclusive or reverted to moral exclusion.
Conclusion
This section has synthesized key insights from philosophical and psychological theories of moral decision-making, emphasizing the mechanisms of moral learning and the factors influencing moral circle expansion and contraction. Moral learning is a dynamic process shaped by reinforcement histories, attentional biases, and social modeling, all of which contribute to the moral weights individuals assign to different entities. The selection between competing internal and external stimuli plays a crucial role in shaping moral judgments, highlighting the importance of cognitive resource allocation in moral learning.
Several open questions remain, particularly regarding the stability of moral circle expansion, the role of social learning, cognitive constraints on moral inclusivity, and historical trends in moral evolution. For instance, how do moral reinforcement learning and metacognitive reinforcement learning contribute to the expansion and contraction of the moral circle? In particular, can reinforcement learning play a role in determining the moral weights assigned to the various entities within the moral circle? Moreover, under which conditions does moral learning from the consequences of previous decisions expand versus contract an individual’s moral circle? Another important question is: how do attentional biases in moral learning influence whether a person’s moral circle will expand or contract? Future work should also explore whether interventions in the process of moral learning (Railton, 2017) can systematically recalibrate moral weights.
Addressing these questions through empirical research could provide a deeper understanding of how moral learning unfolds and inform strategies for promoting moral progress. By integrating insights from reinforcement learning, attention research, and social psychology, future work can further elucidate the cognitive and social mechanisms underlying moral learning and its implications for moral decision-making.
References
American Psychological Association, APA. (n.d.). Morality. In APA Dictionary of Psychology. Retrieved February 10, 2025, from https://dictionary.apa.org/morality
Anderson, B. A. (2016). The attention habit: How reward learning shapes attentional selection: The attention habit. Annals of the New York Academy of Sciences, 1369(1), 24–39. https://doi.org/10.1111/nyas.12957
Ayars, A. (2016). Can model-free reinforcement learning explain deontological moral judgments? Cognition, 150, 232–242. https://doi.org/10.1016/j.cognition.2016.02.002
Baron, J., & Szymanska, E. (2011). Heuristics and Biases in Charity. In D. M. Oppenheimer & C. Y. Olivola (Eds.), The science of giving: Experimental approaches to the study of charity (pp. 215–235). Psychology Press.
Baron, J., Guercay, B., Moore, A. B. & Starcke, K. (2012). Use of a Rasch model to predict response times to utilitarian moral dilemmas. Synthese 189, 107–117.
Baron, J. (2024). Thinking and Deciding (Fifth edition). New York: Cambridge University Press. Chapters 17–19.
Becker, F., Wirzberger, M., Pammer-Schindler, V., Srinivas, S., & Lieder, F. (2023). Systematic metacognitive reflection helps people discover far-sighted decision strategies: A process-tracing experiment. Judgment and Decision Making, 18, e15. https://doi.org/10.1017/jdm.2023.16
Bennis, W. M., Medin, D. L., & Bartels, D. M. (2010). The Costs and Benefits of Calculation and Moral Rules. Perspectives on Psychological Science, 5(2), 187–202. https://doi.org/10.1177/1745691610362354
Blair, R. J. R. (2017). Emotion-based learning systems and the development of morality. Cognition, 167, 38–45. https://doi.org/10.1016/j.cognition.2017.03.013
Bloom, P. (2010). How do morals change? Nature, 464, 490. https://doi.org/10.1038/464490a
Bloom, P. (2013). Just Babies: The Origins of Good and Evil. New York: Crown Publishers. Chapter 1 - The Moral Life of Babies, Chapter 7 - How to Be Good.
Buchanan, A. (2020). Our moral fate: Evolution and the escape from tribalism. MIT Press.
** Chapter 1: “Large Scale Moral Change: The Shift Toward Inclusive Moralities”, * Chapter 6: “Solving the Big Puzzle: How Surplus Reproductive Success Led to the Great Uncoupling of Morality from Fitness”, * Chapter 9: “Taking Charge of Our Moral Fate”.
Callaway, F., Rangel, A., & Griffiths, T. L. (2021). Fixation patterns in simple choice reflect optimal information sampling. PLOS Computational Biology, 17(3), e1008863. https://doi.org/10.1371/journal.pcbi.1008863
Castañón, R., Campos, Fco. A., Villar, J., & Sánchez, A. (2023). A reinforcement learning approach to explore the role of social expectations in altruistic behavior. Scientific Reports, 13(1), 1717. https://doi.org/10.1038/s41598-023-28659-0
Caviola, L., Schubert, S., Teperman, E., Moss, D., Greenberg, S., & Faber, N. S. (2020). Donors vastly underestimate differences in charities’ effectiveness. Judgment and Decision Making, 15(4), 509–516.
Caviola, L., Schubert, S., & Greene, J. D. (2021). The Psychology of (In)Effective Altruism. Trends in Cognitive Sciences, 25(7), 596–607. https://doi.org/10.1016/j.tics.2021.03.015
Chalik, L., & Rhodes, M. (2022). The development of moral circles. In Handbook of Moral Development (pp. 54-68). Routledge.
Chappell, R., Meissner, D., & MacAskill, W. (2024). An Introduction to Utilitarianism: From Theory to Practice. Indianapolis: Hackett Publishing Company.
Cohen, D. J., & Ahn, M. (2016). A subjective utilitarian theory of moral judgment. Journal of Experimental Psychology: General, 145(10), 1359.
Crimston, C. R., Hornsey, M. J., Bain, P. G., & Bastian, B. (2018). Toward a psychology of moral expansiveness. Current Directions in Psychological Science, 27(1), 14–19.
Crockett, M. J. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363–366.
Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273–292.
Cushman, F., Kumar, V., & Railton, P. (2017). Moral learning: Psychological and philosophical perspectives. Cognition, 167, 1–10. https://doi.org/10.1016/j.cognition.2017.06.008
Daw, N. D., & Tobler, P. N. (2014). Value Learning through Reinforcement. In Neuroeconomics: Decision Making and the Brain (DOI: 10.1016/B978-0-12-416008-8.00015-2; 2nd ed., pp. 283–298).
Decety, J., Michalska, K. J., & Kinzler, K. D. (2012). The Contribution of Emotion and Cognition to Moral Sensitivity: A Neurodevelopmental Study. Cerebral Cortex, 22(1), 209–220. https://doi.org/10.1093/cercor/bhr111
Ellemers, N., Van Der Toorn, J., Paunov, Y., & Van Leeuwen, T. (2019). The psychology of morality: A review and analysis of empirical studies published from 1940 through 2017. Personality and Social Psychology Review, 23(4), 332-366.
Esterman, M., & Rothlein, D. (2019). Models of sustained attention. Current Opinion in Psychology, 29, 174–180. https://doi.org/10.1016/j.copsyc.2019.03.005
Fiedler, S., & Glöckner, A. (2015). Attention and moral behavior. Current Opinion in Psychology, 6, 139–144. https://doi.org/10.1016/j.copsyc.2015.08.008
Fromell, H., Nosenzo, D., & Owens, T. (2020). Altruism, fast and slow? Evidence from a meta-analysis and a new experiment. Experimental Economics, 23(4), 979–1001. https://doi.org/10.1007/s10683-020-09645-z
Graham, J., Waytz, A., Meindl, P., Iyer, R., & Young, L. (2017). Centripetal and centrifugal forces in the moral circle: Competing constraints on moral learning. Cognition, 167, 58–65. https://doi.org/10.1016/j.cognition.2016.12.001
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, 293(5537), 2105–2108. https://doi.org/10.1126/science.1062872
Greene, J. (2014). Moral tribes: Emotion, Reason, and the Gap Between Us and Them. Penguin Books. ** Introduction, * Chapter 1, ** Chapter 6, ** Chapter 8.
Grimalda, G., Buchan, N. R., & Brewer, M. B. (2023). Global social identity predicts cooperation at local, national, and global levels: Results from international experiments. Frontiers in Psychology, 14, 1008567. https://doi.org/10.3389/fpsyg.2023.1008567
Groß, P., Burga, T., Pons, E., Spiteri, G., Maier, M., Cheung, V., Tahmasebi, Z., Lieder, F. (2024). What (Doesn’t) Limit People’s Prosociality in Social Dilemma Situations. European Conference on Positive Psychology (ECPP) 2024. http://dx.doi.org/10.13140/RG.2.2.10049.12649/1
Hare, R. M. (1981). Moral thinking: Its levels, method, and point. Oxford: Clarendon Press; New York: Oxford University Press. Chapters 1-3.
Holyoak, K. J., & Powell, D. (2016). Deontological coherence: A framework for commonsense moral reasoning. Psychological Bulletin, 142(11), 1179–1203. https://doi.org/10.1037/bul0000075
Jaeger, B., & Wilks, M. (2023). The relative importance of target and judge characteristics in shaping the moral circle. Cognitive Science, 47(10), e13362.
Jiang, Y. V., & Sisk, C. A. (2019). Habit-like attention. Current Opinion in Psychology, 29, 65–70. https://doi.org/10.1016/j.copsyc.2018.11.014
Kahane, G. (2012). On the Wrong Track: Process and Content in Moral Psychology. Mind & Language, 27(5), 519-545.
Kant, I. (1953). Groundwork of the metaphysics of morals. Translated as The moral law by H. J. Paton. London, UK: Hutchinson. (Original work published 1785)
Kleiman-Weiner, M., Saxe, R., & Tenenbaum, J. B. (2017). Learning a commonsense moral theory. Cognition, 167, 107–123. https://doi.org/10.1016/j.cognition.2017.03.005
Kohlberg, L., & Hersh, R. H. (1977). Moral development: A review of the theory. Theory Into Practice, 16(2), 53–59. https://doi.org/10.1080/00405847709542675
Kohlberg, L. (1981). Essays on moral development: The philosophy of moral development: Moral Stages and the idea of justice (Vol. 1). San Francisco: Harper & Row.
Kohlberg, L. (1984). Essays on moral development: The psychology of moral development: The nature and validity of moral stages (Vol. 2). San Francisco: Harper & Row.
Krajbich, I., Armel, C., & Rangel, A. (2010). Visual fixations and the computation and comparison of value in simple choice. Nature Neuroscience, 13(10), 1292–1298. https://doi.org/10.1038/nn.2635
Krajbich, I. (2019). Accounting for attention in sequential sampling models of decision making. Current Opinion in Psychology, 29, 6–11. https://doi.org/10.1016/j.copsyc.2018.10.008
Lee, J., & Holyoak, K. J. (2020). “But he’s my brother”: The impact of family obligation on moral judgments and decisions. Memory & Cognition, 48(1), 158–170. https://doi.org/10.3758/s13421-019-00969-7
Levine, S., Chater, N., Tenenbaum, J. B., & Cushman, F. (2024). Resource-rational contractualism: A triple theory of moral cognition. Behavioral and Brain Sciences, 1–38. https://doi.org/10.1017/S0140525X24001067
Lieder, F., Shenhav, A., Musslick, S., & Griffiths, T. L. (2018). Rational metareasoning and the plasticity of cognitive control. PLOS Computational Biology, 14(4), e1006043. https://doi.org/10.1371/journal.pcbi.1006043
Lieder, F., & Griffiths, T. L. (2020). Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43, e1. https://doi.org/10.1017/S0140525X1900061X
Lieder, F., Prentice, M., Corwin-Renner, E. (2022). An interdisciplinary synthesis of research on understanding and promoting well-doing. Social and Personality Psychology Compass, e12704. http://dx.doi.org/10.1111/spc3.12704
Lieder, F., Callaway, F., & Griffiths, T. L. (2024). Rational use of cognitive resources.
* Chapter 8: Improving decisions: optimal boosting, nudging, and cognitive prostheses
MacAskill, W. (2022). What We Owe The Future. Simon and Schuster.
* Chapter 3: Moral Change.
Maier, M., Cheung, V., Lieder, F. (2024) Metacognitive Learning from Consequences of Past Choices Shapes Moral Decision-Making. PsyArXiv Preprint. https://osf.io/preprints/psyarxiv/gjf3h?view_only=
Marshall, J., & Wilks, M. (2024). Does Distance Matter? How Physical and Social Distance Shape Our Perceived Obligations to Others. Open Mind, 8, 511–534. https://doi.org/10.1162/opmi_a_00138
McFarland, S., Hackett, J., Hamer, K., Katzarska‐Miller, I., Malsch, A., Reese, G., & Reysen, S. (2019). Global Human Identification and Citizenship: A Review of Psychological Studies. Political Psychology, 40(S1), 141–171. https://doi.org/10.1111/pops.12572
Meany, P. (2021). An Introduction to Mill’s The Subjection of Women. Libertarianism.org. https://www.libertarianism.org/articles/introduction-mills-subjection-women
Mennen, A. C., Norman, K. A., & Turk-Browne, N. B. (2019). Attentional bias in depression: Understanding mechanisms to improve training and treatment. Current Opinion in Psychology, 29, 266–273. https://doi.org/10.1016/j.copsyc.2019.07.036
Nichols, S. (2021). Rational rules: Towards a theory of moral learning (First edition). Oxford University Press. Chapters 1-4.
Ord, T. (2009). Beyond Action. PhD dissertation, University of Oxford. Chapters 1-2.
Parfit, D. (1984). Reasons and Persons. Oxford: Oxford University Press.
* Chapter 3 - Five Mistakes in Moral Mathematics.
Pärnamets, P., Johansson, P., Hall, L., Balkenius, C., Spivey, M. J., & Richardson, D. C. (2015). Biasing moral decisions by exploiting the dynamics of eye gaze. Proceedings of the National Academy of Sciences, 112(13), 4170–4175. https://doi.org/10.1073/pnas.1415250112
Parr, T., & Friston, K. J. (2019). Attention or salience? Current Opinion in Psychology, 29, 1–5. https://doi.org/10.1016/j.copsyc.2018.10.006
Pinker, S. (2011). The Better Angels of our Nature: Why Violence has Declined. New York: Penguin Group. Chapter 9 - Better Angels.
Pleskac, T. J., Yu, S., Grunevski, S., & Liu, T. (2023). Attention biases preferential choice by enhancing an option’s value. Journal of Experimental Psychology: General, 152(4), 993–1010. https://doi.org/10.1037/xge0001307
Pong, V., & Tam, K.-P. (2023). Relationship between global identity and pro-environmental behavior and environmental concern: A systematic review. Frontiers in Psychology, 14, 1033564. https://doi.org/10.3389/fpsyg.2023.1033564
Proctor, R. W., & Vu, K.-P. L. (2023). Historical overview of research on attention. In R. W. Proctor & K.-P. L. Vu, Attention: Selection and control in human information processing. (DOI: 10.1037/0000317-001; pp. 3–28). American Psychological Association.
Railton, P. (2017). Moral Learning: Conceptual foundations and normative relevance. Cognition, 167, 172–190. https://doi.org/10.1016/j.cognition.2016.08.015
Rawls, J. (1971/2005). A Theory of Justice. Belknap Press.
Reed II, A., & Aquino, K. F. (2003). Moral identity and the expanding circle of moral regard toward out-groups. Journal of Personality and Social Psychology, 84(6), 1270.
Rhodes, M., & Wellman, H. (2017). Moral learning as intuitive theory revision. Cognition, 167, 191–200. https://doi.org/10.1016/j.cognition.2016.08.013
Reynante, B. M., Wilcox, J. E., Stephenson, O. L., Lieder, F., & Lacopo, C. (2024). Metachangemaking: An interdisciplinary synthesis of research on cultivating changemakers. Journal of Moral Education, 1-26.
Sauer, H., Blunden, C., Eriksen, C., & Rehren, P. (2021). Moral progress: Recent developments. Philosophy Compass, e12769. https://doi.org/10.1111/phc3.12769
Schubert, S., & Caviola, L. (2024). Effective Altruism and the Human Mind: The Clash Between Intuition and Impact. Oxford University Press.
* Chapter 8: Fundamental Value Change
Seymour, B. (2009). Altruistic Learning. Frontiers in Behavioral Neuroscience, 3. https://doi.org/10.3389/neuro.08.023.2009
Shepard, R. N. (2008). The step to rationality: The efficacy of thought experiments in science, ethics, and free will. Cognitive Science, 32, 23–26. * Section 11: The cognitive grounds of moral principles (p. 23-26).
Singer, P. (2011). The Expanding Circle: Ethics, Evolution, and Moral Progress. Princeton, New Jersey: Princeton University Press. Chapter 4 - Reason.
Spiteri, G.W., & Lieder, F. (2024). The role of attention in moral learning from consequences. Master’s dissertation, UCLA.
Spiteri, G.W., Kim, S., & Lieder, F. (2024). Identification with world citizenship predicts life satisfaction. Under review at Scientific Reports. [Pre-Print]. DOI: 10.21203/rs.3.rs-5349047/v1
Spy. (1873). A Feminine Philosopher [Cartoon].
Stewart, N., Chater, N., & Brown, G. D. A. (2006). Decision by sampling. Cognitive Psychology, 53(1), 1–26. https://doi.org/10.1016/j.cogpsych.2005.10.003
Sunstein, C. R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28, 531–573. https://doi.org/10.1017/S0140525X05000099
Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press. Chapters 1-3.
Teoh, Y. Y., Yao, Z., Cunningham, W. A., & Hutcherson, C. A. (2020). Attentional priorities drive effects of time pressure on altruistic choice. Nature Communications, 11(1), 3534. https://doi.org/10.1038/s41467-020-17326-x
Teoh, Y. Y., & Hutcherson, C. A. (2022). The Games We Play: Prosocial Choices Under Time Pressure Reflect Context-Sensitive Information Priorities. Psychological Science, 33(9), 1541–1556. https://doi.org/10.1177/09567976221094782
Thielmann, I., Spadaro, G., & Balliet, D. (2020). Personality and prosocial behavior: A theoretical framework and meta-analysis. Psychological Bulletin, 146(1), 30–90. https://doi.org/10.1037/bul0000217
Tuen, Y. J., Bulley, A., Palombo, D. J., & O’Connor, B. B. (2023). Social value at a distance: Higher identification with all of humanity is associated with reduced social discounting. Cognition, 230, 105283. https://doi.org/10.1016/j.cognition.2022.105283
Tusche, A., & Bas, L. M. (2021). Neurocomputational models of altruistic decision‐making and social motives: Advances, pitfalls, and future directions. Wiley Interdisciplinary Reviews: Cognitive Science, 12(6), e1571.
Van Lange, P. A., Joireman, J., Parks, C. D., & Van Dijk, E. (2013). The psychology of social dilemmas: A review. Organizational Behavior and Human Decision Processes, 120(2), 125-141.
Watson, L., & Wilson, A. T. (2019). Review Essay: Exemplarist Moral Theory. Journal of Moral Philosophy, 16(6), 755–768. https://doi.org/10.1163/17455243-01606003
World Bank. (2024). Poverty, Prosperity, and Planet Report 2024: Pathways Out of the Polycrisis. World Bank.