According to social psychologist C. Daniel Batson, psychological altruism refers to a motivational state with the ultimate goal of increasing the well-being of others.


Not all altruistic behaviors share the same psychological motivation (one can act altruistically out of mere social convention at a given moment), but psychologically motivated altruism is undoubtedly the most promising when it comes to increasing the sheer number of altruistic acts.


To promote altruistic acts, the most logical strategy is therefore to promote psychologically motivated altruism. Batson, in particular, links altruistic motivation to the development of empathy. But empathy, as a basic response to human behavior, has its limitations (as a famous book has aptly pointed out). These limitations largely disappear when we conceive of "principled empathy," in which symbolic mechanisms with strong emotional power are used to establish rational criteria for empathy and altruism.

A principled altruistic psychology, unlike altruism based merely on empathy, requires a certain level of cognitive development. In theory, not every altruistic principle is compatible with empathy-based altruism: consider the Marxists who believed that, to achieve universal justice, the end justified the means (or certain speculations about an animalism hostile to humans). But altruistic psychology (or "principled empathic altruism") can address these problems if it conceives of human development as inseparable from cultural development. An altruistic culture based on empathy, while never free of ethical dilemmas, cannot fall prey to utilitarian fallacies.

The goal must be to develop an altruistic culture by using, through trial and error, valid strategies for developing an altruistic personality (an altruistic psychology) and for controlling innate human aggression.

Jeremy Rifkin points out that an experience as successful at developing empathic connection between individuals as Alcoholics Anonymous did not arise from psychological science, but from individuals' own motivation to cooperate for the common good through the cultivation of empathy.

Starting, too, from the conception of a "dramaturgical consciousness" in human psychology (after humanity has passed through stages of "mythological consciousness," "theological consciousness," "ideological consciousness," and "psychological consciousness"), Jeremy Rifkin considers the possibilities of applying the "deep acting" that Stanislavsky proposed in his method for actors to the altruistic development of the personality. To this we can add the possibilities of psychological priming for the development of altruistic empathy, strategies drawn from cognitive behavioral therapy (and from coaching techniques), the cultivation of the arts (especially literature), and the long tradition of psychological strategies characteristic of the compassionate religions. If the objective of the social behavior to be achieved is kept clear (non-aggressive social behavior, with the highest possible level of empathy, benevolence, and altruism), a process of trial and error would select the most productive strategies and discard the less productive ones.

We have evidence that some societies today are already more altruistic than others, and it therefore seems sensible to explore the limits of the social development of altruism, for which a large number of potentially effective instruments are available.
