
Including, but not limited to, selection forces for: genes, memes, economic power, and political power.

Motivation for asking: This is part of my analysis on whether we should aim to make philanthropy obsolete.



2 Answers

I wrote down some musings about this (including a few relevant links) in appendix 2 here.

Epistemic status: narrative-driven; armchair thinking; contains large simplifications, suppositions, and speculations

Conclusion: I don't know whether the overall effect selects for or against altruism

Historically

Humans might be good at detecting whether someone is altruistic. So from an evolutionary psychology perspective, altruism might act as a commitment mechanism for cooperativeness (but remember, we're Adaptation-Executers, not Fitness-Maximizers). Alternatively, similar alleles could be responsible for both cooperativeness and altruism. In either case, those seem like plausible explanations for why some amount of altruism was selected for, and would continue being selected for.

But I want to focus my answer mostly on speculating about new and future selection pressures for or against altruism. To find the literature on its historical selection pressures, search for the term 'problem of altruism'. The above is just a quick thought, not a summary of that literature.

General

Narratives for increased selectiveness

It could be that we have greater opportunities for cooperation than we used to. It's now possible to cooperate with people throughout the world, not just with your local tribe. Plus, with winner-takes-most financial dynamics, this could increase the benefits of cooperating in large groups.

Also, a tribe of people sharing the same moral values will cooperate much more easily. A pure negative preference utilitarian giving money to another pure negative preference utilitarian knows that this money will be used in pursuit of a shared goal. A pure egoist can't do this as easily with other pure egoists, since they all have different goals / they all want to help different people (i.e. themselves, respectively). It's much cheaper for people sharing moral values to cooperate, as they don't have to design robust contracts.

Genes

Narratives for increased selectiveness

A) It could be that altruistic people think that having more people in absolute terms, or more people like themselves in relative terms, is a good thing, and so make an effort to raise more children or to conceive more biological children, respectively, on average.

B) It could be that when we get the technology to do advanced genetic engineering in humans, subsidies or laws encourage or force selecting prosocial genes for the benefit of the common good.

Narratives for decreased selectiveness

A) It could be that altruistic people give resources away to the extent that they don't have enough left to raise (as many) children, or to raise them well enough.

B) It could be that altruistic people think it's wrong to create new people, on either deontological or utilitarian grounds. Deontological grounds could include being directly against creating new humans, or, indirectly, being against taking welfare money to do so. From a utilitarian perspective, they could be failing to see the longer-term consequences of the resulting selection effect, or they could rightfully have weighted this consideration as less important (or have come to the right conclusion for epistemically wrong reasons).

C) It could be that when we get the technology to do advanced genetic engineering in humans, people want their kids to mostly care about themselves and their family, and not care as much about society.

Economic power

Related: Donating now vs later (on Causepriotization.org)

Narratives for increased selectiveness

It seems likely that egoists have faster diminishing returns on marginal dollars, and, as a consequence, are also more risk-averse about making a lot of money. I.e. you can only save yourself once (sort of), but there are a lot of other people to save. Although if you have fringe moral values, they might be so neglected that this isn't as accurate.

As a potential example of altruistic people taking more risks, it seems more plausible that an egoist offered 100M USD to sell zir startup would take the money than that an altruist would, given that an altruistic person might still have only slowly diminishing returns on money at that level.
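This asymmetry can be sketched as a toy expected-utility calculation. The utility functions and dollar figures below are my own illustrative assumptions, not from the post: logarithmic utility stands in for an egoist's sharply diminishing returns on personal wealth, and linear utility for an altruist whose marginal dollar keeps helping additional people.

```python
import math

def expected_utility(utility, outcomes):
    """Expected utility over a list of (probability, dollars) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

# Two stylized utility functions (illustrative assumptions, not from the post):
egoist = math.log           # sharply diminishing returns on personal wealth
altruist = lambda x: x      # near-linear: each extra dollar helps more strangers

sure_sale = [(1.0, 100e6)]                # sell the startup for $100M
risky_growth = [(0.1, 2e9), (0.9, 1e6)]   # keep going: 10% chance of $2B, else $1M

# The log-utility egoist prefers the sure sale; the linear altruist gambles.
print(expected_utility(egoist, sure_sale) > expected_utility(egoist, risky_growth))      # True
print(expected_utility(altruist, sure_sale) > expected_utility(altruist, risky_growth))  # False
```

The point is only directional: any sufficiently concave utility flips the preference toward the sure sale, while a scale-insensitive (near-linear) utility favors the gamble.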

It could also be that altruistic people, caring about people in the future, are more likely to invest their money long-term, and so gain power over a larger fraction of the economy.

Narratives for decreased selectiveness

It could be that philanthropists, by redistributing their wealth directly or through public goods, or by helping oppressed groups, see their relative capacity to influence the world diminish as they become relatively less wealthy than those who don't. Trivially, if they are rational, they would only do that if they expected it to be the best course of action. But their altruistic instincts might incite them toward more rapid gratification, especially if they want to signal those instincts and other mechanisms, such as Donor-Advised Funds, don't allow them to do so as much.

Other

Ems

On pages 302-303 of "The Age of Em", Robin Hanson explains what ze thinks altruistic ems will donate money to, and why they would choose those cause areas. Ze also says "Like people today, ems are eager to show their feelings about social and moral problems, and their allegiance to pro-social norms", although I think ze doesn't explain why; it might just be a premise of the book that ems are a priori similar to humans, just living under different incentive structures.

Comments (2)

In his blog post "Why Might the Future Be Good?", Paul Christiano writes:

What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.

(Please read all of "How Much Altruism Do We Expect?" for the full context.)

The classical answer to this is that altruism towards strangers is not evolutionarily adaptive. This is because the resources the altruistic give away benefit their own and others' descendants equally, while the non-altruistic get those same benefits for their descendants without having to pay the cost. See also the tragic story of George R. Price.
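This classical condition is often summarized by Hamilton's rule (standard population-genetics notation, added here for illustration, not from the comment itself):

```latex
% Hamilton's rule: an altruistic trait is selected for only when
\[ r\,b > c \]
% r = genetic relatedness between actor and recipient
% b = fitness benefit to the recipient
% c = fitness cost to the actor
```

For unrelated strangers, $r \approx 0$, so the condition fails for any positive cost $c$, which is why altruism toward strangers is the hard case.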
