
If we’re interested in building the best version of effective altruism, it’s natural to spend some time thinking about why people should join.

Presumably people should join because it’s valuable, but how is it valuable? In fact there are two versions of this question, depending on whose perspective of “valuable” we use. Both seem relevant, and they have different answers:

First is the impersonal perspective: in what ways does effective altruism (as a community, or an intellectual project) help the world? This is important to understand because it can help us to focus on steering towards versions which are more valuable. [One might also have other personal reasons for wanting to get people to join the movement, but in practice I think these are usually significantly weaker.]

Second is the personal perspective: for individuals who might engage with effective altruism, why is it valuable for them personally to do so? Impersonal reasons can be a part of this, but may not be the whole story. We might think of this as asking: what is the value proposition for effective altruism? This is important to understand because it can help us to build a version which appeals more strongly to the people we would like to attract.

Impersonal perspective

Effective altruism helps the world by causing individuals to do more good in their lives. Roughly, the degree to which it helps depends on:

  1. The number of individuals who take valuable action thanks to effective altruism

  2. The capabilities and resources of those individuals

  3. The degree to which they take valuable action

The degree to which individuals take valuable action in turn depends on:

  1. Their values -- how much they are aiming for something which creates a better world by our lights;

  2. Their knowledge about what actions are effective in pursuit of those values;

  3. The degree to which they take action based on that knowledge.

In order to be significantly valuable, effective altruism wants to affect people who between them have significant resources, and to shift them significantly in a positive direction on one or more of these axes. The relative importance of each of the axes depends on the degree to which each is currently a bottleneck on actually helping the world (for the people who engage with effective altruism).

Personal perspective

Effective altruism also offers significant personal benefits for people who engage with it. In practice these can be a large part of what appeals to and motivates individuals. (This section is just observational, rather than an attempt at comprehensive analysis, and it might miss something important.)

I see three potentially large value propositions for individuals engaging with effective altruism:

  1. The knowledge about how to be effective at altruism;
    • Offering people knowledge about what is effective (or tools for finding this) directly helps them to pursue goals they already have.
  2. A community;
    • Communities offer socialising, friendship, support. A common goal lets people help each other and band together in ways often lacking in today’s society. Communities can create opportunities for their members.
  3. A sense of meaning.
    • For some people, effective altruism can give them an enhanced sense of personal purpose or significance (which can be reinforced by having a community but does not necessarily rely on one). It might offer a turn away from nihilism or generally feeling helpless in such a large world.
    • This is a mechanism for what is valuable from the impersonal perspective to also be valuable from the personal perspective.

This suggests an empirical question:

How important are these different value propositions in attracting people? How does it vary by person?

There has been some work on this already, for example asking people what they found valuable about local communities in a 2015 survey of effective altruists. It would be great to see much more explicit investigation.

It also suggests a strategic question:

To what extent should we try to separate the value propositions?

Separating the value propositions means making it easier for people to access one without the others: for example, by presenting the community as a community of people following the ideas of effective altruism rather than as the effective altruism community itself, or by making it easy for people to access the knowledge without being pushed to join the community.

Tightly linking the value propositions might produce a more attractive total package (to individuals), and help people to move from one source of value to another. But separation could be helpful if some people were put off by some aspects. For example, although meaning can be a strong draw for some people, it can play into perceptions that effective altruism is weird or demanding. Separation could also help to insulate against catastrophic failure: for example if there were a scandal in the community, the reputation of the knowledge and tools that have been developed would likely be less damaged if they were more separate.

Synergies and tensions between impersonal and personal perspectives

From the impersonal perspective, there were three axes (values, knowledge, action) that it could be good to move individuals on. Is being moved also valuable for the individuals?

I think it depends. For each axis there are ways of encouraging people to move which are likely to feel valuable for the individual, and ways which may feel hostile or inconsiderate.

Increasing knowledge about what is effective is helpful from the impersonal perspective, but it’s also one of the value propositions for individuals. This is great: sharing knowledge becomes a win-win, since it’s desired by both sides. It also gives us perspective on why decision-relevant research is particularly important: it creates knowledge which will often be acted on, and through being shared can attract people towards effective altruism.

(This is true of offering knowledge in an even-handed way. Trying to get someone to believe a particular proposition can get into the realm of propaganda, which is generally not considerate.)

In contrast to offering people knowledge, which generally feels cooperative, trying to improve someone’s values (by your standards) or pressuring them to take greater action can more easily be a non-cooperative interaction. I actually think that discussions about ethical issues, or about what we should do, are very valuable if undertaken with a spirit of open-mindedness and humility; but they can take on an adversarial character if people instead take the approach of trying to persuade others of what they see as certain truths. This creates a tension between what is impersonally valuable (them making large such shifts) and what is valuable to them personally (perhaps not doing so).

In the case of demanding action, this tension has been recognised and discussed at least somewhat within the community (without consensus). I think the case of trying to shift values is analogous; more of the related discussion seems to be against pushing people, but this may be in response to observing potentially damaging behaviour.

In both cases my personal view is that we should err on the side of not pushing people. There are substantial benefits to being considerate in our interactions with people who might join the community. A particular worry is that pushing people can create hostility towards effective altruism (anecdotally, it has in some cases; it would be great to see a thorough investigation of this). I’ve argued that it’s important for the long-term growth of the movement that we take pains to avoid hostility. And I suspect that the most reasonable people (whom we would especially like to attract) are drawn to communities they see as offering support rather than pressure.

There are costs to being careful about this. A desire to avoid pushing people’s values might mean being slightly more hesitant to recommend working to reduce existential risk without discussing values first, since the importance of the cause area depends on ethical views.

However, I don’t think we need to push values or actions where not wanted, because there are ways to help people move on these axes which build out of the personal value propositions:

  • Having a sense of meaning can be motivating for action.
  • A supportive community can:
    • Help people to think through and clarify their values.
      • This is only valuable in the impersonal sense if many people, when they reflect on their values, move more closely into alignment with typical ‘effective altruist’ values; for example, coming to think that cost-effectiveness is morally relevant. I do think this is probably true, but my reasons for believing it are mostly anecdotal.
    • Help support them to take action that might otherwise be difficult.
      • For example, I think that the Giving What We Can community has been great in helping support people in making a decision to donate a slice of their income (although it has attracted at least some criticism for pushing lifetime commitments to students).
  • We can offer knowledge which helps people to reflect on their own core values, or to plan to be personally more effective.

While there could still be tension when an effective way to shift people is not considerate of their preferences, refraining is likely to mean shifting them less efficiently rather than not at all; this is a smaller price, and one the benefits of considerateness can more easily overcome.

Conclusions:

  • For researchers:

    • Consider further investigation of these two questions:

      • [Empirical] How important are the different value propositions in attracting people? How does it vary by person?

      • [Strategic] To what extent should the EA community try to separate the value propositions?

  • For community-builders:

    • We should be cooperative towards potential members:

      • We should not mislead people about what is hoped for from the impersonal perspective;

      • We should commit to offering some things which are valuable from the personal perspective rather than the impersonal perspective (when the costs are worthwhile).

  • For community members:

    • When talking to other people who might get involved in effective altruism, it can be helpful to bear in mind how what they are likely to get out of it differs from the benefits they are likely to provide by being involved.

    • In particular:

      • We should share knowledge widely (but not overstate confidence);

      • We should provide support to take action, not push people to action;

      • We should provide resources to help people reflect on their values, not push value change at them.

Thanks to Goodwin Gibbins, Sam Hilton, Stefan Schubert and others for conversations and comments which fed into this article.

 
Comments (3)



Does anyone have recommendations for activities that are valuable for people considering their values? Or people considering / doing action?

This is a great question and I think deserves further thought.

Helping people consider their values was one of the major goals Daniel Kokotajlo and I had in designing this flowchart. One possible activity would be to read through and/or discuss parts of that.

There's a website (whose link I'm trying to find) of EA-related tasks ranging from 2 minutes to a few hours that could be used in a discussion/hackathon meetup. There's also effectivealtruism.org's new 'Ways to get involved' guide. THINK also has worksheets covering various issues, for use personally or in groups. Is this the type of material you were looking for?
