[Note: [1]Big tent refers to a group that encourages "a broad spectrum of views among its members". This is not a post arguing for "fast growth" of highly-engaged EAs (HEAs), but rather a recommendation that, as we inevitably get more exposure, we try to represent and cultivate our diversity while ensuring we present EA as a question.]


This August, when Will MacAskill launches What We Owe The Future, we will see a spike of interest in longtermism and effective altruism more broadly. People will form their first impressions – these will be hard to shake.

After hearing of these ideas for the first time, they will be wondering things like:

  • Who are these people? (Can I trust them? Are they like me? Do they have an ulterior agenda?)
  • What can I do (both literally right now and as it might impact my decisions over time)?
  • What does this all mean for me and my life?

If we're lucky, they'll investigate these questions. The answers they get matter (and so does their experience finding those answers).

I get the sense that effective altruism is at a crossroads right now. We can either become a movement of people who appear dedicated to a particular set of conclusions about the world, or we can become a movement of people who appear united by a shared commitment to using reason and evidence to do the most good we can.

In the former case, I expect us to become a much smaller group, one whose focus is easier to coordinate, but also one that's more easily dismissed. People might see us as a bunch of nerds[2] who have read too many philosophy papers[3] and who are out of touch with the real world.

In the latter case, I'd expect us to become a much bigger group. I'll admit that it's also a group that's harder to organise (people are coming at the problem from different angles and with varying levels of knowledge). However, if we are to have the impact we want, I'd bet on the latter option.

I don't believe we can – or should – simply tinker on the margins forever, nor try to act as a "shadowy cabal". As we grow, we will start pushing for bigger and more significant changes, and people will notice. We've already seen this with the increased media coverage of things like political campaigns[4] and prominent people who are seen to be EA-adjacent[5].

A lot of these first impressions we won't be able to control. But we can try to spread good memes about EA (inspiring and accurate ones), and we do have some level of control over what happens when people show up at our "shop fronts" (e.g. prominent organisations, local and university groups, conferences etc.).

I recently had a pretty disheartening exchange in which I heard from a new GWWC member who'd started to help run a local group and felt "discouraged and embarrassed" at an EAGx conference. They left feeling like they weren't earning enough to be "earning to give" and that they didn't belong in the community if they weren't doing direct work (or didn't have an immediate plan to drop everything and change). They said this "poisoned" their interest in EA.

Experiences like this aren't always easy to prevent, but it's worth trying.

At Giving What We Can, we are aware that we are one of these "shop fronts". So we're currently thinking about how we represent worldview diversity within effective giving and what options we present to first-time donors. Some examples:

  • We're focusing on providing easily legible options (e.g. larger organisations with an understandable mission and strong track record[6], instead of more speculative small grants that foundations are better placed to make) and easier decisions (e.g. "I want to help people now" or "I want to help future generations").
  • We're also cautious about how we talk about The Giving What We Can Pledge to ensure that it's framed as an invitation for those who want it and not an admonition of those for whom it's not the right fit.
  • We're working to ensure that people who first come across EA via effective giving can find their way to the actions that best fit them (e.g. by introducing them to the broader EA community).
  • We often cross-promote careers as a way of doing good, but we're careful to do so in a way that doesn't diminish those who aren't in a position to switch careers, and that leaves people feeling good about their donations.

These are just small ways to make effective altruism more accessible and appealing to a wider audience.

Even if we were just trying to reach a small number of highly-skilled individuals, we wouldn't want to make life difficult for them by having effective altruism (or longtermism) seem too weird to their family or friends (people are less likely to take actions when they don't feel supported by their immediate community). Even better, we want people's interest in these ideas, and the actions they take, to spur more positive actions by those in their lives.

I believe we need the kind of effective altruism where:

  • A university student says to their parents they're doing a fellowship on AI policy because their effective altruism group told them it'd be a good fit; their parents Google “effective altruism” and end up thrilled[7] (so much so that they end up donating).
  • A consultant tells their spouse they're donating to safeguard the long-term future; their spouse looks into it, realises that their own marketing and communications skills are needed at Charity Entrepreneurship, and applies for a role.
  • A rabbi gets their congregation involved in the Jewish Giving Initiative; one of them goes on to take the Giving What We Can Pledge; they read the newsletter and start telling their friends who care about climate change about the Founders Pledge Climate Fund.
  • A workplace hosts a talk on effective giving, many people donate to a range of high-impact causes, and one of them goes "down the rabbit hole", and their subsequent career shift is into direct work.
  • A journalist covering a local election sees that one of the candidates has an affiliation with effective altruism; they understand the merits and write favourably about the candidate while carefully communicating the importance of the positions the candidate is putting forward.

Many paths to effective altruism. Many positive actions taken.

For this to work, I think we need to: 

  • Be extra vigilant to ensure that effective altruism remains a "big tent".
  • Remain committed to EA as a question.
  • Remain committed to worldview diversification.
  • Make sure that it is easy for people to get involved and take action.
  • Celebrate all the good actions[8] that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).
  • Communicate our ideas in high fidelity while remaining brief and to the point (be careful of the memes we spread).
  • Avoid coming across as dogmatic, elitist, or out-of-touch.
  • Work towards clear, tangible wins that we can point to.
  • Keep trying to have difficult conversations without resorting to tribalism.
  • Try to empathise with the variety of perspectives people will come with when they come across our ideas and community.
  • Develop subfields intentionally, but don't brand them as "effective altruism."
  • Keep the focus on effective altruism as a question, not a dogma.

I'm not saying that "anything goes", that we should drop our standards, or that we shouldn't be bold enough to make strong and unintuitive claims. I think we must continue to be truth-seeking to develop a shared understanding of the world and what we should do to improve it. But I think we need to keep our minds open to the fact that we're going to be wrong about a lot of things, new people will bring helpful new perspectives, and we want to have the type of effective altruism that attracts many people who have a variety of things to bring to the table.

  1. ^

    After reading several comments, I think I could have done better by defining "big tent" at the beginning, so I added this definition and clarification after the post went up.

  2. ^

    I wear the nerd label proudly

  3. ^

    And love me some philosophy papers

  4. ^

    e.g. This, this, and many more over the past few weeks

  5. ^

    e.g. Despite it being a stretch: this

  6. ^

    We are aware that this is often fungible with larger donors, but we think that’s okay for reasons we will get into in future posts. We also expect that the type of donor who’s interested in fungibility is a great person to get more involved in direct work, so we are working to ensure that these deeper concepts are still presented to donors and that there's a path for people to go “down the rabbit hole”.

  7. ^

    As opposed to concerned: I've heard people share that their family or friends were worried about their involvement after looking into it.

  8. ^

     Even beyond our typical recommendations. I’ve been thinking about “everyday altruism” having a presence within our local EA group (e.g. giving blood together, volunteering to teach ethics in schools, helping people to get to voting booths etc) – not skewing too far this way, but having some presence could be good. As we’ve seen with Carrick’s campaign, doing some legible good within your community is something that outsiders will look for and will judge you on. Plus some of these things could (low confidence) make a decent case for considering how low cost they might be.

Comments (78)

There's value in giving the average person a broadly positive impression of EA, and I agree with some of the suggested actions. However, I think some of them risk being applause lights – it's easy to say we need to be less elitist, etc., but I think the easy changes you can make sometimes don't address fundamental difficulties, and making sweeping changes has hidden costs when you think about what they actually mean.

This is separate from any concern about whether it's better for EA to be a large or small movement.

Be extra vigilant to ensure that effective altruism remains a "big tent".

Edit: big tent actually means "encompassing a broad spectrum of views", not "big movement". I now think this section has some relevance to the OP but does not centrally address the above point.

As I understand it, this means spending more resources on people who are "less elite" and less committed to maximizing their impact. Some of these people will go on to make career changes and have lots of impact, but it seems clear that their average impact will be lower. Right now, EA has limited community-building capacity, so the opportunity cost is huge. If we allocate more resources to "big tent" efforts, ... (read more)

[anonymous]

Celebrate all the good actions[8] that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).

--

I'm uncomfortable doing too much celebrating of actions that are much lower impact than other actions

I think the following things can both be true:

  • The best actions are much higher impact than others and should be heavily encouraged.
  • Most people will come in on easier but lower impact actions, and if there isn't an obvious and stepped progression to get to higher impact actions, with support to facilitate this, then many will fall out unnecessarily. Or they may be put off entirely if 'entry level' actions either aren't available or receive very low reward or status.

I didn't read the OP as saying that we should settle with lower impact actions if there's the potential for higher impact ones. I read it as saying that we should make it easier for people to find their level - either helping them to reach higher impact over time if for whatever reason they're unable or unwilling to get there straight away, or making space for lower impact actions if for whatever reason that's what's available. 

Some of this will involve shouting out and rewarding less impact... (read more)

Luke Freeman 🔸
Thanks Rob. I think you just made my point better than me! 😀

Thanks for your response. I tend to actually agree with a lot (but not all) of these points, so I totally own that some of this just needs clarification, which wouldn't be the case if I'd been clearer in my original post.

this means spending more resources on people who are "less elite" and less committed to EA

There’s a difference between actively recruiting from “less elite” sources and being careful about your shopfronts so that they don’t put off would-be effective altruists and create enemies of could-be allies. I’m pointing much more to the latter than the former (though I do think there’s value in the former too).

 

I'm not saying we should shun people for taking a suboptimal action, but we should be transparent about the fact that (a) some altruistic actions aren't very good and don't deserve celebration, and (b) some actions are good but only because they're on the path to an impactful career.

I’m mostly saying we shouldn’t shun people for taking a suboptimal action. But also, we should be careful about how confident we are about what is suboptimal or not. And we should use positive reinforcement of good actions instead of guilting people for not reaching a particular standard. T... (read more)

big tent doesn’t mean actively increasing reach. Big tent means encouraging and showcasing the diversity that exists within the community so that people can see that we’re committed to the question of “how can we do the most good”, not a specific set of answers.

Thanks, this clears up a lot for me.

Luke Freeman 🔸
Great! I definitely should have defined that up front!

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'? 

I got this impression from what I understood your main point to be, something like: 

There is a tail of talented people who will make the most impact, and any diversion of resources towards less talented people will be lower expected value.

I think there are several assumptions in both of these points that I want to unpack (and disagree with).

On the question of whether there is a unidimensional scale of talented people who will make the most impact: I believe that the EA movement could be wrong about the problems it thinks are most important, and/or the approaches to solving them. In the world where we are wrong, if we deter many groups with important skillsets or approaches that we didn't realise were important because we were overconfident in some problems/solutions, then that's quite bad. Conversely, in the world where we are right, yes maybe we have invested in more places than turned out to be necessary, but the downside risks seem smaller ... (read more)

(I also felt that the applause lights argument largely didn’t hold up and came across as unnecessarily dismissive, I think the comment would have held up better without it)

Thomas Kwa
Thanks, I made an edit to weaken the wording. I mostly wanted to point out a few characteristics of applause lights that I thought matched:

  • the proposed actions are easier to cheer for on a superficial level
  • arguing for the opposite is difficult, even if it might be correct: "Avoid coming across as dogmatic, elitist, or out-of-touch." inverts to "be okay with coming across as dogmatic, elitist, or out-of-touch"
  • when you try to put them into practice, the easy changes you can make don't address fundamental difficulties, and making sweeping changes has high cost

Looking over it again, saying they are applause lights is saying that the recommendations are entirely vacuous, which is a pretty serious claim I didn't mean to make.
Luke Freeman 🔸
Thanks Thomas! I definitely agree that when you get into the details of some of these they’re certainly not easy and that the framing of some of them could be seen as applause lights.

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'?

I think this is unhelpfully conflating at least three pretty different concepts. 

  • Whether impact can be collapsed to a single dimension when doing moral calculus.
  • Whether morality is objective
  • Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
tamgent
Yeah maybe. Sorry if you found it unhelpful, I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.
Linch
I guess my personal read here is that I don't think Thomas implied that we had perfect predictive prowess, nor did his argument rely upon this assumption. 
tamgent
Yeah I just couldn't understand his comment until I realised that he'd misunderstood the OP as saying it should be a big movement rather than it should be a movement with diverse views that doesn't deter great people for having different views. So I was looking for an explanation and that's what my brain came up with. 
Linch
Thank you, that makes sense!
Thomas Kwa
First off, note that my comment was based on a misunderstanding of "big tent" as "big movement", not "broad spectrum of views". As Linch pointed out, there are three different questions here (and there's a 4th important one):

  1. Whether impact can be collapsed to a single dimension when doing moral calculus.
  2. Whether morality is objective
  3. Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
  4. Whether we can identify groups of people to invest in, given the uncertainty we have

Under my moral views, (1) is basically true. I think morality is not (2) objective. (3) is clearly false. But the important point is that (3) is not necessary to put actions on a unidimensional scale, because we should be maximizing our expected utility with respect to our current best guess. This is consistent with worldview diversification, because it can be justified by unidimensional consequentialism in two ways: maximizing EV under high uncertainty and diminishing returns, and acausal trade / veil of ignorance arguments. Of course, we should be calibrated as to the confidence we have in the best guess of our current cause areas and approaches.

I would state my main point as something like "Many of the points in the OP are easy to cheer for, but do not contain the necessary arguments for why they're good, given that they have large costs". I do believe that there's a tail of talented+dedicated people who will make much more impact than others, but I don't think the second half follows, just that any reallocation of resources requires weighing costs and benefits.

Here are some things I think we agree on:

  • Money has low opportunity cost, so funding community-building at a sufficiently EA-aligned synagogue seems great if we can find one.
  • Before deciding that top community-builders should work at a synagogue, we should make sure it's the highest EV thing they could be doing (taking into account uncertainty and VOI). No

It is plausible to me that there are some low opportunity cost actions that might make it way more likely that certain people, who otherwise wouldn't engage with effective altruism, will work on guesses that are plausible candidates for our top (or close to the top) guesses in the next 50 years.[1]

For example, how existing community organizers manage certain conversations can make a really big difference to some people's lasting impressions of effective altruism. 

Consider a person who comes to a group, is sceptical of the top causes we propose, but uses the ITN framework to make a case for another cause that they believe is more promising by EA lights.

There are many ways to respond to this person. One is to make it clear that you think that this person just hasn't thought about it enough, or they would just come to the same conclusion as existing people in the effective altruism community. Another is to give false encouragement, overstating the extent of your agreement for the sake of making this person, who you disagree with, feel welcome. A skilled community builder with the right mindset can, perhaps, navigate between the above two reactions. They might use this as ... (read more)

Luke Freeman 🔸
Thanks Sophia! That example is very much the kind of thing I’m talking about. IMHO it’s pretty low cost and high value for us to try and communicate in this way (and it would attract more people with a scout mindset, which I think would be very good).
Sophia
🌞

Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it'd help with interpreting it.

So you agree 3 is clearly false. I thought that you thought it was near enough true to not worry about the possibility of being very wrong on a number of things. Good to have cleared that up.

I imagine then our central disagreement lies more in what it looks like once you collapse all that uncertainty on your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That's my best guess at our disagreement - that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. On an aside, I tend to find it mostly not useful as an exercise to do that collapsing thing at such an aggregate level, but maybe I just don't do enough macro analysis, or I'm just not that maximising.

BTW on your areas where you think we agree: I strongly disagree with commitment to EA as a sign of how likely someone is to make impact. Probably it does better than base rate in global population... (read more)

I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty.

This is a great sentence, I will be stealing it :)

However, I think "having good legible epistemics" being sufficient for not coming across as dogmatic is partially wishful thinking. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.

I would be excited to find ways to pattern-match better, without actually sacrificing anything substantive. One thing I've found anecdotally is that a sort of "friendly transparency" works pretty well for this - just be up front about what you believe and why, don't try to hide ideas that might scare people off, be open about the optics on things, ways you're worried they might come across badly, and why those bad impressions are misleading, etc.

Thanks for this post, Luke! 

This touches on many of my personal fears about the community at the moment.

I sincerely hope that anyone who comes across our community with the desire and intent to participate in the project of effective altruism feels that they are welcome and celebrated, whether that looks like volunteering an hour each month, donating whatever they feel they can afford, or doing direct work.

To lose people who have diverse worldviews, abilities and backgrounds would be a shame, and could potentially limit the impact of the community. I'd like to see an increasingly diverse effective altruism community, all bound by seeking to do as much good as we can.

The call to action here resonates – it feels really important and true to me, and I was just thinking yesterday about the same problem.

The way I would frame it is this:

The core of EA, what drives all of us together, is not the conclusions (focus on long term! AI!) -- it's the thought process and principles. Although EA's conclusions are exciting and headline-worthy, pushing them without pushing the process feels to me like it risks hollowing out an important core and turning EA into (more of) a cult, rather than a discipline.

Edit to add re. "celebrate the process" -- A bunch of people have critiqued you for pushing "celebrate all the good actions" since it risks diluting the power of our conclusions, but I think if we frame it as "celebrate and demonstrate the EA process" then that aligns with the point I'm trying to make, and I think works.

Luke Freeman 🔸
Thanks! I really like your framing of both these 😀 

Thank you for this post! I'm a loud-and-proud advocate of the "big tent". It's partly selfish, because I don't have the markers that would make me EA Elite (like multiple Oxbridge degrees or a gazillion dollars). 

What I do have is a persistent desire to steadily hack away at the tremendous amount of suffering in the world, and a solid set of interpersonal skills. So I show up and I make my donations and I do my level best to encourage/uplift/motivate the other folks who might feel the way that I do. If the tent weren't big, I wouldn't be here, and I think that would be a loss. 

Your new GWWC member's EAGx experience is exactly what I'm out here trying to prevent. Here is someone who was interested/engaged enough to go to a conference, and - we've lost them. What a waste! Just a little more care could have helped that person come away willing to continue to engage with EA - or at least not have a negative view of it.

There are lots of folks out there who are working hard on "narrow tower" EA. Hooray for them - they are driving the forward motion of the movement and achieving amazing things. But in my view, we also need the "big tent" folks to make sure the movement stays accessible.

After all, “How can I do the most good, with the resources available to me?” is a question more - certainly not fewer! - people should be encouraged to ask.

We can either become a movement of people who seem dedicated to a particular set of conclusions about the world, or we can become a movement of people united by a shared commitment to using reason and evidence to do the most good we can.

The former is a much smaller group, easier to coordinate our focus, but it's also a group that's more easily dismissed. People might see us as a bunch of nerds[1] who have read too many philosophy papers[2] and who are out of touch with the real world.

The latter is a much bigger group.

 

I'm aware that this is not exactly the central thrust of the piece, but I'd be interested if you could expand on why we might expect the former to be a smaller group than the latter.

 

I agree that a "commitment to using reason and evidence to do the most good we can" is a much better target to aim for than "dedicated to a particular set of conclusions about the world".  However, my sense is that historically there have been many large and rapidly growing groups of people that fit the second description, and not very many of the first.  I think this was true for mechanistic reasons related to how humans work rather than being accidents of h... (read more)

+1 to this.

In fact, I think that it's harder to get a very big (or very fast-growing) set of people to do the "reason and evidence" thing well.  I think that reasoning carefully is very hard, and building a community that reasons well together is very hard.

I am very keen for EA to be about the "reason and evidence" thing, rather than about specific answers.  But in order to do this, I think that we need to grow cautiously (maybe around 30%/year) and in a pretty thoughtful way.

I think that it's harder to get a very big (or very fast-growing) set of people to do the "reason and evidence" thing well. I think that reasoning carefully is very hard, and building a community that reasons well together is very hard.

I agree with this. I think it's even harder to build a community that reasons well together when we come across dogmatically (and we risk cultivating an echo chamber).

Note: I do want to applaud a lot of recent work that the CEA core team is doing to avoid this; the updates to effectivealtruism.org, for example, have helped!

I am very keen for EA to be about the "reason and evidence" thing, rather than about specific answers.  But in order to do this, I think that we need to grow cautiously (maybe around 30%/year) and in a pretty thoughtful way.

A couple of things here:

Firstly, 30%/year is pretty damn fast by most standards!

Secondly, I agree that being thoughtful is essential (that's a key part of my central claim!).

Thirdly, some of the rate of growth is within "our" control (e.g. CEA can control how much it invests in certain community building activities). However, a lot of things aren't. People are noticing as we ramp up activities labelled EA or even... (read more)

MaxDalton
Agree that echo chamber/dogmatism is also a major barrier to epistemics!

"30% seems high by normal standards" - yep, I guess so. But I'm excited about things like GWWC trying to grow much faster than 30%, and I think that's possible.

Agree it's not fully within our control, and that we might not yet be hitting 30%. I think that if we're hitting >35% annual growth, I would begin to favour cutting back on certain sorts of outreach efforts or doing things like increasing the bar for EAG. I wouldn't want GW/GWWC to slow down, but I would want you to begin to point fewer people to EA (at least temporarily, so that we can manage the growth). [Off the cuff take, maybe I'd change my mind on further reflection.]
Guy Raveh
Are there estimates about current or previous growth rates?
MaxDalton
There are some, e.g. here.

my sense is that historically there have been many large and rapidly growing groups of people that fit the second description, and not very many of the first.  I think this was true for mechanistic reasons related to how humans work rather than being accidents of history, and think that recent technological advances may even have exaggerated the effects.

I think that works for many groups, and many subfields/related causes, but not for "effective altruism". 

To unpack this a bit, I think that "AI safety" or "animal welfare" movements could quite possibly get much bigger much more quickly than an "effective altruism" movement defined by a "commitment to using reason and evidence to do the most good we can".

However, when we sell a "commitment to using reason and evidence to do the most good we can" and instead present people with a very narrow set of conclusions, I think we do neither of these things well. Instead we put people off and we undermine our value.

I believe that the value of the EA movement comes from this commitment to using reason and evidence to do the most good we can.

People are hearing about EA. These people could become allies or members of th... (read more)

RobertM
I agree!  That's why I'm surprised by the initial claim in the article, which seems to be saying that we're more likely to be a smaller group if we become ideologically committed to certain object-level conclusions, and a larger group if we instead stay focused on having good epistemics and seeing where that takes us.  It seems like the two should be flipped?
Luke Freeman 🔸
Sorry if the remainder of the comment didn't communicate this clearly enough: I think the "bait and switch" of EA (sell the "EA is a question" but seem to deliver "EA is these specific conclusions") is self-limiting for our total impact. This is self-limiting because:

  • It limits the size of our community (put off people who see it as a bait and switch)
  • It limits the quality of the community (groupthink, echo chambers, overfishing small ponds etc)
  • We lose allies
  • We create enemies
  • Impact is a product of: size (community + allies) * quality (community + allies) - actions of enemies actively working against us.
  • If we decrease the size and quality of community and allies while increasing the size and ferocity of people working against us then we limit our impact.

Does that help clarify?

A core part of the differing intuitions might be because we're thinking about two different timescales.

 It seems intuitively right to me that the "dedicated to a particular set of conclusions about the world" version of effective altruism will grow faster in the short term. I think this might be because conclusions require less nuanced communication, and because they point to more concrete actions that can get people on board faster.

I also have the intuition that a "commitment to using reason and evidence to do the most good we can" (I'd maybe add, "with some proportion of our resources") has the potential to have a larger backing in the long-term. 

I have done a terrible "paint" job (literally used Paint) in purple on one of the diagrams in this post to illustrate what I mean.

There are movement building strategies that end us up on the grey line, which gives us faster growth in the short term (so a bigger tent for a while), but doesn't change our saturation point (we're still at saturation point 1). 

I think that a "broad spectrum of ideas" might mean our end saturation point is higher even if this might require slower growth in the near term. I'... (read more)

An example of a "movement" that had a vaguer, bigger picture idea that got so big it was too commonplace to be a movement might be "the scientific method"? 

Guy Raveh
I think "large groups that reason together on how to achieve some shared values" is something that's so common, that we ignore it. Examples can be democratic countries, cities, communities. Not that this means reasoning about being effective can attract as large a group. But one can hope.

I both relatively strongly agree and strongly disagree with this post. Apologies that my points contradict one another:

Agreement:

  • Yes, community vibes feel weird right now. And I think in the run-up to WWOTF they will only get weirder.
  • Yes, we should be gracious to people who do small things. For me, being an EA is about being more effective or more altruistic with even $10 a month.  

Disagreement:

  • I reckon it's better if we focus on being a smaller highly engaged community rather than a really big one. I still think there should be actual research on this, but so far, much of the impact (SBF, Moskovitz funding GiveWell charities, direct work) has been from very engaged people. I find it compelling that we want similar levels of engagement in future. Do low engagement people become high engagement? I don't know. I don't emotionally enjoy this conclusion, but I can't say it's wrong, even though it clashes with the bullet point I made above.
    • GWWC is clearly a mass movement kind of organisation. I guess they should say "you might want to check out effective altruism", but it's not necessary.
  • I don't think that EA is for everyone. Again this clashes with what I said above, but I think tha
... (read more)

Thanks Nathan. I definitely see the tensions here. Hopefully these clarifications will help :)

I reckon it's better if we focus on being a smaller highly engaged community rather than a really big one.

My central claim isn't about the size of the community, it's about the diversity of EA that we present to the world (and represent within EA) and staying true to the core question not a particular set of conclusions. 

It depends on what you mean by "focus" too. The community will always be some degree of concentric circles of engagement. The total size and relative distribution of engagement will vary depending on what we focus on. My central claim is that the total impact of the community will be higher if the community remains a "big tent" that sticks to the core question of EA. The mechanism is that we create more engagement within each level of engagement, with more allies and fewer adversaries.

 

Do low engagement people become high engagement?

I've never seen someone become highly engaged instantly. I've only seen engagement as something that increases incrementally (sometimes fast, sometimes slow, sometimes it hits a point and tapers off, and sadly sometimes high engagement... (read more)

I reckon it's better if we focus on being a smaller highly engaged community rather than a really big one.

Why not both? Have a big tent with less-engaged people, and a core of more-engaged people.

Also, a lot of people donating small amounts can add up to big amounts.

Luke Freeman 🔸
Agree on both points. I think the concentric circles model still holds well. "Big tent" still applies at each level of engagement though. The best critics in the core will be those who still feel comfortable in the core while disagreeing with lots of people. I highly value people who are at a similar level of engagement but hold very different views to me as they make the best critics.

What is WWOTF?

I reckon it's better if we focus on being a smaller highly engaged community rather than a really big one

Agreed, though it makes sense for Giving What We Can to become a mass movement. I think it'd be good for some people involved in GWWC to join EA, but there's no need to push it too hard. More like let people know about EA and if it resonates with people they'll come over.

but signal that EAGs are mainly for those who are engaged

Maybe, I think there's scope for people to become more engaged over time.

Guy Raveh
"What We Owe the Future", Will MacAskill's new book.
Guy Raveh
I think there are two ways to frame an expansion of the group of people who are engaged with EA through more than donations.

The first, which sits well with your disagreements: we're doing extremely important things which we got into by careful reasoning about our values and impact. More people may cause value drift or dilute the more impactful efforts to make way on the most important problems.

But I think a second one is much more plausible: we're almost surely wrong about some important things. We have biases that stem from who the typical EAs are, where they live, or just the very noisy path that EA has taken so far. While our current work is important, it's also crucial that our ideas are exposed to, and processed by, more people. What's "value drift" in one person's eyes might really be an important correction in another's. What's "dilution" may actually prove to mean a host of new useful perspectives and ideas (among other less useful ones).

Thanks for writing this up Luke! I think you're pointing to some important issues. I also think you and the GWWC team are doing excellent work - I'm really excited to see more people introduced to effective giving!

[Edit to add: Despite my comment below, I still am taking in the datapoints and perspectives that Luke is sharing, and I agree with many of his recommendations. I don't want to go into all of the sub-debates below because I'm focused on other priorities right now (including working on some of the issues Luke raises!).]

However, I worry that you're conflating a few pretty different dimensions, so I downvoted this post.

Here are some things that I think you're pointing to:

  1. "Particular set of conclusions" vs. "commitment to using evidence and reasoning"
  2. Size of the community, which we could in turn split into
    1. Rate of growth of the community
    2. Eventual size of the community
  3. How welcoming we should be/how diverse
    1. [I think you could split this up further.]
  4. In what circumstances, and to what degree, there should be encouragement/pressure to take certain actions, versus just presenting people with options.
  5. How much we should focus on clearly communicating EA to people who aren't yet heavily involved.

This matters because you're sometimes then conflating these dimensions in ways that seem wrong to me (e.g. you say that it's easier to get big with the "evidence and reasoning" framing, but I think the opposite). 

I also interpreted this comment as quite dismissive but I think most of that comes from the fact Max explicitly said he downvoted the post, rather than from the rest of the comment (which seems fine and reasonable).

 I think I naturally interpret a downvote as meaning "I think this post/comment isn't helpful and I generally want to discourage posts/comments like it." That seems pretty harsh in this case, and at odds with the fact Max seems to think the post actually points at some important things worth taking seriously. I also naturally feel a bit concerned about the CEO of CEA seeming to discourage posts which suggest EA should be doing things differently,  especially where they are reasonable and constructive like this one.

This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community). I haven't spent a lot of time on this forum recently so I'm wondering if other people think the norms around up/downvoting are different to my interpretation, and in particular whether Max you meant to use it differently?

[EDIT: I checked the norms on up/downvoting, ... (read more)

This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community).

I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.

Something I've seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited). 

Some people really hate anonymous downvotes. I've heard multiple suggestions that we remove anonymity from votes, or require people to input a reason before downvoting (which is then presumably sent to the author), or just establish an informal culture where downvotes are expected to come with comments.

So I don't think Max was necessarily being impolite here, especially since he and Luke are colleagues who know each other well.  Instead, he was doing something that some people want a lot more of and other people don't want at all. This seems like a matter of competing access needs (different people wanting different things from a shared resource).

In the end, I think it's down to individual users to take their best guess at whether s... (read more)

I think the problem isn't with saying you downvoted a post and why (I personally share the view that people should aim to explain their downvotes).

The problem is the actual reason:

I think you're pointing to some important issues... However, I worry that you're conflating a few pretty different dimensions, so I downvoted this post.

The message that, for me, stands out from this is "If you have an important idea but can't present it perfectly - it's better not to write at all." Which I think most of us would not endorse.

Aaron Gertler 🔸
I didn't get that message at all. If someone tells me they downvoted something I wrote, my default takeaway is "oh, I could have been more clear" or "huh, maybe I need to add something that was missing" — not "yikes, I shouldn't have written this".*

I read Max's comment as "I thought this wasn't written very clearly/got some things wrong", not "I think you shouldn't have written this at all". The latter is, to me, almost the definition of a strong downvote. If someone sees a post they think (a) points to important issues, and (b) gets important things wrong, any of upvote/downvote/decline-to-vote seems reasonable to me.

*This is partly because I've stopped feeling very nervous about Forum posts after years of experience. I know plenty of people who do have the "yikes" reaction. But that's where the users' identities and relationship come into play — I'd feel somewhat differently had Max said the same thing to a new poster.
Guy Raveh
I don't share your view about what a downvote means. However, regardless of what I think, it doesn't actually have any fixed meaning beyond that which people assign to it - so it'd be interesting to have some stats on how people on the forum interpret it.

Most(?) readers won't know who either of them is, not to mention their relationship.
Aaron Gertler 🔸
What does a downvote mean to you? If it means "you shouldn't have written this", what does a strong downvote mean to you? The same thing, but with more emphasis?

Why not create a poll? I would, but I'm not sure exactly which question you'd want asked.

Which brings up another question — to what extent should a comment be written for an author vs. the audience? Max's comment seemed very directed at Luke — it was mostly about the style of Luke's writing and his way of drawing conclusions. Other comments feel more audience-directed.

Personally, I primarily downvote posts/comments where I generally think "reading this post/comment will on average make forum readers be worse at thinking about this problem than if they didn't read this post/comment, assuming that the time spent reading this post/comment is free."

I basically never strong downvote posts unless it's obvious spam or otherwise an extremely bad offender in the "worsens thinking" direction. 

Guy Raveh
It's been over a week so I guess I should answer even if I don't have time for a longer reply.

I think so, but I'm not very confident. I don't think private conversations can exist on a public platform. If it's not a DM, there's always an audience, and in most contexts, I'd expect much of a comment's impact to come from its effects on that audience.

The polls in that specific group look like they have a very small and probably unrepresentative sample size. Though I don't think we'll be able to get a much larger one on such a question, I guess.
MaxDalton
Nice to see you on the Forum again! Thanks for sharing that perspective - that makes sense.

Possibly I was holding this to too high a standard - I think that I held it to a higher standard partly because Luke is also an organization/community leader, and probably I shouldn't have taken that into account. Still, overall my best guess is that this post distracted from the conversation, rather than adding to it (though others clearly disagree). Roughly, I think that the data points/perspectives were important but not particularly novel, and that the conflation of different questions could lead to people coming away more confused, or to making inaccurate inferences. But I agree that this is a pretty high standard, and maybe I should just comment in circumstances like this.

I also think I should have been more careful re seeming to discourage suggestions about EA. I wanted to signal "this particular set of suggestions seems muddled" not "suggestions are bad", but I definitely see how my post above could make people feel more hesitant to share suggestions, and that seems like a mistake on my part. To be clear: I would love feedback and suggestions!

Thanks Max. I agree that there is a lot of ground covered here that isn't broken up into different dimensions, and that it could have been better if broken up as such. I disagree that this entirely undermines the core proposition: (a) whether we like it or not, we are getting more attention; (b) it's particularly important to think carefully about our "shop fronts" with that increased attention; and therefore (c) staying true to "EA as a question" instead of a particular set of conclusions is going to ultimately serve our goals better (this might be our biggest disagreement?).

I'd be very interested to hear you unpack why you think the opposite of "easier to get big with the 'evidence and reasoning' framing". This seems to be a pretty important crux.

MaxDalton
Ah, I think I was actually a bit confused about what the core proposition was, because of the different dimensions. Here's what I think of your claims:

a) 100% agree, this is a very important consideration.

b) Agree that this is important. I think it's also very important to make sure that our shop fronts are accurate, and that we don't importantly distort the real work that we're doing (I expect you agree with this?).

c) I agree with this! Or at least, that's what I'm focused on and want more of. (And I'm also excited about people doing more cause-specific community building to complement that/reach different audiences.)

So maybe I agree with your core thesis!

How easy is it to get big with evidence and reasoning? I want to distinguish a few different worlds:

  1. We just do cause-specific community building, or action-specific community building.
  2. We do community building focused on "EA as a question" with several different causes. Our epistemics are decent but not amazing.
  3. We do community building focused on "EA as a question" with several different causes. We are aiming for the epistemics of core members to be world class (like probably better than the average on this Forum, around the level that I see at some core EA organizations).

I'm most excited about option 3. I think that the thing we're trying to do is really hard, and it would be easy for us to cause harm if we don't think carefully enough. And then I think that we're kind of just about at the level I'd like to see for 3. As we grow, I naturally expect regression to the mean, because we're adding new people who have had less exposure to this type of thinking and may be less inclined to it. And also because I think that groups tend to reason less well as they get older and bigger. So I think that you want to be really careful about growth, and you can't grow that quickly with this approach.

I wonder if you mean something a bit more like 2? I'm not excited about that, but I agree that we coul

Had a bit of time to digest overnight and wanted to clarify this a bit further.

I'm very supportive of #3, including "epistemics of core members to be world class". But I fear that trying to achieve #3 too narrowly (demographics, worldviews, engagement levels etc) might ultimately undermine our goals (putting more people off, leaving the core group without as much support, narrowing worldviews in a way that hurts our epistemics, and not creating enough allies to get the things we want to do done).

I think that nurturing the experience through each level of engagement from outsider to audience through to contributor and core while remaining a "big tent" (worldview and action diverse) will ultimately serve us better than focusing too much on just developing a world class core (I think remaining a "big tent" is a necessary precondition because the world class core won't exist without diversity of ideas/approaches and the support network needed for this core to succeed).

Happy to chat more about this.

Luke Freeman 🔸
Thanks for clarifying! Not much to add now right this moment other than to say that I appreciate you going into detail about this.

Hello Max,

In turn, I strongly downvoted your post.

Luke raised, you say, some "important issues". However, you didn't engage with the substance of those issues. Instead, you complained that he hadn't adequately separated them even though, for my money, they are substantially related. I wouldn't have minded that if you'd then gone on to offer your thoughts on how EA should operate on each of the dimensions you listed, but you did not.

Given this, your comment struck me as unacceptably dismissive, particularly given you are the CEO of CEA. The message it conveys is something like "I will only listen to your concerns if you present them exactly in the format I want" which, again for my money, is not a good message to send.

I'm sorry that it came off as dismissive. I'll edit to make clearer that I appreciate and value the datapoints and perspectives. I am keen to get feedback and suggestions in any form. I take the datapoints and perspectives that Luke shared seriously, and I've discussed lots of these things with him before. Sounds like you might want to share your perspective too? I'll send you a DM.

I viewed the splitting out of different threads as a substantive contribution to the debate, but I'm sorry you didn't see it that way. :) I agree that it would have been better if I'd given my take on all of the dimensions, but I didn't really want to get into all of those threads right now.

nananana.nananana.heyhey.anon
Would you have this same reaction if you saw Luke and Max or GWWC/CEA as equals and peers? Maybe so! It seems like you saw this as the head of CEA talking down to the OP. Max and Luke seem to know each other though; I read Max’s comment as a quick flag between equals that there’s a disagreement here, but writing it on the forum instead of an email means the rest of us get to participate a bit more in the conversation too.
MaxDalton
FWIW, I do think that I reacted to this a bit differently because it's Luke (who I've worked with, and who I view as a peer). I think I would have been more positive/had lower standards for a random community member.
Luke Freeman 🔸
👌

Thank you for this post, I was thinking along similar lines and am grateful that you wrote this down. I would like to see growth in the number of people who make decisions around career, donations and volunteering based on the central EA question, regardless of whether they call themselves EA. More than a billion people live in high income countries alone, and I find it conceivable that 1-10% would be open to making changes in their lives depending on the action they can take.

But for EA to accommodate 10-100 million people, I also assume we need different shopfronts in addition to the backend capabilities (having enough charities that can handle vast amounts of donations, having pipelines for charity entrepreneurship that can help these charities grow, consulting capacity to help existing organizations switch to effectiveness metrics etc).

If we look at the movement from the perspective of scaling to these numbers, I assume we will see a relatively short-term saturation in longtermist cause areas. Currently we don’t seem to be funding restricted in that area, and I don’t see a world where millions working on these problems will be better than thousands. So from this perspective I would like us to take the longer view and build the capacity now for a big EA movement that will be less effective on the margin, while advocating for the most effective choices now in parallel.

I initially found myself nodding along with this post, but I then realised I didn't really understand what point you were trying to make. Here are some things I think you argue for:

  • theoretically, EA could be either big tent or small tent
  • to the extent there is a meaningful distinction, it seems better in general for EA to aim to be big tent
  • Now is a particularly important time to aim for EA to be big tent
  • Here are some things that we could do to help make EA more big tent.

Am I right in thinking these are the core arguments?

A more important concern of mine with this post is that I don't really see any evidence or arguments presented for any of these four things. I think your writing style is nice, but I'm not sure why (apart from something to do with social norms or deference) community builders should update their views in the directions you're advocating for.

I personally hope that EA shifts a bit more in the “big tent” direction, because I think the principles of being rational and analytical about the effectiveness of charitable activity are very important, even though some of the popular charities in the EA community do not really seem effective to me. Like I disagree with the analysis while agreeing on the axioms. And as a result I am still not sure whether I would consider myself an “effective altruist” or not.

I think we can use the EA/Rationality divide to form a home for the philosophy-oriented people in Rationality that doesn't dominate EA culture. Rationality used to totally dominate EA, something that I think has become less true over time, even if it's still pretty prevalent at current levels. Having separate rationality events that people know about, while still ensuring that people devoted to EA have strong rationalist fundamentals (which is a big concern!), seems like the way to go for creating a thriving community.

Thanks for writing this Luke! Much like others have said, there are some sections in this that really resonate with me and others I'm not so sure on. In particular I would offer a different framing on this point:

Celebrate all the good actions[8] that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).

Rather than celebrating actions that have altruistic intent but questionable efficacy, I think we could be more accepting of the idea that some of these things (eg donating blood) make us feel warm fuzzy f... (read more)

Chris Leong
"I think we could be more accepting of the idea that some of these things (eg donating blood) make us feel warm fuzzy feelings, and there's nothing wrong with wanting to feel those feelings and taking actions to achieve them, even if they might not be obviously maximally impactful. Impact is a marathon, not a sprint, and it's important that people who are looking to have a large impact make sustainable choices, including keeping their morale high." Strongly agreed.
Kevin Lacker
I think you may be underestimating the value of giving blood. According to the analysis here: https://forum.effectivealtruism.org/posts/jqCCM3NvrtCYK3uaB/blood-donation-generally-not-that-effective-on-the-margin, a blood donation is still worth about 1/200 of a QALY. That’s still altruistic; it isn’t just warm fuzzies. If someone does not believe the EA community’s analyses of the top charities, we should still encourage them to do things like give blood.

Most of the value of giving blood is in fuzzies. You can buy a QALY from AMF for around $100, so that's $0.50, less than 0.1x US minimum wage if blood donation takes an hour.

If someone doesn't believe the valuation of a QALY it still feels wrong to encourage them to give blood for non-fuzzies reasons. I would encourage them to maximize their utility function, and I don't know what action does that without more context – it might be thinking more about EA, donating to wildlife conservation, or doing any number of things with an altruistic theme.

Jonny Spicer 🔸
Thanks for pointing that out, I didn't realise how effective blood donation was. I think my original point still stands though, if "donating blood" is substituted with a different proxy for something that is sub-maximally effective but feels good.

Also, almost everything anyone does is sub-maximally effective. We simply do not know what maximally effective is. We do think it’s worth trying to figure out our best guesses using the best tools available but we can never know with 100% certainty.

Yeah, I actually called this point out in general in my #8 footnote (“Plus some of these things could (low confidence) make a decent case for considering how low cost they might be.”). I’ve been at EA events or in social contexts with EAs when someone has asserted with great confidence that things like voting and giving blood are pointless. This hasn’t been well received by onlookers (for good reason IMHO) and I think it does more harm than good.

[comment deleted]

Thanks for this post! Just pointing out that the links in footnotes 3 and 4 seem to all be not working

Edit: They were working, just had to do a captcha

[This comment is no longer endorsed by its author]
Guy Raveh
They currently work for me.

Thanks for the post. I agree with most of it.

I think on the one hand, someone participating by donations only may still have huge impact, as we all know what direct impact GiveWell charities can have for relatively small amounts of money. Human lives saved are not to be taken lightly.

On the other hand, I think it's important to deemphasize donations as a basis for the movement. If we seek to cause greater impact through non-marginal change, relying on philanthropy can only be a first step.

Lastly, I don't think Elon Musk is someone we should associate ourselves with... (read more)
