[Note:[1] "Big tent" refers to a group that encourages "a broad spectrum of views among its members". This is not a post arguing for "fast growth" of highly engaged EAs (HEAs), but rather a recommendation that, as we inevitably get more exposure, we try to represent and cultivate our diversity while ensuring we present EA as a question.]
This August, when Will MacAskill launches What We Owe the Future, we will see a spike of interest in longtermism and in effective altruism more broadly. People will form their first impressions – and these will be hard to shake.
After hearing these ideas for the first time, they will be wondering things like:
- Who are these people? (Can I trust them? Are they like me? Do they have an ulterior agenda?)
- What can I do (literally right now, and also in terms of how this might impact my decisions over time)?
- What does this all mean for me and my life?
If we're lucky, they'll investigate these questions. The answers they get matter (and so does their experience finding those answers).
I get the sense that effective altruism is at a crossroads right now. We can either become a movement of people who appear dedicated to a particular set of conclusions about the world, or we can become a movement of people who appear united by a shared commitment to using reason and evidence to do the most good we can.
In the former case, I expect we'd become a much smaller group – one whose focus is easier to coordinate, but also one that's more easily dismissed. People might see us as a bunch of nerds[2] who have read too many philosophy papers[3] and are out of touch with the real world.
In the latter case, I'd expect us to become a much bigger group. I'll admit it's also a group that's harder to organise (people come at the problem from different angles and with varying levels of knowledge). However, if we are to have the impact we want, I'd bet on the latter option.
I don't believe we can – or should – simply tinker on the margins forever, nor try to act as a "shadowy cabal". As we grow, we will start pushing for bigger and more significant changes, and people will notice. We've already seen this with the increased media coverage of things like political campaigns[4] and prominent people who are seen to be EA-adjacent[5].
A lot of these first impressions we won't be able to control. But we can try to spread good memes about EA (inspiring and accurate ones), and we do have some level of control over what happens when people show up at our "shop fronts" (e.g. prominent organisations, local and university groups, conferences etc.).
I recently had a pretty disheartening exchange with a new GWWC member who'd started helping to run a local group and felt "discouraged and embarrassed" at an EAGx conference. They left feeling like they weren't earning enough to be "earning to give" and that they didn't belong in the community if they weren't doing direct work (or didn't have an immediate plan to drop everything and change). They said this "poisoned" their interest in EA.
Experiences like this aren't always easy to prevent, but it's worth trying.
At Giving What We Can, we are aware that we are one of these "shop fronts". So we're currently thinking about how we represent worldview diversity within effective giving and what options we present to first-time donors. Some examples:
- We're focusing on providing easily legible options (e.g. larger organisations with an understandable mission and a strong track record[6], instead of more speculative small grants that are better made by foundations) and easier decisions (e.g. "I want to help people now" or "I want to help future generations").
- We're also cautious about how we talk about the Giving What We Can Pledge, to ensure that it's framed as an invitation for those who want it and not an admonition of those for whom it's not the right fit.
- We're working to ensure that people who first come across EA via effective giving can find their way to the actions that best fit them (e.g. by introducing them to the broader EA community).
- We often cross-promote careers as a way of doing good, but we're careful to do so in a way that doesn't diminish those who aren't in a position to switch careers, and that leaves them feeling good about their donations.
These are just small ways to make effective altruism more accessible and appealing to a wider audience.
Even if we were just trying to reach a small number of highly skilled individuals, we wouldn't want to make life difficult for them by having effective altruism (or longtermism) seem too weird to their family or friends (people are less likely to act when they don't feel supported by their immediate community). Even better, we want people's interest in these ideas, and the actions they take, to spur more positive actions by those in their lives.
I believe we need the kind of effective altruism where:
- A university student says to their parents they're doing a fellowship on AI policy because their effective altruism group told them it'd be a good fit; their parents Google “effective altruism” and end up thrilled[7] (so much so that they end up donating).
- A consultant tells their spouse they're donating to safeguard the long-term future; their spouse looks into it, realises that their own skills in marketing and communications are needed at Charity Entrepreneurship, and applies for a role.
- A rabbi gets their congregation involved in the Jewish Giving Initiative; one of them goes on to take the Giving What We Can Pledge; they read the newsletter and start telling their friends who care about climate change about the Founders Pledge Climate Fund.
- A workplace hosts a talk on effective giving; many people donate to a range of high-impact causes, and one of them goes "down the rabbit hole" and subsequently shifts their career into direct work.
- A journalist covering a local election sees that one of the candidates has an affiliation with effective altruism; they understand the merits and write favourably about the candidate while carefully communicating the importance of the positions the candidate is putting forward.
Many paths to effective altruism. Many positive actions taken.
For this to work, I think we need to:
- Be extra vigilant to ensure that effective altruism remains a "big tent".
- Remain committed to EA as a question.
- Remain committed to worldview diversification.
- Make sure that it is easy for people to get involved and take action.
- Celebrate all the good actions[8] that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).
- Communicate our ideas in high fidelity while remaining brief and to the point (be careful about the memes we spread).
- Avoid coming across as dogmatic, elitist, or out-of-touch.
- Work towards clear, tangible wins that we can point to.
- Keep trying to have difficult conversations without resorting to tribalism.
- Try to empathise with the variety of perspectives people will bring when they come across our ideas and community.
- Develop subfields intentionally, but don't brand them as "effective altruism."
- Keep the focus on effective altruism as a question, not a dogma.
I'm not saying that "anything goes" – that we should drop our standards or stop being bold enough to make strong and unintuitive claims. I think we must continue to be truth-seeking in order to develop a shared understanding of the world and of what we should do to improve it. But we need to keep our minds open to the fact that we're going to be wrong about a lot of things, that new people will bring helpful new perspectives, and that we want the type of effective altruism that attracts many people with a variety of things to bring to the table.
- ^
After reading several comments, I think I could have done better by defining "big tent" at the beginning, so I added this definition and clarification after the post went up.
- ^
I wear the nerd label proudly
- ^
And love me some philosophy papers
- ^
- ^
e.g. Despite it being a stretch: this
- ^
We are aware that this is often fungible with larger donors, but we think that's okay for reasons we'll get into in future posts. We also expect that the type of donor who's interested in fungibility is a great person to get more involved in direct work, so we are working to ensure that these deeper concepts are still presented to donors and that there's a path for people to go "down the rabbit hole".
- ^
As opposed to concerned – I've heard people share that their family or friends were worried about their involvement after looking into it.
- ^
Even beyond our typical recommendations. I've been thinking about "everyday altruism" having a presence within our local EA group (e.g. giving blood together, volunteering to teach ethics in schools, helping people get to voting booths etc.) – not skewing too much this way, but having some presence could be good. As we've seen with Carrick's campaign, doing some legible good within your community is something that outsiders will look for and will judge you on. Plus, some of these things could (low confidence) make a decent case for themselves considering how low-cost they might be.
There's value in giving the average person a broadly positive impression of EA, and I agree with some of the suggested actions. However, I think some of them risk being applause lights – it's easy to say we need to be less elitist, etc., but the easy changes you can make sometimes don't address fundamental difficulties, and sweeping changes have hidden costs when you think about what they actually mean.
This is separate from any concern about whether it's better for EA to be a large or small movement.
Edit: big tent actually means "encompassing a broad spectrum of views", not "big movement". I now think this section has some relevance to the OP but does not centrally address the above point.
As I understand it, this means spending more resources on people who are "less elite" and less committed to maximizing their impact. Some of these people will go on to make career changes and have lots of impact, but it seems clear that their average impact will be lower. Right now, EA has limited community-building capacity, so the opportunity cost is huge. If we allocate more resources to "big tent" efforts, …
I think the following things can both be true:
I didn't read the OP as saying that we should settle for lower-impact actions if there's the potential for higher-impact ones. I read it as saying that we should make it easier for people to find their level – either helping them to reach higher impact over time if for whatever reason they're unable or unwilling to get there straight away, or making space for lower-impact actions if for whatever reason that's what's available.
Some of this will involve shouting out and rewarding less impact…
Thanks for your response. I tend to actually agree with a lot (but not all) of these points, so I totally own that some of this just needs clarification that wouldn't be necessary if I'd been clearer in my original post.
There's a difference between actively recruiting from "less elite" sources and being careful about your shopfronts so that they don't put off would-be effective altruists and create enemies of could-be allies. I'm pointing much more to the latter than the former (though I do think there's value in the former too).
I'm mostly saying we shouldn't shun people for taking a suboptimal action. But also, we should be careful about how confident we are about what is suboptimal or not, and use positive reinforcement of good actions instead of guilting people for not reaching a particular standard. T…
Thanks, this clears up a lot for me.
Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer, known with certainty, to the question "is X or Y more impactful?"
I got this impression from what I understood your main point to be, something like:
There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.
I think there are several assumptions in both of these points that I want to unpack (and disagree with).
On the question of whether there is a unidimensional scale of talented people who will make the most impact: I believe that the EA movement could be wrong about the problems it thinks are most important, and/or the approaches to solving them. In the world where we are wrong, if we deter many groups with important skillsets or approaches that we didn't realise were important because we were overconfident in some problems/solutions, then that's quite bad. Conversely, in the world where we are right, yes maybe we have invested in more places than turned out to be necessary, but the downside risks seem smaller…
(I also felt that the applause lights argument largely didn't hold up and came across as unnecessarily dismissive; I think the comment would have held up better without it.)
I think this is unhelpfully conflating at least three pretty different concepts.
It is plausible to me that there are some low-opportunity-cost actions that might make it far more likely that certain people – people who otherwise wouldn't engage with effective altruism – will work on plausible candidates for our top (or close to the top) guesses over the next 50 years.[1]
For example, how existing community organizers manage certain conversations can make a really big difference to some people's lasting impressions of effective altruism.
Consider a person who comes to a group, is sceptical of the top causes we propose, but uses the ITN framework to make a case for another cause that they believe is more promising by EA lights.
There are many ways to respond to this person. One is to make it clear that you think that this person just hasn't thought about it enough, or they would just come to the same conclusion as existing people in the effective altruism community. Another is to give false encouragement, overstating the extent of your agreement for the sake of making this person, who you disagree with, feel welcome. A skilled community builder with the right mindset can, perhaps, navigate between the above two reactions. They might use this as …
Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it'd help with interpreting it.
So you agree 3 is clearly false. I thought that you thought it was near enough true to not worry about the possibility of being very wrong on a number of things. Good to have cleared that up.
I imagine, then, that our central disagreement lies more in what it looks like once you collapse all that uncertainty onto your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That's my best guess at our disagreement – that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. As an aside, I tend to find it mostly not useful as an exercise to do that collapsing at such an aggregate level, but maybe I just don't do enough macro analysis, or I'm just not that maximising.
BTW, on the areas where you think we agree: I strongly disagree with using commitment to EA as a sign of how likely someone is to make an impact. It probably does better than the base rate in the global population…
This is a great sentence, I will be stealing it :)
However, I think "having good legible epistemics" being sufficient for not coming across as dogmatic is partially wishful thinking. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.
I would be excited to find ways to pattern-match better, without actually sacrificing anything substantive. One thing I've found anecdotally is that a sort of "friendly transparency" works pretty well for this - just be up front about what you believe and why, don't try to hide ideas that might scare people off, be open about the optics on things, ways you're worried they might come across badly, and why those bad impressions are misleading, etc.
Thanks for this post, Luke!
This touches on many of my personal fears about the community in the moment.
I sincerely hope that anyone who comes across our community with the desire and intent to participate in the project of effective altruism feels that they are welcome and celebrated, whether that looks like volunteering an hour each month, donating whatever they feel they can afford, or doing direct work.
To lose people who have diverse worldviews, abilities and backgrounds would be a shame, and could potentially limit the impact of the community. I'd like to see an increasingly diverse effective altruism community, all bound by seeking to do as much good as we can.
The call to action here resonates – it feels really important and true to me, and I was just thinking yesterday about the same problem.
The way I would frame it is this:
The core of EA, what drives all of us together, is not the conclusions (focus on the long term! AI!) – it's the thought process and principles. Although EA's conclusions are exciting and headline-worthy, pushing them without pushing the process feels to me like it risks hollowing out an important core and turning EA into (more of) a cult, rather than a discipline.
Edit to add re. "celebrate the process" – A bunch of people have critiqued you for pushing "celebrate all the good actions" since it risks diluting the power of our conclusions, but I think if we frame it as "celebrate and demonstrate the EA process" then that aligns with the point I'm trying to make, and I think it works.
Thank you for this post! I'm a loud-and-proud advocate of the "big tent". It's partly selfish, because I don't have the markers that would make me EA Elite (like multiple Oxbridge degrees or a gazillion dollars).
What I do have is a persistent desire to steadily hack away at the tremendous amount of suffering in the world, and a solid set of interpersonal skills. So I show up and I make my donations and I do my level best to encourage/uplift/motivate the other folks who might feel the way that I do. If the tent weren't big, I wouldn't be here, and I think that would be a loss.
Your new GWWC member's EAGx experience is exactly what I'm out here trying to prevent. Here is someone who was interested/engaged enough to go to a conference, and – we've lost them. What a waste! Just a little more care could have helped that person come away willing to continue to engage with EA – or at least not have a negative view of it.
There are lots of folks out there who are working hard on "narrow tower" EA. Hooray for them - they are driving the forward motion of the movement and achieving amazing things. But in my view, we also need the "big tent" folks to make sure the movement stays accessible.
After all, “How can I do the most good, with the resources available to me?” is a question more – certainly not fewer! – people should be encouraged to ask.
I'm aware that this is not exactly the central thrust of the piece, but I'd be interested if you could expand on why we might expect the former to be a smaller group than the latter.
I agree that a "commitment to using reason and evidence to do the most good we can" is a much better target to aim for than "dedicated to a particular set of conclusions about the world". However, my sense is that historically there have been many large and rapidly growing groups of people that fit the second description, and not very many that fit the first. I think this was true for mechanistic reasons related to how humans work rather than being accidents of h…
+1 to this.
In fact, I think that it's harder to get a very big (or very fast-growing) set of people to do the "reason and evidence" thing well. I think that reasoning carefully is very hard, and building a community that reasons well together is very hard.
I am very keen for EA to be about the "reason and evidence" thing, rather than about specific answers. But in order to do this, I think that we need to grow cautiously (maybe around 30%/year) and in a pretty thoughtful way.
I agree with this. I think it's even harder to build a community that reasons well together when we come across dogmatically (and we risk cultivating an echo chamber).
Note: I do want to applaud a lot of the recent work that the core CEA team is doing to avoid this – the updates to effectivealtruism.org, for example, have helped!
A couple of things here:
Firstly, 30%/year is pretty damn fast by most standards!
Secondly, I agree that being thoughtful is essential (that's a key part of my central claim!).
Thirdly, some of the rate of growth is within "our" control (e.g. CEA can control how much it invests in certain community building activities). However, a lot of things aren't. People are noticing as we ramp up activities labelled EA or even…
I think that works for many groups, and many subfields/related causes, but not for "effective altruism".
To unpack this a bit: I think that "AI safety" or "animal welfare" movements could quite possibly get much bigger much more quickly than an "effective altruism" movement that is committed to "using reason and evidence to do the most good we can".
However, when we sell a "commitment to using reason and evidence to do the most good we can" but instead present people with a very narrow set of conclusions, I think we do neither of these things well. Instead, we put people off and undermine our value.
I believe that the value of the EA movement comes from this commitment to using reason and evidence to do the most good we can.
People are hearing about EA. These people could become allies or members of th…
A core part of the differing intuitions might be because we're thinking about two different timescales.
It seems intuitively right to me that the "dedicated to a particular set of conclusions about the world" version of effective altruism will grow faster in the short term. I think this might be because conclusions require less nuanced communication and, being more concrete, they suggest concrete actions that can get people on board faster.
I also have the intuition that a "commitment to using reason and evidence to do the most good we can" (I'd maybe add, "with some proportion of our resources") has the potential to have a larger backing in the long-term.
I have done a terrible "paint" job (literally used paint) in purple on one of the diagrams in this post to illustrate what I mean:
[Image omitted: growth curves comparing a faster-growing strategy with a lower saturation point against a slower-growing one with a higher saturation point.]
There are movement-building strategies that land us on the grey line, which gives us faster growth in the short term (so a bigger tent for a while) but doesn't change our saturation point (we're still at saturation point 1).
I think that a "broad spectrum of ideas" might mean our end saturation point is higher, even if this requires slower growth in the near term. I'…
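To make that picture concrete in the image's absence, here's a minimal logistic-growth sketch. This is purely illustrative: the carrying capacities, growth rates, and starting size below are all made-up numbers of my own, not figures anyone in this thread proposed.

```python
import math

def logistic(t: float, k: float, r: float, n0: float = 100.0) -> float:
    """Logistic growth: movement size at year t, starting from n0 people,
    with carrying capacity k (the "saturation point") and growth rate r."""
    return k / (1 + ((k - n0) / n0) * math.exp(-r * t))

# Hypothetical parameters purely for illustration.
def narrow(t):  # "grey line": faster growth, lower saturation point
    return logistic(t, k=50_000, r=0.5)

def broad(t):   # "purple line": slower growth, higher saturation point
    return logistic(t, k=500_000, r=0.25)

for year in (2, 5, 10, 20, 40):
    print(f"year {year:2d}: narrow {narrow(year):9,.0f}   broad {broad(year):9,.0f}")
```

Under these assumed parameters, the fast-growing "narrow" strategy stays ahead for roughly the first two decades before the "broad" strategy overtakes it and saturates nearly an order of magnitude higher – which is the shape of the trade-off being described.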
An example of a "movement" that had a vaguer, bigger picture idea that got so big it was too commonplace to be a movement might be "the scientific method"?
I both relatively strongly agree and strongly disagree with this post. Apologies that my points contradict one another:
Agreement:
Disagreement:
- I reckon it's better if we focus on being a smaller, highly engaged community rather than a really big one. I still think there should be actual research on this, but so far, much of the impact (SBF, Moskovitz funding GiveWell charities, direct work) has been from very engaged people. I find it compelling that we want similar levels of engagement in future. Do low-engagement people become high-engagement ones? I don't know. I don't emotionally enjoy this conclusion, but I can't say it's wrong, even though it clashes with the bullet point I made above.
- GWWC is clearly a mass-movement kind of organisation. I guess they should say "you might want to check out effective altruism", but it's not necessary.
- I don't think that EA is for everyone. Again this clashes with what I said above, but I think tha…
Thanks Nathan. I definitely see the tensions here. Hopefully these clarifications will help :)
My central claim isn't about the size of the community; it's about the diversity of EA that we present to the world (and represent within EA) and staying true to the core question, not a particular set of conclusions.
It depends on what you mean by "focus" too. The community will always be some degree of concentric circles of engagement. The total size and relative distribution of engagement will vary depending on what we focus on. My central claim is that the total impact of the community will be higher if the community remains a "big tent" that sticks to the core question of EA. The mechanism is that we create more engagement within each level of engagement, with more allies and fewer adversaries.
I've never seen someone become highly engaged instantly. I've only seen engagement as something that increases incrementally (sometimes fast, sometimes slow, sometimes it hits a point and tapers off, and sadly sometimes high engagement…
Why not both? Have a big tent with less-engaged people, and a core of more-engaged people.
Also, a lot of people donating small amounts can add up to big amounts.
What is WWOTF?
Agreed, though it makes sense for Giving What We Can to become a mass movement. I think it'd be good for some people involved in GWWC to join EA, but there's no need to push it too hard. More like: let people know about EA, and if it resonates with them, they'll come over.
Maybe, I think there's scope for people to become more engaged over time.
Thanks for writing this up Luke! I think you're pointing to some important issues. I also think you and the GWWC team are doing excellent work - I'm really excited to see more people introduced to effective giving!
[Edit to add: Despite my comment below, I still am taking in the datapoints and perspectives that Luke is sharing, and I agree with many of his recommendations. I don't want to go into all of the sub-debates below because I'm focused on other priorities right now (including working on some of the issues Luke raises!).]
However, I worry that you're conflating a few pretty different dimensions, so I downvoted this post.
Here are some things that I think you're pointing to:
This matters because you're sometimes then conflating these dimensions in ways that seem wrong to me (e.g. you say that it's easier to get big with the "evidence and reasoning" framing, but I think the opposite).
I also interpreted this comment as quite dismissive, but I think most of that comes from the fact that Max explicitly said he downvoted the post, rather than from the rest of the comment (which seems fine and reasonable).
I think I naturally interpret a downvote as meaning "I think this post/comment isn't helpful and I generally want to discourage posts/comments like it." That seems pretty harsh in this case, and at odds with the fact that Max seems to think the post actually points at some important things worth taking seriously. I also naturally feel a bit concerned about the CEO of CEA seeming to discourage posts which suggest EA should be doing things differently, especially when they are reasonable and constructive like this one.
This is a minor point in some ways, but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone in a senior position in the community). I haven't spent a lot of time on this forum recently, so I'm wondering if other people think the norms around up/downvoting are different to my interpretation – and in particular, Max, whether you meant to use it differently?
[EDIT: I checked the norms on up/downvoting, …
I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.
Something I've seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited).
Some people really hate anonymous downvotes. I've heard multiple suggestions that we remove anonymity from votes, or require people to input a reason before downvoting (which is then presumably sent to the author), or just establish an informal culture where downvotes are expected to come with comments.
So I don't think Max was necessarily being impolite here, especially since he and Luke are colleagues who know each other well. Instead, he was doing something that some people want a lot more of and other people don't want at all. This seems like a matter of competing access needs (different people wanting different things from a shared resource).
In the end, I think it's down to individual users to take their best guess at whether s…
I think the problem isn't with saying you downvoted a post and why (I personally share the view that people should aim to explain their downvotes).
The problem is the actual reason:
The message that, for me, stands out from this is "If you have an important idea but can't present it perfectly, it's better not to write at all" – which I think most of us would not endorse.
Personally, I primarily downvote posts/comments where I generally think "reading this post/comment will on average make forum readers worse at thinking about this problem than if they didn't read it, assuming that the time spent reading it is free."
I basically never strong downvote posts unless it's obvious spam or otherwise an extremely bad offender in the "worsens thinking" direction.
Thanks Max. I agree that there is a lot of ground covered here that isn't broken up into different dimensions, and that it could have been better if broken up as such. I disagree that this entirely undermines the core proposition: (a) whether we like it or not, we are getting more attention; (b) it's particularly important to think carefully about our "shop fronts" given that increased attention; and therefore (c) staying true to "EA as a question" instead of a particular set of conclusions is going to ultimately serve our goals better (this might be our biggest disagreement?).
I'd be very interested to hear you unpack why you think the opposite of "easier to get big with the 'evidence and reasoning' framing" is true. This seems to be a pretty important crux.
Had a bit of time to digest overnight and wanted to clarify this a bit further.
I'm very supportive of #3 including "epistemics of core members to be world class". But I fear that trying to achieve #3 too narrowly (demographics, worldviews, engagement levels etc.) might ultimately undermine our goals (putting more people off, leaving the core group without as much support, worldviews becoming too narrow in a way that hurts our epistemics, not creating enough allies to get things we want to do done).
I think that nurturing the experience through each level of engagement – from outsider to audience through to contributor and core – while remaining a "big tent" (worldview- and action-diverse) will ultimately serve us better than focusing too much on just developing a world-class core (I think remaining a "big tent" is a necessary precondition, because the world-class core won't exist without diversity of ideas/approaches and the support network needed for this core to succeed).
Happy to chat more about this.
Hello Max,
In turn, I strongly downvoted your post.
Luke raised, you say, some "important issues". However, you didn't engage with the substance of those issues. Instead, you complained that he hadn't adequately separated them, even though, for my money, they are substantially related. I wouldn't have minded that if you'd then gone on to offer your thoughts on how EA should operate on each of the dimensions you listed, but you did not.
Given this, your comment struck me as unacceptably dismissive, particularly given you are the CEO of CEA. The message it conveys is something like "I will only listen to your concerns if you present them exactly in the format I want", which, again for my money, is not a good message to send.
I'm sorry that it came off as dismissive. I'll edit to make clearer that I appreciate and value the datapoints and perspectives. I am keen to get feedback and suggestions in any form. I take the datapoints and perspectives that Luke shared seriously, and I've discussed lots of these things with him before. Sounds like you might want to share your perspective too? I'll send you a DM.
I viewed the splitting out of different threads as a substantive contribution to the debate, but I'm sorry you didn't see it that way. :) I agree that it would have been better if I'd given my take on all of the dimensions, but I didn't really want to get into all of those threads right now.
Thank you for this post; I was thinking along similar lines and am grateful that you wrote this down. I would like to see growth in the number of people who make decisions around career, donations and volunteering based on the central EA question, regardless of whether they call themselves EAs. More than a billion people live in high-income countries alone, and I find it conceivable that 1-10% would be open to making changes in their lives depending on the action they can take. But for EA to accommodate 10-100 million people, I assume we'd need different shopfronts in addition to the backend capabilities (having enough charities that can handle vast amounts of donations, having pipelines for charity entrepreneurship that can help these charities grow, consulting capacity to help existing organizations switch to effectiveness metrics etc.).

If we look at the movement from the perspective of scaling to these numbers, I assume we will see a relatively short-term saturation in longtermist cause areas. Currently we don't seem to be funding-constrained in that area, and I don't see a world where millions working on these problems will be better than thousands. So from this perspective, I would like us to take the longer view and build the capacity now for a big EA movement that will be less effective on the margin, while advocating for the most effective choices now in parallel.
I initially found myself nodding along with this post, but I then realised I didn't really understand what point you were trying to make. Here are some things I think you argue for:
Am I right in thinking these are the core arguments?
A more important concern of mine with this post is that I don't really see any evidence or arguments presented for any of these four things. I think your writing style is nice, but I'm not sure why (apart from something to do with social norms or deference) community builders should update their views in the directions you're advocating.
I personally hope that EA shifts a bit more in the “big tent” direction, because I think the principles of being rational and analytical about the effectiveness of charitable activity are very important, even though some of the popular charities in the EA community do not really seem effective to me. Like I disagree with the analysis while agreeing on the axioms. And as a result I am still not sure whether I would consider myself an “effective altruist” or not.
I think we can use the EA/Rationality divide to form a home for the philosophy-oriented people in Rationality that doesn't dominate EA culture. Rationality used to totally dominate EA – something that I think has become less true over time, even if it's still pretty prevalent at current levels. Having separate rationality events that people know about, while still ensuring that people devoted to EA have strong rationalist fundamentals (which is a big concern!), seems like the way to go for creating a thriving community.
Thanks for writing this, Luke! Much like others have said, there are some sections in this that really resonate with me and others I'm not so sure about. In particular, I would offer a different framing of this point:
Rather than celebrating actions that have altruistic intent but questionable efficacy, I think we could be more accepting of the idea that some of these things (e.g. donating blood) make us feel warm fuzzy f…
Most of the value of giving blood is in fuzzies. You can buy a QALY from AMF for around $100, so if a blood donation is worth roughly 0.005 QALYs, that's $0.50 of value – less than 0.1x the US minimum wage if the donation takes an hour.
If someone doesn't believe the valuation of a QALY, it still feels wrong to encourage them to give blood for non-fuzzy reasons. I would encourage them to maximize their utility function, and I don't know what action does that without more context – it might be thinking more about EA, donating to wildlife conservation, or doing any number of things with an altruistic theme.
Also, almost everything anyone does is sub-maximally effective. We simply do not know what maximally effective is. We do think it's worth trying to figure out our best guesses using the best tools available, but we can never know with 100% certainty.
Yeah, I actually called this point out in general in footnote 8 ("Plus, some of these things could (low confidence) make a decent case for themselves considering how low-cost they might be."). I've been at EA events or in social contexts with EAs when someone has asserted with great confidence that things like voting and giving blood are pointless. This hasn't been well received by onlookers (for good reason, IMHO) and I think it does more harm than good.
Thanks for this post! Just pointing out that the links in footnotes 3 and 4 all seem not to be working.
Edit: They were working, just had to do a captcha
Thanks for the post. I agree with most of it.
I think, on the one hand, that the impact of someone participating through donations alone may still be huge, as we all know what direct impact GiveWell charities can have for relatively small amounts of money. Human lives saved are not to be taken lightly.
On the other hand, I think it's important to deemphasize donations as a basis for the movement. If we seek to cause greater impact through non-marginal change, relying on philanthropy can only be a first step.
Lastly, I don't think Elon Musk is someone we should associate ourselves with…