I'm submitting the below as part of the Red Teaming Contest. Any prize money won will be donated to the non-profit I founded (the Rikers Debate Project).

--

I think it makes sense to start with who I am, because I’m hoping that a curious outsider’s perspective on EA will offer something somewhat unique.

I consider myself “EA adjacent.” I have a number of good friends (Josh Morrison, Jay Shooster, Alex Silverstein) who are involved in the movement to varying degrees (then again, because I’m only adjacent, I have no idea if any of you will know these names). I’m a rationalist by nature, did high-level college debate with decent success, and was part of the Moneyball-era baseball analytics movement (so I’m comfortable at the intersection of numbers and logic). I think all of those qualities give me a proclivity toward EA. These days I’m a rather boring complex commercial litigator who does a good amount of pro bono work. My greatest civic contribution is founding (with Josh Morrison and others) the Rikers Debate Project, which I consider to be a rather typical, not especially EA-y non-profit.

I’m writing this because I think I’ve absorbed enough about EA via osmosis to provide a halfway intelligible critique.[1] To state my lack of bona fides: I sometimes read EA stuff accidentally, but neither typically nor intentionally. I had to make an account on this forum to post this. I’ve never heard Will MacAskill speak, but I have had multiple people tell me about him. I read Peter Singer before I associated him with EA.

I am generally sympathetic to and supportive of the positions of the EA movement. I mean, who can oppose ambitious people coming together to do efficacious good for the world? It sounds wonderful. In both theory and practice, I care about the movement (from a distance) and hope it succeeds.

The devil is in the details, as always. I offer the following critique in part because I feel guilty that others have done far more to help the world than I have, and the least I can do is share my thoughts to help the movement get better. I caution that the below is based on an outsider's second-hand perspective, so it may (read: will) get details wrong. But my hope is that the spirit is right.

I have heard that there is a focus in EA lately on longterm problems. You can read a bit about longtermism in Simon Bazelon’s post here.[2] Intuitively, longtermism makes sense. EA, as a collective movement, has a finite amount of resources in the present day, yet it occupies a (literally) unique temporal position. Therefore, its current resources, small as a percentage of all resources from here on out, should be used to take advantage of that unique temporal position and maximize future utility returns. To invert the hypo a bit: if the movement could go back in time, then using the money on anything but, say, ending slavery or stopping the Holocaust (or, to be more EA about it, handing out vaccines/cures for the bubonic plague) would be not just foolhardy, but ethically disastrous. Therefore, we must focus on fat-tail future risks that threaten future life.

I like this thought a lot. I think it makes sense. I write to offer four discrete (but at times overlapping) concerns/cautions w/r/t longtermism. At most, the implication of these criticisms is that there may be a current overcorrection toward longtermism that should be corrected somewhat back in the direction of the prior distribution. This doesn’t mean we stop focusing on longterm risks (far from it),[3] but simply that we recalibrate the risk-utility curve and potentially allocate more resources toward present-ish causes. At the least, I think these criticisms should be discussed, and persuasive reasons should be offered for why they don’t merit serious consideration.

  1. Political Capital Concern: There is a compelling case to be made that a wildly successful EA movement could do as much good for the world as almost any other social movement in history.  Even if the movement is only marginally successful, if the precepts underlying the movement are somewhat sound, the utility implications are enormous. 

    To that end, it is incredibly worthwhile for the movement to be politically/socially successful. If the movement dies in the present moment, it can do little to help future life. But because helping future people seems abstract and foreign to the everyday person who wants help right now, and because future life is easily otherized, the movement is susceptible to the criticism that it’s not actually helping anyone. Indeed, people in the present day will, for the most part, consider movements that help future life to be the moral equivalent of not helping anyone (this is, obviously, massively wrong, but it is still an important observation).

    One way to address this political capital concern is to provide direct, tangible utility to present humans. I know this happens in the movement, and my point isn’t to take away from those gains made to help people in the present. Instead, the thought is: when running your utility models, factor this in however you can. Consider that utility translated from EA resources to present life, when done effectively and messaged well,[4] redounds to the benefit of future life as well.
     
  2. Social Capital Concern: This one might be a bit meandering, but I'll get there, hopefully. I think this is probably the most important of my four criticisms. This point is rather meta, and I agree with the point made by Michael Nielsen that it is necessary and healthy for the EA movement to practice consistent self-reflection.[5][6]

    EA proponents should not have to live hermetic lifestyles. In fact, a collective spirit with communal living is healthy and good for the movement. This is not the very tired "EA is a cult" jab, which I find uncompelling.  That being said, the movement should be aware of potential pitfalls that come with this approach, and apply appropriate guardrails.

    Here is my concern regarding the intersection of EA-as-community and longtermism: focusing on longterm problems is probably way more fun than present ones.[7] Longtermist projects seem inherently more big-picture and academic, detached from the boring mundanities of present reality. There is a related concern that longtermism may fetishize future life, in the sense of seeing ourselves as saviors who will be looked back on by billions in the future with gratitude and outright reverence for caring so much about posterity.[8]

    But that aside, if I am correct that longtermist projects are sexier by nature, when you add communal living/organizing to EA, it can probably lead to a lot of people using flimsy models to talk and discuss and theorize and pontificate, as opposed to creating tangible utility, so that they can work on cool projects without having to get their hands too dirty, all while claiming the mantle of not just the same, but greater, do-gooding. “Head to Africa to do charity work? Like a normie who never read Will MacAskill? Buddy, I’m literally saving a billion lives in 2340 right now.” So individual EA actors, given social incentives brought on by increased communal living, will want to find reasons to engage in longtermist projects because doing so will increase their social capital within the community. I don't mean to imply that anyone is doing this consciously/in bad faith,[9] but that doesn't mean it won't happen.

    So this concern takes no issue with either longtermist projects or EA-as-community practices on their own. Instead, the concern is that, in tandem, the latter will make people gravitate toward the former not out of disciplined dedication to greatest-utility principles, but simply because they seem cool. EA followers may be more resistant to animal spirits than the average joe, but they are not immune.
     
  3. Muscle Memory Concern: I have founded an organization that helps people. It is a whole lot of work (and I've done it outside of my already stressful job). Aside from providing your organization's core value-add, you need to worry about its day-to-day running (people, finances, legal, etc.). And it can become increasingly easy for your organization, once it is self-sufficient and no longer consistently facing existential threats, to get distracted and focus on new projects that seem exciting and fresh.

    This is why it's important to have quick muscle memory for getting back to your core value-add (here, the steady conversion of resources into efficiently distributed utility). If you stray and find your organization lacking its typical punch, you want to be able to snap back into it fast; otherwise the movement can become rigid and stale, and you may never find your way back to your former self.

    I think this is a reason to avoid a disproportionate emphasis on longtermist projects. Because the efficacy of longtermist projects is inherently more difficult to calculate with confidence, it can become quite easy to forget how to provide utility quickly and confidently. Basically: if you read enough AI doomposts, you might forget how to build a malaria net.
     
  4. Discount Factor Concern: This one is simple. Future life is less likely to exist than current life. I understand the irony here, since longtermist projects seek to make it more likely that future life exists. But inherently you just have to discount the utility of each individual future life. In the aggregate, there's no question that the utility gains are still enormous. But each individual life should have some discount based on this less-likely-to-exist factor.
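To sketch the discount idea in point 4 with toy numbers (a minimal illustration; every figure is an assumption I made up, not anyone's actual estimate):

```python
# Toy sketch of the discount factor concern. Every number here is an
# illustrative assumption, not a real estimate.
utility_per_life = 1.0   # normalize the value of one present life
annual_survival = 0.999  # assumed chance civilization survives any given year

def discounted_life_value(years_from_now: int) -> float:
    """Expected utility of one future life, discounted only by the
    probability that it ever comes to exist."""
    return utility_per_life * (annual_survival ** years_from_now)

print(discounted_life_value(0))    # 1.0: a present life, no discount
print(discounted_life_value(300))  # ~0.74: a life 300 years out counts ~26% less
# The aggregate can still dwarf the present if future lives vastly outnumber
# present ones; the point is only that each individual future life carries
# some probability-of-existence discount.
```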
     

Anyway, those are my thoughts. I hope that they provide some benefits to the community. And I do greatly appreciate the sacrifices people are making to help others! It's inspiring. Good luck!

 

  1. ^

    I’m not having anyone edit or review this for me; I’d like all my thoughts and mistakes to be my own.

  2. ^

    Simon has a follow-up post where he discusses a common critique of longtermism: uncertainty. I don't address that critique here, since I find it unpersuasive. It's a concern, sure, but one inherent in any longtermist approach. I think it's best to focus here on ideas that aren't so obvious.

  3. ^

    "Overcorrection" here does not mean that there should never have been a correction. There should have been. I take the focus on longtermism, relative to the focus beforehand, to be a welcome development.

  4. ^

    Picking effective politicians affiliated with the movement is obviously very important. I'll attribute the choices on that front so far to, uh, growing pains...

  5. ^

    Michael is correct that what helps make EA an attractive ideology is the idea that self-reflection and openness to criticism are healthy for the organization. That is a wonderful principle for an organization/community committed to improving, rather than simply consolidating power for individual actors.

  6. ^

    I don't want to get sidetracked, but I also have to mention that I tend to agree more with this tweet/thread by Alexander Berger than I do with most of Michael's post. Maybe another post, another day.

  7. ^

    If this is wrong, my entire point fails.

  8. ^

    Hot take: lots of EA people think they're playing Ender's Game where (spoiler alert) they actually save humanity in the end.

  9. ^

    There is a related concern here which is that longtermism projects may be easier to get funding for with weak data, a la tech founders and VC firms in the last few years, but I imagine the movement already considers this seriously.

Comments
[anonymous]

I like this. I was surprised it hasn't received more upvotes yet.

I suspect what's going on is that most people here are focused on the arguments in the post - and quite rightly so, I suppose, for a red teaming contest - and are thinking, "Meh, nothing I haven't heard before." Whereas I'm a bit unusual in that I almost always habitually focus on the way someone presents an argument and the wider context, so I read this and am like, "Omg EA-adjacent person making an effort to share their perspective and offering a sensible critique seemingly from a place of trying to help rather than to take the piss or vent their anger - this stuff is rare and valuable and I'm grateful to you for it (and to the contest organisers) and I want to encourage more of it."

Thank you so much for this!

I’m really curious about the “nothing I haven’t heard before” in relation to the Social Capital Concern. Have people raised this before? If so, what’s being done about it? As I said, I think it’s the most serious of the four I mentioned, so if it’s empirically supported, what’s the action plan against it?

[anonymous]

I think occasionally I hear people argue that others focus on longtermist issues in large part because it's more exciting/creative/positive etc. to think about futuristic utopias, and then some of those people reply "Actually I really miss immediate feedback, tangible results, directly helping people etc., it's really hard to feel motivated by all this abstract stuff", and the discussion kind of ends there.

But the broader Social Capital Concern is something that deserves more serious attention I think. The 'core' of the EA community seems to be pretty longtermist (whether that's because it is sexier, or because these people have thought about / discussed / researched it a lot, whatever reason) and so you would expect this phenomenon of people acting more longtermist than they actually are in order to gain social capital within the community.

Marisa encourages neartermist EAs to hold on to their values here. Luke Freeman encourages EA to stay broad here. Owen Cotton-Barratt says "Global health is important for the epistemic foundations of EA, even for longtermists". [Edit: These are all community leaders (broadly defined), so as well as the specific arguments they make, I think the very fact that they're more prominent members of the community expressing these views is particularly useful when the issue at hand is social capital.]

I also kinda get the sense that many EA orgs/groups cater to the neartermist side of EA mainly out of epistemic humility / collaborative norms etc. rather than personally prioritising the associated causes/projects. E.g. I'm pretty longtermist, but I still make some effort to help the more neartermist EAs find PAs - it felt like that was the default for a new community-focused organisation/project. And I remember some discussion around some of CEA's projects being too focused on longtermism a few years back, and things seem to be more evenly distributed now.

(I think there are probably many more examples of public and private discussion along these lines; apologies for not giving a more comprehensive response - it's hard from this selection to get a sense of whether we're doing enough or even too much to correct for the Social Capital Concern. My intention wasn't actually to be like "Yeah, heard it all before", otherwise I expect I would have included some links to similar discussions to start with. I was more theorising as to what others might be thinking and explaining my own upvote. Sorry for not making this clearer - I'm just re-reading my first comment now and it seems a bit rude!)

> I think it’s the most serious of the four I mentioned, so if it’s empirically supported, what’s the action plan against it?

I don't think "people have mentioned this before" and "it's empirically supported" are the same things! 

This seems defensive lol. My entire thing here is, I’m asking if there is support for this because I don’t know because I’m not in the community. It seems like you’re saying “it’s been mentioned but is not necessarily true.” If that’s the case, it would be helpful to say that. If it’s something else, it would be helpful to say that thing!

I didn't mean to come across as defensive. Communicating across cultural barriers is hard.

I wholeheartedly agree with Holly Morgan here! Thank you for writing this up and for sharing your personal context and perspective in a nuanced way. 

For future submissions to the Red Teaming Contest, I'd like to see posts that are much more rigorously argued than this. I'm not concerned about whether the arguments are especially novel.

My understanding of the key claim of the post is that EA should consider reallocating some resources from longtermist to neartermist causes. This seems plausible – perhaps some types of marginal longtermist donations are predictably ineffective, or it's bad if community members feel that longtermism unfairly has easier access to funding – but I didn't find the four reasons/arguments given in this post particularly compelling.

The section Political Capital Concern appears to claim: if EA as a movement doesn't do anything to help regular near-term causes, people will think that it's not doing anything to help people, and it could die as a movement. I agree that this is possible (though I also think a "longtermism movement" could still be reasonably successful, albeit unlikely to have much membership compared to EA). However, EA continues to dedicate substantial resources to near-term causes – hundreds of millions of dollars of donations each year! – and this number is only increasing, as GiveWell hopes to direct 1 billion dollars of donations per year. EA continues to highlight its contributions to near-term causes. As a movement, EA is doing fine in this regard.

So then, if the EA movement as a whole is good in this regard, who should change their actions based on the political capital concern? I think it's more interesting to examine whether local EA groups, individuals, and organizations should have a direct positive impact on near-term causes for signalling reasons. The post only gives the following recommendation (which I find fairly vague): "Instead, the thought is: when running your utility models, factor this in however you can. Consider that utility translated from EA resources to present life, when done effectively and messaged well,[4] redounds to the benefit of future life as well." However, rededicating resources from longtermism to neartermism has costs to the longtermist projects you're not supporting. How do we navigate these tradeoffs? It would have been great to see examples of this.

The "Social Capital Concern" section writes:

> focusing on longterm problems is probably way more fun than present ones.[7] Longtermist projects seem inherently more big-picture and academic, detached from the boring mundanities of present reality.

This might be true for some people, but I think for most EAs, concrete or near-term ways of helping people have a stronger emotional appeal, all else equal. I would find the inverse of the sentence a lot more convincing, to be honest: "focusing on near-term problems is probably way more fun than ones in the distant future. Near-term projects seem inherently more appealing and helpful, grounded in present-day realities."

> But that aside, if I am correct that longtermist projects are sexier by nature, when you add communal living/organizing to EA, it can probably lead to a lot of people using flimsy models to talk and discuss and theorize and pontificate, as opposed to creating tangible utility, so that they can work on cool projects without having to get their hands too dirty, all while claiming the mantle of not just the same, but greater, do-gooding.

Longtermist projects may be cool, and their utility may be more theoretical than near-term projects, but I'm extremely confused about what you mean when you say they don't involve getting your hands dirty (in a way such that near-termist work, such as GiveWell's charity effectiveness research, involves more hands-on work). Effective donations have historically been the main neartermist EA thing to do, and donating is quite hands-off.

> So individual EA actors, given social incentives brought on by increased communal living, will want to find reasons to engage in longtermist projects because doing so will increase their social capital within the community.

This seems likely, and thanks for raising this critique (especially if it hasn't been highlighted before), but what should we do about it? The red-teaming contest is looking for constructive and action-relevant critiques, and I think it wouldn't be that hard to take some time to propose suggestions. The action implied by the post is that we should consider shifting more resources to near-termism, but I don't think that would necessarily be the right move, compared to, e.g., being more thoughtful about social dynamics and making an effort to welcome neartermist perspectives.

The section on Muscle Memory Concern writes:

> I think this is a reason to avoid a disproportionate emphasis on longtermist projects. Because the efficacy of longtermist projects is inherently more difficult to calculate with confidence, it can become quite easy to forget how to provide utility quickly and confidently.

I don't know – even the most meta of longtermist projects, such as longtermist community building (or, to go another meta level, support for longtermist community building), is quite grounded in metrics and has short feedback loops, such that you can tell if your activities are having an impact – if not an impact on utility across all time, then at least something tangible, such as high-impact career transitions. I think the skills would transfer fairly well over to something more near-termist, such as community organizing for animal welfare, or running organizations in general. In contrast, if you're doing charity effectiveness research, whether near-termist or longtermist, it can be hard to tell if your work is any good. Over time, now that we have more EAs getting their hands dirty with projects instead of just earning to give, we as a community have gained more experience executing projects, whether longtermist or near-termist.

As for the final section, the discount factor concern:

> Future life is less likely to exist than current life. I understand the irony here, since longtermist projects seek to make it more likely that future life exists. But inherently you just have to discount the utility of each individual future life. In the aggregate, there's no question that the utility gains are still enormous. But each individual life should have some discount based on this less-likely-to-exist factor.

I think longtermists are already accounting for the fact that we should discount future people by their likelihood to exist. That said, longtermist expected utility calculations are often more naive than they should be. For example, we often wrongly interpret reducing x-risk from one cause by 1% as reducing x-risk as a whole by 1%, or conflate a 1% x-risk reduction this century with a 1% x-risk reduction across all time.
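To illustrate the first mistake with made-up numbers (a rough sketch only; these are not anyone's actual risk estimates):

```python
# Made-up numbers: three independent sources of existential risk this century.
p_ai, p_bio, p_nuclear = 0.10, 0.03, 0.01

def total_xrisk(p_ai: float, p_bio: float, p_nuclear: float) -> float:
    """Chance that at least one catastrophe occurs, assuming independence."""
    return 1 - (1 - p_ai) * (1 - p_bio) * (1 - p_nuclear)

baseline = total_xrisk(p_ai, p_bio, p_nuclear)        # ~0.1357
reduced = total_xrisk(p_ai * 0.99, p_bio, p_nuclear)  # cut AI risk by 1% (relative)
print((baseline - reduced) / baseline)  # ~0.0071: total x-risk falls ~0.7%, not 1%
```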

(I hope you found this comment informative, but I don't know if I'll respond to this comment, as I already spent an hour writing this and don't know if it was a good use of my time.)

Thanks for the reply. Let me just address the things I think are worth responding to.

> For future submissions to the Red Teaming Contest, I'd like to see posts that are much more rigorously argued than this. I'm not concerned about whether the arguments are especially novel.

Ouch. My humble suggestion: maybe be friendlier to outsiders, especially supportive and warm ones, when your movement has a reputation for being robotic/insular? Or just say "I don't want anyone who is not part of the movement to comment." Because that is the very obvious implication of your statement (I have no idea how much more rigorous an outsider can be than my post, which I think was thoughtful and well-researched for an outsider!).

> However, EA continues to dedicate substantial resources to near-term causes – hundreds of millions of dollars of donations each year! – and this number is only increasing, as GiveWell hopes to direct 1 billion dollars of donations per year. EA continues to highlight its contributions to near-term causes. As a movement, EA is doing fine in this regard.

I totally think the movement does not get commensurate societal goodwill in return for its investment in helping people right now. As I wrote: "I know [short-termist work] happens in the movement, and my point isn’t to take away from those gains made to help people in the present." My concern was that, given that relative disconnect, longtermist projects will only exacerbate the issue.

> Longtermist projects may be cool, and their utility may be more theoretical than near-term projects, but I'm extremely confused about what you mean when you say they don't involve getting your hands dirty (in a way such that near-termist work, such as GiveWell's charity effectiveness research, involves more hands-on work). Effective donations have historically been the main neartermist EA thing to do, and donating is quite hands-off.

As I said in my post, if I am wrong about this premise, then the point fails. Am I wrong though? You should all discuss. I gave my two cents. Other people seemed to agree/upvote. As a non-member, I can't say. But if there is disagreement, then I think I raised a good point!

> This seems likely, and thanks for raising this critique (especially if it hasn't been highlighted before), but what should we do about it? The red-teaming contest is looking for constructive and action-relevant critiques, and I think it wouldn't be that hard to take some time to propose suggestions. The action implied by the post is that we should consider shifting more resources to near-termism, but I don't think that would necessarily be the right move, compared to, e.g., being more thoughtful about social dynamics and making an effort to welcome neartermist perspectives.

Now we are getting into a meta debate about the red teaming contest. I don't care, tbh, because I'm not a part of this community. I contributed this, as I said, because I thought it might be helpful and I support you all. Let's follow the logic:

  1. An outsider offers insights that only an outsider can offer
  2. The outsider cannot offer concrete solutions to those insights because he, by definition, is an outsider and doesn't know enough about insider dynamics to offer solutions
  3. An insider criticizes the outsider for not offering solutions

Hmm. My value-add was #1 above in the hopes that it could spark a discussion. I can't give you answers. But I think giving worthwhile discussion topics is pretty good!

> I think the skills would transfer fairly well over to something more near-termist, such as community organizing for animal welfare, or running organizations in general. In contrast, if you're doing charity effectiveness research, whether near-termist or longtermist, it can be hard to tell if your work is any good. Over time, now that we have more EAs getting their hands dirty with projects instead of just earning to give, we as a community have gained more experience executing projects, whether longtermist or near-termist.

This all seems fair to me. If the skills are transferable, then the concern isn't a great one.

> I think longtermists are already accounting for the fact that we should discount future people by their likelihood to exist.

That's good.

I'm really sorry that my comment was harsher than I intended. I think you've written a witty and incisive critique which raises some important points, but I had raised my standards since this was submitted to the Red Teaming Contest.

I was struck by your paragraph: ‘A wildly successful EA movement could do as much good for the world as almost any other social movement in history. Even if the movement is only marginally successful, if the precepts underlying the movement are somewhat sound, the utility implications are enormous.’

I suspect if EA is to do massive good, this is more likely to come from developing and promoting ideas such as extinction risk reduction that come to be adopted politically, rather than from EA’s direct philanthropy. The biggest wins may come through political channels.

I agree with your arguments against focusing too much on longtermism. 

Re: discount factor, longtermists have zero pure time preference. They still discount for exogenous extinction risk and diminishing marginal utility.

See: https://www.cambridge.org/core/journals/economics-and-philosophy/article/discounting-for-public-policy-a-survey/4CDDF711BF8782F262693F4549B5812E
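For a sketch of the standard decomposition (the Ramsey discounting framework; the notation here is mine):

```latex
% Ramsey-style social discount rate, split into its usual components:
%   delta -- pure time preference (longtermists set this to ~0)
%   ell   -- exogenous extinction hazard rate
%   eta*g -- elasticity of marginal utility times consumption growth
%            (diminishing marginal utility as the future gets richer)
\[
  r \;=\; \underbrace{\delta}_{\approx\, 0}
    \;+\; \underbrace{\ell}_{\text{extinction risk}}
    \;+\; \underbrace{\eta g}_{\text{diminishing marginal utility}}
\]
```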

Regarding the "muscle memory concern" you're talking about the importance of getting back to your "core value-add", but I imagine that many people would assert that longtermism is the "core value-add" these days (though I imagine near-termists would disagree). So I feel that this critique would have been stronger if you'd either a) given a reason why near-termist projects are the "main value add" or b) dodged the issue of what the main value add is and instead made it clear that your worry was about potentially losing certain capabilities that would be costly to relearn.

I guess I'm also confused because you're saying it's dangerous to always be chasing ideas that are new and fresh, but at the same time, you're talking about this resulting in the movement becoming rigid and stale.

Fair question! I should’ve been more clear that the implicit premise of the concern is that there has been an overcorrection toward longtermism.

The value-add of EA is distributing utility efficiently (not longtermism). If there’s been an overcorrection, then there’s an inefficiency and a recalibration is needed. So this concern is: how hard will it be to snap back toward the right calibration? The longer longtermism dominates, and the more thoroughly it does, the harder it will be to regain that muscle memory.

If the EA movement has perfectly calibrated the amount of longtermism needed in the movement (or if there’s currently not enough longtermism), then this concern can be put aside.

Thanks for clarifying. I just thought I should mention that I interpreted "core value-add" as "most important project or aspect", and I had thought you were saying near-termist projects were the most important.

Hmm... I'm not sure how much I agree with you regarding muscle memory, since the Against Malaria Foundation and GiveWell are still running and perhaps as well-funded as ever. Even if EA reduced its donations to AMF by 90%, I suspect that the vast majority of "muscle memory" would be retained, as I suspect continuity is more important than scale.
