Pat Andriola

Thanks for the reply. Let me just address the things I think are worth responding to.

For future submissions to the Red Teaming Contest, I'd like to see posts that are much more rigorously argued than this. I'm not concerned about whether the arguments are especially novel.

Ouch. My humble suggestion: maybe be more friendly to outsiders, especially ones who are supportive and warm, when your movement has a reputation for being robotic/insular? Or just say, "I don't want anyone who is not part of the movement to comment," because that is the very obvious implication of your statement. (I have no idea how much more rigorous an outsider can be than my post, which I think was thoughtful and well-researched for an outsider!)

However, EA continues to dedicate substantial resources to near-term causes – hundreds of millions of dollars of donations each year! – and this number is only increasing, as GiveWell hopes to direct $1 billion of donations per year. EA continues to highlight its contributions to near-term causes. As a movement, EA is doing fine in this regard.

I totally agree that the movement does not get commensurate societal goodwill in return for its investment in helping people right now. As I wrote: "I know [short-termist work] happens in the movement, and my point isn’t to take away from those gains made to help people in the present." My concern was that, given that relative disconnect, longtermist projects will only exacerbate the issue.

Longtermist projects may be cool, and their utility may be more theoretical than that of near-term projects, but I'm extremely confused by what you mean when you say they don't involve getting your hands dirty (in a way such that near-termist work, such as GiveWell's charity effectiveness research, involves more hands-on work). Effective donations have historically been the main near-termist EA activity, and donating is quite hands-off.

As I said in my post, if I am wrong about this premise, then the point fails. Am I wrong though? You should all discuss. I gave my two cents. Other people seemed to agree/upvote. As a non-member, I can't say. But if there is disagreement, then I think I raised a good point!

This seems likely, and thanks for raising this critique (especially if it hasn't been highlighted before), but what should we do about it? The red-teaming contest is looking for constructive and action-relevant critiques, and I think it wouldn't be that hard to take some time to propose suggestions. The action implied by the post is that we should consider shifting more resources to near-termism, but I don't think that would necessarily be the right move, compared to, e.g., being more thoughtful about social dynamics and making an effort to welcome neartermist perspectives.

Now we are getting into a meta debate about the red teaming contest. I don't care, tbh, because I'm not a part of this community. I contributed this, as I said, because I thought it might be helpful and I support you all. Let's follow the logic:

  1. An outsider offers insights that only an outsider can offer
  2. The outsider cannot offer concrete solutions to those insights because he, by definition, is an outsider and doesn't know enough about insider dynamics to offer solutions
  3. An insider criticizes the outsider for not offering solutions

Hmm. My value-add was #1 above in the hopes that it could spark a discussion. I can't give you answers. But I think giving worthwhile discussion topics is pretty good!

I think the skills would transfer fairly well to something more near-termist, such as community organizing for animal welfare, or running organizations in general. In contrast, if you're doing charity effectiveness research, whether near-termist or longtermist, it can be hard to tell if your work is any good. Now that we have more EAs getting their hands dirty with projects instead of just earning to give, I think we have more experience as a community to execute projects, whether longtermist or near-termist.

This all seems fair to me. If the skills are transferable, then the concern isn't a major one.

I think longtermists are already accounting for the fact that we should discount future people by their likelihood to exist. 

That's good.

This seems defensive lol. My entire point here is that I’m asking whether there is support for this, because I don’t know, because I’m not in the community. It seems like you’re saying “it’s been mentioned but is not necessarily true.” If that’s the case, it would be helpful to say that. If it’s something else, it would be helpful to say that thing!

Thank you so much for this!

I’m really curious about the “nothing I haven’t heard before” in relation to the Social Capital Concern. Have people raised this before? If so, what’s being done about it? As I said, I think it’s the most serious of the four concerns I mentioned, so if it’s empirically supported, what’s the action plan against it?

Fair question! I should’ve been more clear that the implicit premise of the concern is that there has been an overcorrection toward longtermism.

The value-add of EA is distributing utility efficiently (not longtermism per se). If there’s been an overcorrection, then there’s an inefficiency and a recalibration is needed. So this concern is: how hard will it be to snap back to the right calibration? The longer and more thoroughly longtermism dominates, the harder that recalibration will be, because the movement’s muscle memory for near-term work fades.

If the EA movement has perfectly calibrated the amount of longtermism needed in the movement (or if there’s currently not enough longtermism), then this concern can be put aside.