I guess I think it's likely some middle ground? I don't think he has a clear conceptual understanding of moral credit, but I do think he's tuning in to ways in which EA claims may be exaggerating the impact people can have. I find it quite easy to believe that's motivated by some desire to make EA look bad -- but so what? If people who want to make EA look bad make for good researchers hunting for (potentially-substantive) issues, so much the better.
I agree that Wenar's reasoning on this is confused, and that he doesn't have a clear idea of how it's supposed to work.
I do think that he's in some reasonable way gesturing at the core issue, even if he doesn't say very sensible things about how to address that issue.
And yeah, that's the rough shape of the steelman position I have in mind. I wrote a little about my takes here; sorry I've not got anything more comprehensive: https://forum.effectivealtruism.org/posts/rWoT7mABXTfkCdHvr/jp-s-shortform?commentId=ArPTtZQbngqJ6KSMo
Yeah, I agree that audience matters. I would feel bad about these articles being one of the few exposures someone had to EA. (Which means I'd probably end up feeling quite bad about the WIRED article; although possibly I'd end up thinking it was productive in advancing the conversation by giving voice to concerns that many people already felt, even if those concerns ended up substantively incorrect.)
But this letter is targeted at young people in EA. By assumption, they're not going to be ignorant of the basics. And besides any insights I might have got, I think there's something healthy and virtuous about people being able to try on the perspective of "here's how EA seems maybe flawed" -- like even if the precise criticisms aren't quite right, it could help open people to noticing subtle but related flaws. And I think the emotional register of the piece is kind of good for that purpose?
To be clear: I'm far from an unmitigated fan of the letter. I disagree with the conclusions, but even keeping those fixed there are a ton of changes that would make me happier with it overall. I wouldn't want to be sending people the message "hey this is right, you need to read it". But I do feel good about sending the message "hey this has some interesting perspectives, and this also covers reasons why some smart caring people get off the train; if you're trying to deepen your understanding of this EA thing, it's worth a look (and also a look at rebuttals)". Like I think it's valuable to have something in the niche of "canonical external critique", and maybe this isn't in the top slot for that (I remember feeling good about Michael Nielsen's notes), but I think it's up there.
Ok hmm I notice that I'm not especially keen to defend him on the details of any of his views, and my claim is more like "well I found it pretty helpful to read".
Like: I agree that he doesn't show awareness of Parfit, but think that he's pushing a position which (numbers aside) is substantively correct in this particular case, and I hadn't noticed that.
On the nearest test: I've not considered this in contrast to other imaginative exercises. I do think you should do a version without an action/inaction asymmetry. But I liked something about the grounding nature of the exercise, and I thought it was well chosen to prompt EAs to try that kind of grounding in connection with important decisions, since culturally I think there can be a risk of getting caught up in abstractions, in ways that may mean we fail to track things we know at some level.
Yes, I'd be broadly positive about it. I might say something like "I know you're trying to break through so people can hear you, but I think you're being a little unnecessarily antagonistic. Also I think you're making a number of mistakes about their movement (or about what's actually good). I sort of wish you'd been careful to avoid more of those. But despite all that I think this contains a number of pretty insightful takes, and you will be making a gift to them in offering it if they can get past the tone and the errors to appreciate it. I hope they do."
OK, I don't feel a particular desire to fight you on the posterior.
But I do feel a desire to fight you on this particular case. I re-read the letter, and I think there's actually a bunch of great stuff in there, and I think a bunch of people would benefit from reading and thinking about it. I've made an annotated version here, where I include my comments on which parts seem valuable and which seem misguided.
And then I feel bad about whatever policy people are following that is leading this to attract so many downvotes.
I think Leif Wenar's "Open Letter to Young EAs" has significant flaws, but also has a lot going for it, and I would seriously recommend that people who want to think about the ideal shape of EA read it.
I went through the letter making annotations about the bits I thought were good or bad. If you want to see my annotated version, you can do that here. If you want to be able to comment, let me know and I'll quite likely be happy to grant you permission (but I didn't want to set it to "anyone with the link can comment" for fear of it getting overwhelmed).
I think "judging on quality" is not quite the right standard. Especially for criticisms of EA from people outside EA.
I think generally people in EA will be able to hear and benefit from criticisms that are expressed by someone in EA who knows how to frame them in ways that gel with the general worldview. On the other hand, I think it's reasonable on priors to expect there to exist external critics who are failing to perceive some important things that EA gets right, but who nonetheless manage to perceive some important things that EA is missing or getting wrong.
If everyone on the EA forum judges "is this high quality?", it's natural for them to assess that on the dimensions they have a good grasp of -- so they'll see the critic making mistakes and be inclined to dismiss it. The points it might be important for them to hear from the critics will be less obvious, since those are a bit more alien to the EA ontology. But this is liable to contribute to an echo chamber.

And just as at a personal level the most valuable feedback you can get is often of the form "hey, you seem to not be tracking dimension X at all" -- which can initially seem annoying or beside the point but turn out to be super helpful in retrospect -- so at the group level I think EA could really do with being able to sit with external criticism: feel into where the critic is coming from, and where they might be tracking something important, even if they're making other big mistakes. So I'd rather judge on something closer to "is this saying potentially important things that haven't been hashed to death?".
Update: I think I'd actually be less positive on it than this if I thought their antagonism might splash back on other people.
I took that not to be a relevant part of the hypothetical, but actually I'm not so sure. I think for people in the community, policing their mistakes creates a public good (for the community), so I'm not inclined to let error-filled things slide for the sake of the positives. For people outside the community, I'm not so invested in building up the social fabric, so it doesn't seem worth trying to punish the errors, and the right move seems to be something more like straightforwardly looking for the good bits.