
Could EA benefit from having a "bulldog"? 

That is, a pugnacious (but scrupulous) public advocate of EA and EA-adjacent ideas. In the EA community currently, who might come closest to being something like EA's bulldog? 

More precisely, I'm thinking of a hybrid between, say, Christopher Hitchens and Peter Singer (or perhaps Derek Parfit, for added dryness). A fiery, polemical wit married to a calm, analytical rigor. 

A good non-EA-affiliated example of this style is Alex J. O'Connor, better known as Cosmic Skeptic on YouTube, a philosophy student at Oxford whose confrontational yet nuanced content on atheism and veganism is rather popular now (a good example is his speech on veganism and animal rights). On his podcast, he has interviewed Peter Singer, and he frequently cites Hitchens as an inspiration.

I suspect that many EAs would be skeptical of and cautious about this approach, for various reasons. Some versions of it would appear to cut against EA traits commonly regarded as virtues: considerateness and cooperation (and their encouragement), epistemic modesty (e.g. focusing heavily on uncertainties), compassion in disagreement, respect for norms of agreeable conduct, and so on.

Similarly, this approach seems to carry reputational risks, including the risk of accidentally harming EA's public image. In that sense, adopting it could be a hard-to-reverse decision, resulting in more costs than benefits (William MacAskill discusses this here). Maybe this is reason enough for an advocate of this kind not to wish to be publicly associated with EA, even while supporting and highlighting its cause areas.

On the other hand, perhaps at least some of this style can attract and/or sustain more positive public attention than milder outreach approaches, and perhaps even shape public opinion more effectively.

There's much more to say, but this is already much longer than I intended. I'd love to read any thoughts on this, and/or to be pointed in the direction of previous, related discussion.

Goes without saying we now have the wonderful Bentham's Bulldog. 

I feel like the main role of a bulldog is to fend off the fiery, polemical enemies of a movement. Atheism and veganism (and even AI safety, kind of) have clear opponents; I don't think the same is especially true of EA (as a collection of causes). 

There are people who argue for localism, or the impracticality of measuring impact, but I can't think of the last time I've seen one of those people have a bad influence on EA. The meat industry wants to kill animals; theists want to promote religion; ineffective charities want to... raise funds? Not as directly opposed to what we're doing.

I suppose we did have the Will MacAskill/Giles Fraser debate at one point, though. MacAskill also took on Peter Buffett in an op-ed column. I don't know how he feels about those efforts in retrospect.

We could certainly use more eloquent/impassioned public speakers on EA topics (assuming they are scrupulous, as you say), but I wouldn't think of them as "bulldogs" -- just regular advocates.

This Letter made me feel like there can be organized opposition from ineffective charities.

I don’t think this obscure philosophical critique is evidence that "ineffective" charities will ever realistically form an organized opposition to effective altruism. It isn’t in "ineffective" charities’ interests to criticize or oppose effective altruism; the movement is too small and not influential enough to divert much of their donations away.

The critique has two parts. The first is a critique of moral impartiality, or equal consideration of interests, and it seems intended as a critique of consequentialism and utilitarianism overall. The author seems to be arguing in favour of virtue ethics.

This is too obscure and academic for pretty much any charity to care about or have an opinion on. I think most people find this kind of stuff confusing and boring. It isn’t really something you can mount public opposition over.

The second part of the critique is standard radical leftist fare. Most charities would not align themselves with that sort of critique, unless that is already a defining part of their political beliefs. So, not a winner here, either, in terms of capturing the public interest.

Thank you, Aaron, these are great points!

I know you wrote this five years ago, but I think this is the opposite of what effective altruism needs now. The worrying tendency I see in effective altruism nowadays is for people to circle the wagons in response to criticism.

The current EA community, or at least large parts of it, is somewhat radicalized around certain views, particularly near-term AGI forecasts, AGI safety/alignment, social justice, scientific racism, sexual harassment, and a quirky, home-grown variety of Bayesianism-utilitarianism.

One of the root causes of this radicalization seems to be ideological insularity or the effects of being in a filter bubble/echo chamber. I’d recommend changes to promote more exposure to different viewpoints, and more serious consideration of them.

So, rather than a bulldog (or another one, or more of them), maybe what EA needs is more public debates or dialogues between people with different viewpoints on the topics where EA has become more radicalized over the last five years or so.
