Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.
Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis, suggesting that he agreed with a lot of my criticisms and planned a much more active involvement with the media. If so, this post may be largely redundant - nonetheless, it seems worth having the conversation in public.
CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it was an unofficial policy well before that. In a low-fidelity nutshell, this is the claim that EA ideas are somewhat nuanced and media reporting often isn’t, and so it’s generally not worth pursuing - and often worth actively discouraging - media communication unless a) you’re extremely confident the outlet in question will report the ideas exactly as you describe them and b) you’re qualified to deal with the media.
In practice, because CEA pull many strings, this being CEA policy makes it de facto EA policy. ‘Qualified to deal with the media’ seems to mean ‘CEA-sanctioned’, and I have heard of at least one organisation being denied CEA-directed money in part because it was considered too accommodating of the media. Given that ubiquity, I think it’s worth discussing the policy in more depth. We have five years of results to look back on and, to my knowledge, no further public discussion of the subject. I have four key concerns with the current approach:
- It’s not grounded in research
- It leads to a high proportion of negative coverage for EA
- It assumes a Platonic ideal of EA
- It contributes to the hero-worship/concentration of power in EA
Elaborating each…
Not empirically grounded
The article assumes that low-fidelity spreading of EA ideas is necessarily bad, but doesn’t give any data beyond some very general anecdotes to support this. There’s an obvious trade-off between a small number of people doing something a lot like what we want and a larger number of people doing something a bit like what we want, and it’s very unclear which has the higher expected value.
To see the case for the alternative, we might consider the rise of the animal rights movement in the wake of Peter Singer’s original argument for animal welfare. The former is a philosophically mutated version of the latter - exactly the kind of ‘similar but different’ idea the fidelity model treats as undesirable. Similarly, the emergence of reducetarianism/flexitarianism looks very like what the fidelity model would consider a ‘diluted’ version of the veganism Singer advocated. My sense is that both have nonetheless been strong net positives for animal welfare.
High proportion of negative coverage
If you have influence over a group of supporters and you tell them not to communicate with the media, one result you might anticipate is that a much higher proportion of your media coverage comes from the detractors, who you don’t have influence over. Shutting out the media can also be counterproductive - they’re (mostly) human, and so tend to deal more kindly with people who deal more kindly with them. I have three supporting anecdotes, one admittedly eclipsing the others:
FTXgate
At the time of writing, if you Google ‘effective altruism news’, you still get something like this.
Similarly, if you look at Will’s Tweetstorm decrying SBF’s actions, the majority of responses are angrily negative responses to a movement that consorted with crypto billionaires, as though that’s all EA has ever been about. It seems we’ll have to deal with this being the first impression many people form of the movement for quite some time.
‘The child abuse thing’
A few years ago I was at a public board games event. Sitting at the table to introduce myself with my usual social flair, I decided to mention that I was into EA as a conversation starter. My neighbour’s response was ‘effective altruism… oh right - the child abuse thing.’
I won’t link to the source that informed that delightful conversation, but in brief: a former EA with serious mental health issues had committed suicide after allegedly being sexually harassed by rationalists and/or EAs. She had written an online suicide note that didn’t distinguish between the two groups and made a lot of allegations against the broader communities, most of which I believe demonstrably lacked justification. But evidently, this was passed around widely enough by people who disliked the movement that it was literally the only thing my interlocutor had heard of us.
‘Noisy fuckers’
Shortly after CEEALAR (then the EA Hotel) was founded, I was contacted by an Economist journalist excited about the project who wanted to write about it. Not knowing then of the CEA policy, I invited him to come and view it. I mentioned this to a friend, who informed me of the CEA policy and, on the basis of it, strongly urged us not to engage.* So we backtracked and asked the journalist not to visit. He did anyway, and we literally turned him away at the front door. You can read the article here[1] and form your own opinions - mine is that the last paragraph reads very much like a substitute for the engaged look at what people were doing in the hotel that we might have got, had we not both reduced the substance of the journalist’s story and presumably pissed him off in the process.
*[edit: she pointed out in the comments this was mostly advice from her own experience, not based on CEA policy
edit 2: to be clear, we weren't aware of any pressure from CEA to do so - just that they had published articles advising against engaging with journalists]
Assumes a Platonic ideal of EA
One of the most acclaimed forum posts of all time (karma adjusted) is Effective Altruism is a Question (not an Ideology). It is hard to square the letter of that post, let alone the spirit, with the thought that sharing EA ideas among the masses will distort them into ideas that are ‘related to, but importantly different from, the ideas we want to spread’ - and that this will necessarily be a bad outcome.
The fidelity article describes EA as ‘nuanced’, but honestly, I don’t think it particularly is at its core. People who publicly condemn it don’t seem to have got any factual details wrong. They either have a different emotional response to it, or they’re critical of how it’s put into practice. In the former case, maybe there was nothing we could have done - or maybe more exposure would have made the ideas feel more normal. In the latter case, if we withhold the logic behind these practices, we give up ~8 billion chances for it to be improved by critical scrutiny.
Contributes to the hero-worship and the concentration of power in EA
To my knowledge, there have been three major EA publicity campaigns. The first was the launch of Giving What We Can, which in practice (and to his chagrin) focused attention on Toby Ord more than on his project. The second was the launch around Doing Good Better, which intentionally put Will in the spotlight: during and since, he has given multiple high-profile interviews, and CEA pays for dozens of copies of the book to be given away at every EA conference. The most recent - and arguably ongoing - was the launch around What We Owe the Future, which again intentionally put Will in the spotlight: during and since, he has given multiple high-profile interviews, and CEA seem to intend to pay for dozens of copies to be given away at every future EA conference. To a lesser degree, The Precipice has also been supported, again being given away by the dozen at EA conferences.
Will and Toby are arguably the main founders of the EA movement, so it’s natural to focus on them to some extent - but this effect can still cause a feedback loop that amplifies their opinions beyond what seems epistemically healthy.
Also, identifying the movement with a small number of individuals creates a huge failure mode, which we might be in the middle of facing. If Will’s reputation is tarnished by the above association with FTX, however unfairly, the movement will suffer - to say nothing of the fact that SBF himself was one of the few EAs whose media engagement was encouraged. Even if those associations fade over time, the broader problem remains: highlighting a very small number of thought leaders gives the movement critical points of failure.
It may also be bad for Will himself, since it puts him under an incredible amount of pressure to adhere to middle-of-the-road social norms, some of which he may be uncomfortable with.
Some (over?)generalisations of the concern
I have a couple of broader hypotheses about shortcomings of EA epistemics to which this is related. Both deserve their own post, but since it might be some time before I can write those posts, it seems worth raising them here, for potential side discussion. Needless to say, my own epistemic status on these is ‘tentative’:
- both CEA and the wider EA community should take more seriously the idea that when someone rejects an EA idea (perhaps beyond the foundational notion of ‘optimise do-gooding’), it might be because they have some insight into the ‘question’ of effective altruism - not just because they didn’t understand it. This dismissive attitude seems to inform, for example, the focus on recruiting young people to the movement, which seems to have been justified in part on the grounds that they’re simply less likely to reject our way of doing things.
- the EA community seems far too willing to rely for long periods on rough-and-ready heuristic reasoning, often based on a single speculative argument, on some very important questions which deserve serious research. This is a theme I’ve raised, and seen raised,[2] in various other contexts.
Concrete(ish) suggestions
With all that’s going on at the moment, this might be an inopportune time for everyone to start rushing out to chat to journalists. So I don’t have any specific replacement policy in mind, but I want to propose some ideas:
- Public discussion of the policy between EAs, CEA employees, and the employees of other EA funders - including the latter making explicit to what extent media engagement will be a consideration in their funding decisions
- More explicit acknowledgement from CEA of the epistemic and PR problems of promoting a very small number of thought leaders
- More of an experimental approach to media policy in ways that wouldn’t be too damaging if they went wrong. For example, CEA could start by trying a more liberal policy in languages with relatively small numbers of native speakers
- Some kind of historical/statistical research into the outcomes when other groups (and early EAs) have had to make a similar choice
Acknowledgements
Thanks to Linda Linsefors, Ze Shen, Michal Keda, Shakeel Hashim, Siao Si Looi and Emily Dardaman for feedback and encouragement on this post.
[1]
I’m able to access the article freely, but at least one person said it was paywalled for them, so the paragraph in question is this - though I would suggest reading the rest of the article for context if you can:
'If residents tire of their selfless work, Blackpool’s central pier—home to a “true Romany palmist” and scores of arcade games—is a short stroll away. Visitors are welcome at the hotel (partly to deter “cult-like tendencies”), though prices for non-altruists are set above market rates. None of the residents was keen to talk to The Economist. So far, the new arrivals do not seem to have caused much of a stir in Blackpool. But one hotelier complains that a recent party kept up his guests. “They’re noisy fuckers,” he grumbles of the do-gooders. Keeping the volume down would at least be one easy way for altruists to improve the lives of locals.'
[2]
Luisa’s post starts with the epistemic status ‘In general, I consider it a first step toward understanding this threat from civilizational collapse — not a final or decisive one’, but in conversation she said she had the sense people have been treating it as a concrete answer to the question.
Hey Ben,
Let me clarify a couple of my views first:
At the risk of sounding pedantic, I'm obviously not claiming they've sought to minimise it, but to control it.
I don't think they necessarily should have, and definitely not in a cavalier 'all publicity is good publicity' way. I think they should have experimented more, and been generally less willing to promote a doctrine laid out in a few hundred words based on nothing but anecdote for nearly as long as they did.
I would say more that they've strongly singled out Toby and especially Will for promotion than that they've prevented individuals from becoming public figures (with the caveat on their influence I discuss further down in this comment). For example, the majority of opening and closing talks at most EAGs until at least 2020 seem to have been given by Will or Toby, as well as maybe half the fireside chats - with hardly anyone else seeming to have given more than one. Meanwhile, the promotion they've given What We Owe the Future, Doing Good Better and The Precipice is extreme. Many other books have been written on similar subjects without getting anywhere near that level of support (eg Rees's On the Future, MacFarquhar's Strangers Drowning, Russell's Human Compatible - even Singer's The Most Good You Can Do).
At the risk of sounding pedantic-er, I would restrict myself to the weaker claim that lack of training or official approval should not be a strong deterrent to people who are thinking of speaking to the media about EA-related subjects in a positive way, and that, to the extent that CEA have publicly discouraged this (which, if I understood Shakeel's comment on my original doc correctly, he agrees they have), they should stop doing so (which, again, sounds like what he intends).
Re your arguments:
That might be true. I don't know what their policy is internally, only what guidance they display publicly. That said,
Excluding those campaigns, which are a large part of the phenomenon I'm criticising, I don't see an uptick. Just going by those links, and ignoring 2022 which was mostly WWOTF, there seems to have been a cycle of increased coverage every ~3 years, in 2013, 2015, 2018, and 2021.
I basically agree here. Now is probably a particularly bad time to talk to the media casually - I'm claiming (weakly, and more that these are claims we should be testing) that a) we would be in a better situation now had we had more exposure earlier, and b) once this stuff dies down, a more liberal policy will reduce the risk of such a negative-PR-singularity in future.
Fwiw I interpret this as supporting my case. The level of EA-media engagement in Will's summer media campaign was, as you imply, highly unusual. Had there been a steady stream of articles, some good, some bad, on a wide range of EA topics for years, and 2022's articles then happened to include (but not exclusively be) a large number about Will's book, I think the 'man-bites-dogness' of EA = Will = FTX would have been much reduced.
I don't update in either direction based on this. This sounds like standard marketing logic - you irritate the majority of people who see your ad and engage the minority who matter. I don't see how that makes a case that less marketing means fewer people will buy your product.
I also think this kind of data is very hard to interpret, in much the same way - few people will consciously go through an internal monologue resembling 'ah, I heard about EA in the New York Times, so I'll donate some money to Givewell', just as few people will say 'Ah, having seen that ad for Coca Cola I'll go out and buy a bottle'. It's all about priming, which is incredibly hard to self-report, even if people are perfectly scrupulous about it (which they're probably incentivised not to be). If marketing departments - both of for-profits and nonprofits - have for decades thought this was a good trade-off, I think we need some pretty robust evidence to confidently claim it won't be in our case.
Non-rhetorically-intended question: other than producing bad media, do you know of any instances of 'haters' actually impeding EA activities? Otherwise, this claim seems to be 'more media attention on EA will produce more of both good and bad media', which seems uncontroversial - the question I want us to investigate is in what proportion it does.
I think this is fair (both in that I made a mistake and that that's a reasonable interpretation). I presented it because it's not my interpretation, and it seems a relevant discussion point.
Without knowing what pressures he was under, having possibly told his boss he had been offered an interview, needing to get some article out by a deadline, I don't think this is fair to him.
This is true, but somewhat misleading in that of these orgs, only CEA has meaningfully funded individuals. More subtly but IMO with similar effects, EA Funds have been the group most explicitly funding EA startup projects - the others you mention mostly review existing charities that would counterfactually exist, albeit with a reduced budget absent their support. So CEA has wielded existential power over the orgs most likely to be explicitly EA-aligned. There also seems to have been some amount of deference from the other funders, who sometimes seem to contribute to a project if and only if CEA do so first.
Having said that, as I understand it EA Funds are now separate from CEA, so this is a primarily historical concern. But CEA still run the forum, the EAGs, and have a large hand in EAGxes. So they probably still have an outsized influence on the community comprising the most engaged non-billionaires. Also, they seem to have been much more proactive than the other orgs both in engaging and deterring engagement with the media. Lastly, there's a huge amount of cross-pollination between these groups, with many of the staff at them having come via one of the other orgs, or being employed by one while being a trustee or advisor for another, etc.
I don't buy this. I'm a big fan of Toby, but I don't think he's especially adept at public speaking, and I don't think it's a common view that he is. Sure, his story is 'I started the movement' (at least, I think so. Jonas is disagreeing below), which is worth a lot, but if you look through the thousands of actively engaged EAs I'm sure there are plenty of others with at least as good a human interest angle.
This is perhaps a quibble, but I think considering it as necessarily a 'job' is part of my concern. Often the story is just going to be 'person X did something the media are interested in' and the question is whether they do or should feel pressure not to speak to the media even if they feel like it's a good idea.
I don't think there's a strong case to be made either way yet - again, I want to see more exploration, not sudden adoption of a policy with the opposite emphasis. Fwiw by my count more of the comments on this thread are positive than negative wrt the idea of at least somewhat liberalising the historical policy.