
YouGov recently reported the results of a survey (n=1000) suggesting that about “one in five (22%) Americans are familiar with effective altruism.”[1]


We think these results are exceptionally unlikely to be true. Their 22% figure is very similar to the proportion of Americans (19%) who claimed to have heard of effective altruism in our earlier survey (n=6130). But, after conducting appropriate checks, we estimated that much lower percentages are likely to have genuinely heard of EA[2] (2.6% after the most stringent checks, which we speculate is still likely to be somewhat inflated[3]).


Is it possible that these numbers have simply dramatically increased following the FTX scandal?

Fortunately, we have tested this with multiple follow-up surveys explicitly designed with this possibility in mind.[4]

In our most recent survey (conducted October 6th[5]), we estimated that approximately 16% (13.0%-20.4%) of US adults would claim to have heard of EA. Yet, when we apply additional checks to assess whether people appear to have genuinely heard of the term, or have a basic understanding of what it means, this estimate drops to 3% (1.7% to 4.4%), and to approximately 1% with a more stringent level of assessment.[6]
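For readers interested in how the uncertainty around a claimed-awareness figure like this can be quantified, below is a minimal sketch of a Wilson score interval for a survey proportion. It assumes simple random sampling and an illustrative subsample size, so it is not the method behind the intervals reported above (which reflect post-stratification weighting); it is only meant to indicate the rough magnitude of sampling error at this kind of sample size.

```python
from math import sqrt

def wilson_interval(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% by default)."""
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Illustrative assumption: roughly half of the n=1300 sample saw this question
# format (~650 respondents), with 16% claiming to have heard of EA.
low, high = wilson_interval(0.16, 650)
print(f"claimed awareness: 16% (illustrative 95% CI {low:.1%} to {high:.1%})")
```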
 


These results are roughly in line with our earlier polling in May 2022, as well as additional polling we conducted between May 2022 and October 2023, and do not suggest any dramatic increase in awareness of effective altruism, although assessing small changes when base rates are already low is challenging.

We plan to continue to conduct additional surveys, which will allow us to assess possible changes from just before the trial of Sam Bankman-Fried to after the trial.

Attitudes towards EA

YouGov also report that respondents are, even post-FTX, overwhelmingly positive towards EA, with 81% of those who (claim to) have heard of EA approving or strongly approving of EA.

Fortunately, this positive view is broadly in line with our own findings (across different ways of breaking down who has heard of EA and different levels of stringency), which we aim to report on separately at a later date. However, our earlier work did find that awareness of FTX was associated with more negative attitudes towards EA.

Conclusions

The point of this post is not to criticise YouGov in particular. However, we do think it’s worth highlighting that even highly reputable polling organizations should not be assumed to be employing all the additional checks that may be required to understand a particular question. This may apply especially in relation to niche topics like effective altruism, or more technical topics like AI, where additional nuance and checks may be required to assess understanding.


 

  1. ^

    Also see this quick take.

  2. ^

    There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts to claim awareness (even when explicitly instructed not to approach the task this way!). Some people also appear to conflate the term with others - for example, some amalgamation of inclusive fitness/reciprocal altruism appears quite common. 

    For reference, in another check we included, over 12% of people claimed to have heard of the specific term “Globally neutral advocacy”: a term that our research team invented, which returns no Google results as a quoted phrase, and which is not recognised as a term by GPT (a large language model trained on a massive corpus of public and private data). “Globally neutral advocacy” serves as something of a canary for illegitimate claims of having heard of EA, in that it is composed of words people are likely to know, from which they might reasonably think they can infer the meaning of the combined term, or which they might simply mistakenly believe they have encountered.
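    One way such a canary term can be used, sketched below, is to treat claimed awareness of the invented term as a baseline rate of over-claiming and subtract it from claimed awareness of the real term. The figures and the adjustment itself are purely illustrative assumptions, not the procedure used to produce the estimates reported in this post.

```python
def overclaiming_adjusted(claimed_target: float, claimed_canary: float) -> float:
    """Naive correction: treat claimed awareness of an invented 'canary' term as a
    baseline rate of spurious yes-responses and subtract it (floored at zero).
    A real analysis would also propagate sampling uncertainty around both rates."""
    return max(claimed_target - claimed_canary, 0.0)

# Purely illustrative figures, not the estimates reported in this post
print(overclaiming_adjusted(claimed_target=0.16, claimed_canary=0.12))  # -> 0.04
```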

  3. ^

    For example, it is hard to prevent a motivated respondent from googling “effective altruism” in order to provide a reasonable open comment explanation of what effective altruism means. However, we have now implemented additional checks to guard against this.

  4. ^

    The results of some of these have been reported earlier here. Some of these are part of our Pulse survey program.

  5. ^

    n=1300 respondents overall, but respondents were randomly assigned to receive one of two different question formats to assess their awareness of EA. Results were post-stratified to be representative of US adults. This is a smaller sample size than we typically recommend for a nationally representative sample, as this was an intermediate 'pre-test' survey, and hence the error bars around these estimates are wider than they would otherwise be. A larger N would be especially useful for more robustly determining the rates of low-incidence outcomes (such as awareness of niche topics).
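    For illustration, post-stratification reweights the sample so that cells (e.g. age groups) match their known population shares. The sketch below uses a single hypothetical stratification variable with made-up figures; the actual survey weighting may have used different variables and a more involved procedure (e.g. raking).

```python
# Hypothetical post-stratification on a single variable (age group); all figures invented.
population_share = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}  # assumed population shares
sample_share = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}      # assumed sample shares

# Each respondent in a cell gets weight = population share / sample share
weights = {cell: population_share[cell] / sample_share[cell] for cell in population_share}

# Post-stratified estimate of claimed awareness, given hypothetical per-cell rates
claimed_by_cell = {"18-34": 0.22, "35-54": 0.15, "55+": 0.10}
estimate = sum(population_share[cell] * claimed_by_cell[cell] for cell in population_share)
print(weights, f"{estimate:.1%}")
```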

  6. ^

    As an additional check, we also assessed EA awareness using an alternative approach, in which a different subset of the respondents were shown the term and its definition, then asked if they knew the term only, the term and associated ideas, only the ideas, or neither the term nor the ideas. Using this design, approximately 15% claimed knowledge of either the term alone or both the term and the ideas, while only 5% claimed knowledge of both the term and the ideas.

Comments

Strongly agree. Given the question design ("Are you familiar with effective altruism?"), there's clear risk of acquiescence bias - on top of the fundamental social desirability bias of wanting to not appear ignorant to your interviewer.

For sure, and just misunderstanding error could account for a lot of positive responses too - people thinking they know it when they don't.

Agreed. As we note in footnote 2:

There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts to claim awareness (even when explicitly instructed not to approach the task this way!). 

For reference, in another check we included, over 12% of people claimed to have heard of the specific term “Globally neutral advocacy”: a term that our research team invented, which returns no Google results as a quoted phrase, and which is not recognised as a term by GPT (a large language model trained on a massive corpus of public and private data). “Globally neutral advocacy” serves as something of a canary for illegitimate claims of having heard of EA, in that it is composed of words people are likely to know, from which they might reasonably think they can infer the meaning of the combined term, or which they might simply mistakenly believe they have encountered.

I think this is one reason why "effective altruism" gets higher levels of claimed awareness than other fake or low incidence terms (which people would be very unlikely to have encountered).

Outsider here! I dropped out of grad school years ago and was never really involved in the "elite" academic or professional scene to which most EA members belong. The term "effective altruism" was familiar to me since my student days in the early '10s, but I didn't really know much about it until very recently (the whole OpenAI scandal brought it to my attention, and I decided to explore the philosophical roots of it all over the holiday).

 

What are the stringent and permissive criteria for judging that someone has heard of EA?

The full process is described in our earlier post, and included a variety of other checks as well. 

But, in brief, the "stringent" and "permissive" criteria refer to respondents' open comment explanations of what they understand "effective altruism" to mean: whether they displayed clear familiarity with effective altruism, such that it would be very unlikely someone would give that response if they were not genuinely familiar with it (e.g. by referring to using evidence and reason to maximise the amount of good done with your donations or career, or by referring to specific EA figures, books, orgs, events etc.), or whether it was merely probable, based on their comment, that they had heard of effective altruism (e.g. because the response was more vague or less specific).

This is a very helpful post, thanks! 

You write "YouGov also report that respondents are ... overwhelmingly positive towards EA, with 81% of those who (claim to) have heard of EA approving or strongly approving of EA. Fortunately, this positive view is broadly in line with our own findings ... which we aim to report on separately at a later date". 

Could you give an ETA for that? Or could you provide further details? Even if you haven't got data for the Netherlands it'd help us make estimates, which will then inform our strategy.  

Thanks!

We'll definitely be reporting on changes in awareness of and attitudes towards EA in our general reporting of EA Pulse in 2024. I'm not sure if/when we'd do a separate dedicated post on changes in EA awareness/attitudes. We have a long list (this list is very non-exhaustive) of research which is unpublished due to lack of capacity. A couple of items on that list also touch on attitudes/awareness of EA post-FTX, although we have run additional surveys since then.

Feel free to reach out privately if there are specific things it would be helpful to know for EA Netherlands.
