
I want to know how biased towards AI the front page of the Forum is, relative to the stated aims of the effective altruism movement, which has at least 10 areas of interest.

Screenshot of the Forum front page: at least 6 of 11 threads are clearly related to artificial intelligence in one way or another, versus the 10 stated areas of interest.

I admit I am new, and I ask out of curiosity and surprise at the mismatch between my expectations and what I first saw when entering the Forum.


2 Answers

Hi Andreu. The EA Forum definitely has a lot of stuff about AI, because that's the hot topic right now, and it sure seems like a lot of people in the movement these days are focused on AI. But according to a 2024 survey, global poverty and global health is the top-priority cause area for 29% of people in EA, while AI risk is the top priority for 31%, so the two are about tied, at least on that metric. (Another way of averaging the data from the same survey puts global poverty/health slightly ahead of AI risk.)

The last survey to ask where people in EA were donating is from back in 2020, and a lot has changed since then. For what it's worth, 62% of respondents to that survey said they were donating to global health and development charities, 27% to animal welfare, and 18% to AI and "long term" causes.

Interestingly, the same 2020 survey found 16% of people named global poverty as their top cause, while 14% said AI risk, so donations skewed much more towards global poverty than stated priorities did. I would guess that's because, regardless of which cause area you think is more important, it's not clear where to donate if you want to reduce AI risk, whereas for global poverty there are many great options, including GiveWell's top charities. So maybe even now more people are donating to charities related to global poverty than to AI risk, but I don't know of any actual data on that.

By the way, if you click "Customize feed" on the EA Forum homepage, you can reduce or fully hide posts about any particular topic. So, you could see fewer posts on AI or just hide them altogether, if you want. 

Also, if you want to read posts expressing skepticism about AI risk, the forum has an "AI risk skepticism" tag that makes it easy to find posts about that. You have different options for sorting these posts that will show you different stuff. "Top" (the default) will mostly show you posts from years ago. "New & upvoted" will mostly show you posts from within the last year (including some of mine!).

Hi Yarrow, great analysis, that helps me have a clearer picture.

But the surveys and the Forum are two different datasets. It would be relatively easy to track the Forum's sentiment in "real time", or to run statistics on archival data, to see what the trends are and how they map, or don't, onto the survey results.

Still, roughly 50% of top posts being about AI maps quite well onto roughly 1/3 of people naming AI risk as their top concern, once you add the fad factor of AI.
Yarrow Bouchard 🔸
Oh, it’s actually 86-96% who are concerned about AI risk, according to the 2024 survey. 31% was just the number of people who picked it as their top cause area. 
AndreuAndreu
Ah, okay, so the "Forum numbers" are not as bad relative to that, then :) Thanks!

I think most, though no doubt not all, people you'd think of as EA leaders believe AI is the most important cause area to work in, and have thought so for a long time. AI is also more fun to argue about on the internet than global poverty or animal welfare, which drives discussion of it.


But having said all that, there is still plenty of EA funding for global health and development, including from Open Philanthropy, which in fact controls a huge chunk of the EA money in the world. People do and fund animal welfare work too, including Open Phil. If you want to, you can engage with EA work on global development and/or animal welfare and ignore the AI stuff altogether. And even if you decide that the AI stuff is so prominent and, in your view, so wrong that you don't want to call yourself an EA, you don't have to give up on the idea of effective charity. You can try to do the most good you can on global poverty or animal welfare while not identifying as an EA at all. Much, likely most, of the good work in these areas will be done by organisations that don't see themselves as EA anyway, and you can donate to or work for those orgs without engaging with the whole EA scene at all.


Hi David, okay, this is the most enlightening and decision-orienting answer I could get. Thanks!

Indeed, I came to the Forum through a workshop and had a completely inverted expectation: that the leaders of EA were conscious of the AI fad and used that galvanising attention to redirect people to more pressing matters. But your comment, especially the bit "most, though no doubt not all people you'd think of as EA leaders think AI is the most important cause area to work", really concerns me that the direction of the movement is somehow dece... (read more)

David Mathers🔸
My guess is you might find it hard to find EA people in global development stuff who are particularly interested in preserving/expanding cultural diversity. Generally the people who work on that stuff want to prioritize health, income and economic growth. 
Comments (6)

Strongly upvote this!

I just posted an answer. I hope you find it helpful!

Very good point about coming new to EA: maybe you hear about different cause areas in an intro workshop, then land here and wonder if it is the Alignment Forum. It might even feel a bit like a bait-and-switch. If this is a recurring theme for newcomers, it is something that should be looked at. Is anyone tracking the onboarding funnel into EA? If so, one might see people being interested initially, then dropping off when they hit a "wall of AI".

This is concerning if the bait is cool, old-fashioned volunteering and the switch is to AI. See my answer to David's comment: from my background, I interpret AI risk as a fad, not without its merits. It will be relevant when/if robots self-manufacture and also control all the means of production, but realistically that is at least 2-3 human generations away.

A cool read on a related topic, the technosphere:
https://theconversation.com/climate-change-weve-created-a-civilisation-hell-bent-on-destroying-itself-im-terrified-writes-earth-scientist-113055

and the original 2014 coining of the term by Peter Haff:
https://journals.sagepub.com/doi/10.1177/2053019614530575


You might be interested in two post series I put together; so far there are just 3 posts in each.

The first series, "Skepticism about near-term AGI", is general and tries to be accessible and interesting to a newcomer to these debates, although some of the posts may have technical or inaccessible parts.

The post "3 reasons AGI might still be decades away" by Zershaaneh Qureshi on the 80,000 Hours blog is very quick and accessible, and I'd like to add it to the series, but it hasn't been published on the EA Forum. I recommend that post too.

The other series, "Criticism of specific accounts of imminent AGI", is very much inside baseball and might feel unimportant or inaccessible to newcomers to these debates. Each of its 3 posts responds to something very specific in the AGI debates, and if you don't know or care about that very specific thing, you might not care about the post. I think they are all excellent and necessary pieces of criticism; it's just that we're really getting into the weeds at that point, so someone who isn't caught up on the AGI debates might be totally confused. So I'd recommend the "Skepticism about near-term AGI" series first.
To be clear, I think there is absolutely no intention of doing this. EA existed before AI became hot, and many EAs have expressed concerns about the recent hard pivot towards AI. It seems in part, maybe mostly(?), to be a result of funding priorities. In fact, a feature of EA that hopefully makes it more immune than many impact-focused communities to donor influence (though far from totally immune!) is the value placed on epistemics: decisions and priorities should be argued clearly and transparently, including why AI should take priority over other cause areas. Glad to have you engage skeptically on this!
