Today we're launching a new podcast feed that might be useful to you or someone you know.
It's called Effective Altruism: An Introduction, and it's a carefully chosen selection of ten episodes of The 80,000 Hours Podcast, with various new intros and outros to guide folks through them.
We think that it fills a gap in the introductory resources about effective altruism that are already out there. It's a particularly good fit for people who:
- prefer listening over reading, or conversations over essays
- have read about the big central ideas, but want to see how we actually think and talk
- want to get a more nuanced understanding of how the community applies EA principles in real life — as an art rather than a science.
The reason we put this together now is that, as the number of episodes of The 80,000 Hours Podcast has grown, it has become less and less practical to suggest that new subscribers just 'go back and listen through most of our archives.'
We hope EA: An Introduction will guide new subscribers to the best things to listen to first in order to quickly make sense of effective altruist thinking.
Across the ten episodes, we discuss:
- What effective altruism at its core really is
- The strategies for improving the world that are most popular within the effective altruism community, and why they’re popular
- The key disagreements between researchers in the field
- How to ‘think like an effective altruist’
- How you might figure out how to make your biggest contribution to solving the world’s most pressing problems
At the end of each episode we suggest the interviews people should go to next if they want to learn more about each area.
If someone you know wants to get an understanding of what 80,000 Hours or effective altruism are all about, and audio content fits into their life better than long essays, hopefully this will prove a great resource to point them to.
It might also be a great fit for local groups, some of which we've learned are already using episodes of the show as discussion material.
Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well.
The most common objection to our selection is that we didn’t include dedicated episodes on animal welfare or global development. (ADDED: See more discussion of how we plan to deal with this issue here.)
We did seriously consider including episodes with Lewis Bollard and Rachel Glennerster, but i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as the outro to Holden's episode.
If things go well with this one, we may put together multiple curated feeds, likely differentiated by difficulty level or cause area.
Folks can find it by searching for 'effective altruism' in their podcasting app.
We’re very open to feedback – comment here, or you can email us at podcast@80000hours.org.
— Rob and Keiran
TL;DR. I'm very substantially in agreement with Brian's comment. I expand on those concerns, put them in stronger terms, then make a further point about how I'd like 80k to have more of a 'public service broadcasting' role. Because this is quite long, I thought it was better to have it as a new comment.
It strikes me as obviously inappropriate to describe the podcast series as "effective altruism: an introduction" when it focuses almost exclusively on a specific worldview - longtermism. The fact this objection is acknowledged, and that a "10 problem areas" series is also planned, doesn't address it. In addition, and relatedly, it seems mistaken to produce and distribute such a narrow introduction to EA in the first place.
The point of EA is to work out how to do the most good, then do it. There are three target groups one might try to benefit - (1) (far) future lives, (2) near-term humans, (3) (near-term) animals. Given this, one cannot, in good faith, call something an 'introduction' when it focuses almost exclusively on object-level attempts to benefit just one group. At the very least, this does not seem to be in good faith when there is a substantial fraction of the EA community, and people who try to live by EA principles, who do prioritise each of the three.
For people inside effective altruism who do not share 80k's worldview, stating that this is an introduction runs the serious risk of conveying to those people that they are not "real EAs", they are not welcome in the EA community, and their sincere and thoughtful labours and perspectives are unimportant. It does not seem adequately inclusive, welcoming, open-minded, and considerate - values EAs tend to endorse.
For people outside EA who are being introduced to the ideas for the first time, it genuinely fails to introduce them to the relevant possibilities of how they might do the most good, leaving them with a misleading impression of what EA is or can be. It would have been trivially easy to include the Bollard and Glennerster interviews - or something else to represent those who focus on animals or humans in the near-term - and so indicate that those are credible altruistic paths and enthuse those who might take them.
By analogy, if someone taught an "introduction to political ideologies" course which glossed over conservatism and liberalism to focus primarily on (the merits of) socialism, you would assume they were either incompetent or pushing an agenda. Either way, if you hoped that they would cover all the material and do so in an even-handed manner, you would be disappointed.
Given this podcast series is not an introduction to effective altruism, it should not be called "effective altruism: an introduction". More apt might be “effective longtermism: an introduction” or “80k’s opinionated introduction to effective altruism” or “effective altruism: 80k’s perspective”. In all cases, there should be more generous signposting of what the other points of view are and where they could be found.
A good introduction to EA would, at the very least, include a wide range of steel-manned positions about how to do the most good that are held by sincere, thoughtful, individuals aspiring to do the most good. I struggle to see why someone would produce such a narrow introduction unless they thought those holding alternative views were errant and irrelevant fools.
I can imagine someone defending 80k by saying that this is their introduction to effective altruism and there’s nothing to stop someone else writing their own and sharing it (note RobBensinger does this below).
While this is technically true, I do not find it compelling for the following reason. In a cooperative altruistic community, you want to have a division, rather than a duplication, of labour, where people specialise in different tasks. 80k has become, in practice, the primary source of introductory materials to EA: it is the single biggest channel by which people are introduced to effective altruism, with 17% of EA survey respondents saying they first heard about EA through it; it produces much of the introductory content individuals read or listen to. 80k may not have a monopoly on telling people about EA, but it is something like the ‘market leader’.
The way I see it, given 80k’s dominant position, they should fulfil something like a public service broadcasting role for EA, where they strive to be impartial, inclusive, and informative (https://en.wikipedia.org/wiki/Public_broadcasting).
Why? Because they are much better placed to do it than anyone else! In terms any 80k reader will be familiar with, 80k should do this because it is their comparative advantage and they are not easily replaced. Their move to focusing on longtermism has left a gap. A new organisation, Probably Good, has recently stepped into this gap to provide more cause neutral careers advice but I see it as cause for regret that this had to happen.
While I think it would be a good idea if 80k had more of a public service broadcasting model, I don't expect this to happen, seeing as they've consciously moved away from it. It does, however, seem feasible for 80k to be a bit more inclusive - in this case, one very easy thing would be to expand the list from 10 to 12 items so that concerns for animals and near-term humans feature. It would be a huge help to non-longtermist EAs if 80k talked about them a bit (more), and it would be a small additional cost to 80k.
I understand the nationalism example isn't meant to be analogous, but my impression is that this structural objection only really applies when our situation is analogous.
If historically EA paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively 'moved on' from these, contemporary introductions to the field shouldn't feel obliged to cover them extensively, nor treat the relative merits of what they focus on now versus then as an open question.
Yet, however you slice...