Today we're launching a new podcast feed that might be useful to you or someone you know.
It's called Effective Altruism: An Introduction, and it's a carefully chosen selection of ten episodes of The 80,000 Hours Podcast, with various new intros and outros to guide folks through them.
We think it fills a gap in the introductory resources about effective altruism that are already out there. It's a particularly good fit for people who:
- prefer listening over reading, or conversations over essays
- have read about the big central ideas, but want to see how we actually think and talk
- want to get a more nuanced understanding of how the community applies EA principles in real life, as an art rather than a science.
The reason we put this together now is that, as the number of episodes of The 80,000 Hours Podcast has grown, it has become less and less practical to suggest that new subscribers just 'go back and listen through most of our archives.'
We hope EA: An Introduction will guide new subscribers to the best things to listen to first in order to quickly make sense of effective altruist thinking.
Across the ten episodes, we discuss:
- What effective altruism at its core really is
- The strategies for improving the world that are most popular within the effective altruism community, and why they’re popular
- The key disagreements between researchers in the field
- How to ‘think like an effective altruist’
- How you might figure out how to make your biggest contribution to solving the world’s most pressing problems
At the end of each episode we suggest the interviews people should go to next if they want to learn more about each area.
If someone you know wants to understand what 80,000 Hours or effective altruism is all about, and audio content fits into their life better than long essays, hopefully this will prove a great resource to point them to.
It might also be a great fit for local groups, which we've learned are already using episodes of the show in their discussions.
Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well.
The most common objection to our selection is that we didn’t include dedicated episodes on animal welfare or global development. (ADDED: See more discussion of how we plan to deal with this issue here.)
We did seriously consider including episodes with Lewis Bollard and Rachel Glennerster, but i) we decided to focus on our overall worldview and way of thinking rather than on specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both causes are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as in the outro to Holden's episode.
If things go well with this one, we may put together multiple curated feeds, likely differentiated by difficulty level or cause area.
Folks can find it by searching for 'effective altruism' in their podcasting app.
We’re very open to feedback – comment here, or you can email us at podcast@80000hours.org.
— Rob and Keiran
I think the least contentious argument is that 'an introduction' should introduce people to the ideas in the area, not just the ideas that the introducer thinks are most plausible. E.g. a curriculum on political ideology wouldn't focus nearly exclusively on 'your favourite ideology'. A thoughtful educator would include arguments for and against their position and do their best to steelman the alternatives. Even if your favourite ideology were communism and you were doing 'an intro to communism', you would still expect it not to focus just on your favourite strand of communism. Hence, I would have had more sympathy with the original incarnation if it had been billed as "an intro to longtermism".
But, further, there can be good reasons to do things for symbolic or coalitional reasons. To think otherwise implies a rather naive understanding of politics and human interaction. If you want people to support you (you can frame this in terms of moral trade, if you like), sometimes you also need to support and include them. The way I'd like EA to work is "this is what I believe matters most, but if you disagree because of A, B, or C, then you should talk to my friend". This strikes me as coalitional moral trade that benefits all the actors individually (by their own lights). The alternative, and more or less what 80k had been proposing, is "this is what I believe, but I'm not going to tell you what the alternatives are or what you should do if you disagree". This isn't an engagement in moral trade.
I'm pretty worried about a scenario where the different parts of the EA world believe (rightly or wrongly) that others aren't engaging in moral trade and so decide to embark on 'moral trade wars' against each other instead.