Hey everyone, I’m the producer of The 80,000 Hours Podcast, and a few years ago I interviewed AJ Jacobs about his writing, his experiments, and EA. I said then that my guess was that the best approach to making a high-impact TV show was something like: you make Mad Men — same level of writing, directing, and acting — but instead of Madison Avenue in the 1960s, it’s an Open Phil-like org.

So during COVID I wrote a pilot and series outline for a show called Bequest, and I ended up with something like that (in that the characters start an Open Phil-like org by the middle of the season, in a world where EA doesn't exist yet), combined with something like: Breaking Bad, but instead of raising money for his family, Walter White is earning to give. (The story isn't actually all that close to Breaking Bad, and I'm not claiming it's anywhere near that quality, but that's the inspiration!)

My aim was to create a show that’s popular independent of the message, thinking that if folks are super engaged they'll naturally learn about core EA ideas — like how fans of Mad Men can’t help but learn a lot about advertising in the 1960s.

And then in big red letters in my mind I had the warning: “Don’t be preachy, it’s a massive turn-off”. So I decided against exploring core EA ideas until 4 or 5 episodes into a 10-episode show (which, in a perfect world, would end up getting multiple seasons).

Now, actually getting a high-quality TV show made feels close to impossible for any given idea / script, so you’re really just buying a ticket to a raffle. But with Bequest I had a brief glimmer of hope: some impressive folks liked the script and passed it on to industry connections. As far as I know, none of those influential people read it, but the initial interest was promising. I’d also thought that if a TV show seemed unrealistic, maybe I could turn it into a novel instead.

Then FTX imploded — and suddenly a show about a man who’d committed serious crimes and now wanted to donate $1B+ to effective causes seemed a lot less fun and exciting to EAs!

So I shelved it, knowing that even if we could press a button to have Vince Gilligan (the creator of Breaking Bad) make a brilliant version of Bequest for Netflix — there’d be plenty of people in the community who’d vote against that given the SBF scandal. And it’s not the kind of thing anyone should be excited about pushing forward unilaterally.

Anyway, flash forward to today: we’re releasing an 80k podcast episode I hosted with Elizabeth Cox, who founded an independent production company with a ~$2.5M grant from Open Phil. In our conversation, I use Bequest as an example of a totally different approach to doing good via storytelling than the one Elizabeth went with for her new show Ada — so I figured I might as well share the pilot script and the 10-episode outline here for anyone who’d be interested.

[Flagging that the pitch deck / series outline contains massive spoilers for the script — so if you’re up for reading both, I’d recommend starting with the script!]

I’ve lowered my goal slightly since the start of this project, from “make one of the best shows ever!” to “give more than 15 people an entertaining 45-minute read!” — it would be great to hear from you about whether I’m closing in on that new target!

Pilot script link

Pitch deck / series outline link

Email: Keiran.J.Harris [at] gmail [dot] com

Comments



The trailer for Ada makes me think it falls into a media no man's land between extremely low-cost but potentially high-virality creator content, and high-cost, fully produced series that go out on major networks. Interested to hear how Should We are navigating the (to me) inorganic nature of their approach.

Sounds like Bequest was making a speculative bet on the high-cost, fully produced end – which I think is worthwhile. When I think about in-the-water ideas like environmentalism and social justice, my sense is they leveraged media by gently injecting their themes/ideas into independently engaging characters and stories (i.e. the kinds of things for-profit studios would want to produce regardless of whether these ideas appeared in the plot).

Less seriously, you might enjoy my April 1st 2022 post on Impact Island.

Oh wow just read the whole pilot! It's really cool! Definitely an angle on doing the most good that I did not expect.

That's so great to hear — really appreciate it!

I just wanted to say I like this idea

Thanks for sharing this! I really enjoyed the script and the pitch deck - I found the ideas really original and I think it would be exciting to watch. I hope you continue to write creatively because I think you have a real talent for it. 

Thank you so much Amber, what a lovely comment!

I would love to read it if I had the time. But I think you'd have more of an impact by getting NON-EA people to read it rather than people who are already on board?

Yeah I think that'd definitely be true if I had scripts for all 10 episodes, but the plan was to introduce EA ideas from episodes 4-10 — and so there isn't much to learn for anyone in the pilot. The goal was really just to make it as engaging as possible so people would come back for episode 2.

There is one page at the end of the pitch deck on doing good, but it's just a shorter version of this Effective altruism in a nutshell piece I wrote — so I think it'd be better to share that with non-EAs.
