
Donate to our Manifund (as of 14.04.25 we have two more days of donation matching up to $10,000). Email me at connor.axiotes or DM me on Twitter for feedback and questions.

Project summary:

  1. To create 'Making God', a cinematic, accessible, feature-length documentary investigating the controversial race toward artificial general intelligence (AGI).
  2. Our audience is largely non-technical, so we will first give them a thorough grounding in recent AI advancements before exploring the race toward the most consequential technology ever created.
  3. Following in the footsteps of influential social documentaries like Blackfish, Seaspiracy, The Social Dilemma, and An Inconvenient Truth, our film will shine a light on the risks associated with the development of AGI.
  4. We are aiming for film festival acceptance/nomination/wins and to be streamed on the world’s biggest streaming platforms.
  5. This will give the non-technical public a strong grounding in the risks from a race to AGI. If successful, hundreds of millions of streaming subscribers will be better informed about the risks and more likely to take action when the moment presents itself.

Rough narrative outline:

  • Making God will begin by introducing an audience with limited technical knowledge to recent advancements in AI. For some, the only AI they may have used or heard of is ChatGPT, which OpenAI launched in November 2022. A documentary like this is neglected: most other AI documentaries assume a lot of prior knowledge.
  • After grounding the audience in AI advancements and the future risks they may pose, we dive deep into the frontier, looking at the individuals driving the race to AGI. We will put a spotlight on the CEOs behind the major AI companies, interview leading experts, and speak to concerned voices in politics and civil society.
  • The documentary will take an objective, truth-seeking approach, with the primary goal of understanding whether we should be worried or optimistic about the coming technological revolution.

Our basic model for why this is needed:

  • We think advanced AI and AGI, if developed correctly and with complementary regulation and governance, can change the world for the better.
  • We are worried that, as things stand, leading AI companies seem to be prioritizing capabilities over safety, international governance on AI cooperation seems to be breaking down, and technical alignment bets might just not work in time.
  • We think that, at minimum, a documentary made for people who do not yet know about the risks, aimed at a huge audience (such as a streaming service's), would help the public better understand those risks. Hundreds of millions of people get their content from streaming services.
  • At best, we might catalyze a Blackfish-, Seaspiracy-, or Inconvenient Truth-style spirit in the audience, so that one day they might protest, contact their legislators, join a movement, and so on.

***

Update [14.04.25]

  • We spent the last couple of weeks filming our “Proof of Concept”, to show funders the quality of our documentary.
  • We have conducted five cinematic interviews with figures from civil society, unions, legal academia, and AI research. We have provided stills; note that they are not yet fully edited, but they do give an indication of the style and quality.
  • To increase the likelihood of film festival acceptance and streaming acquisition thereafter, we need additional funding over the next two months to hire a full production team and equipment. Mike filmed these interviews by himself, and I (Connor) conducted the interviews.
  • Next steps: edit these five interviews into our Proof of Concept video; hire a full production team for new shoots and reshoots where necessary; secure more great interviews; and continue fundraising.

    Over the last two weeks we have conducted five interviews, with more scheduled for the coming weeks.

1) Prof. Rose Chan Loui is the Founding Executive Director of the Lowell Milken Center on Philanthropy and Nonprofits at UCLA.

  • We went to UCLA for a conference on AI and nonprofits, and later filmed Rose in her family home. As we entered, we met her husband and her two dogs. Rose beamed at us and showed us into her office, hoping we wouldn’t find it too messy. It wasn’t!
  • When we sat down to interview, Rose spoke to us about: her work in nonprofits; being dragged into the AI conversation through her legal background; the history of OpenAI as a nonprofit; Delaware Public Benefit Corporations and Anthropic; Sam Altman’s firing and OpenAI board hopes and worries; the worries she has for her family around the development of AGI.

  • Across three separate 80,000 Hours podcast episodes, she has amassed millions of views for her expertise on the nonprofit structures of AI labs.

2) Prof. Ellen Aprill is a Senior Scholar in Residence and taught Political Activities of Nonprofit Organizations at UCLA in 2024.

  • Ellen retired last year but came back to UCLA to work at the center focused on nonprofits. Like Rose, she’d been dragged into the world of AI and is worried about its implications for the world. As we arrived at her drive, Ellen ushered us into her family home. We were briefly introduced to her husband, Sunny, a retired lawyer. Ellen asked if we wouldn’t mind setting up while she called a student to go through their work. Even after retiring, it seemed Ellen hadn’t lost her enthusiasm for teaching and supporting students.
  • Ellen spoke to us in her home office about: the incorporated missions of nonprofits; valuing nonprofit AI research labs; her worries about the future; and her optimism about humanity.

3) Holly Elmore is the Executive Director of Pause AI US.

  • She spoke to us in Dolores Park at a leafleting session where she and other Pause AI volunteers gave their spare time to educate the public on risks from AI. Passers-by seemed interested in what they had to say, but most smiled and carried on with their day. Holly and the other volunteers spoke to us about their reasons for protesting; their solution to mitigating risks from AI is a ‘pause’ on its development.

4) Eli Lifland is a Founding Researcher at the AI Futures Project and a top forecaster.

  • He spoke to us about: his work forecasting the development of AI, and in particular artificial superintelligence; a brief history of deep learning; what LLMs are; his work on AI 2027, predicting when ASI might flood the remote-job economy; his worries about AGI lab race dynamics and a race to the bottom on AI safety; US–China race dynamics; his hope that we slow down a bit to get this right; and the burden of predicting possible catastrophe while the rest of the world seems unaware and unprepared.

5) Heather-Rose is the Government Affairs Lead in LA for the labor union SAG-AFTRA.

  • She spoke to us about: her political campaigning to educate members of Congress on risks from AI; her role on SAG-AFTRA’s New Technology Committee, which focuses on protecting actors' rights against AI misuse; how she became interested in AI safety in 2020 and has since advocated for regulations on AI-generated content and deepfakes; and her concerns about job loss.

Civil Society

  • We have also been interviewing the general public about their views on AI and their worries and hopes.

Upcoming Interviews

  1. Cristina Criddle, Financial Times tech correspondent covering AI (recently broke the Financial Times story about OpenAI allowing days of safety-testing for new models rather than months).
  2. David Duvenaud, Former Anthropic Team Lead.
  3. John Sherman, Dads Against AI and podcaster.

Potential Interviews

  1. Jack Clark (we are in touch with Anthropic Press Team).
  2. Gary Marcus (said to get back to him in a couple weeks).

Interviews We’d Love

  1. Kelsey Piper, Vox.
  2. Daniel Kokotajlo, formerly OpenAI.
  3. AI Lab employees.
  4. Lab whistleblowers.
  5. Civil society leaders.

Points to Note:

  • The legal interviews focus on Sam Altman and OpenAI because the professors are legal experts in nonprofit reorganization. Future interviews will focus on other AGI labs too, as with the Eli interview, which covers the other players in the field.
  • The stills are from interviews shot with a one-man crew (just Mike, our Director). Stills from future interviews will be even more cinematic with a full (or even half) crew. This is what we need the immediate next round of funding for.

    ***

Project Goals:

  1. We are aiming for film festival acceptance/nomination/wins and to be streamed on the world’s biggest streaming platforms, like Netflix, Amazon Prime, and Apple TV+.
  2. To give the non-technical public a strong grounding in the risks from a race to AGI.
  3. If successful, hundreds of millions of streaming subscribers will be better informed about the risks and more likely to take action when the moment presents itself.
  4. As timelines shorten, technical alignment bets look less likely to pay off in time and international governance mechanisms seem to be breaking down, so our goal is to influence public opinion on the risks so that people might take political or social action before the arrival of AGI. If we do this right, we have a real chance of moving the needle.

Some rough numbers:

  • Festival Circuit: We are targeting acceptance at major film festivals including Sundance, SXSW, and Toronto International Film Festival, which have acceptance rates of 1-3%.
  • Streaming Acquisition: Following festival exposure, we aim for acquisition by Netflix, Amazon Prime, or Apple TV+, platforms with 200M+ subscribers collectively. Based on comparable documentary performance, we estimate:
    • Conservative scenario: 8M viewers (4% platform reach)
    • Moderate scenario: 15M viewers (7.5% platform reach)
    • Optimistic scenario: 25M+ viewers (12.5%+ platform reach)
  • Impact Metrics: We will track:
    • Viewership numbers across platforms
    • Pre/post viewing surveys on AI risk understanding
    • Media coverage and policy discussions citing the documentary
    • Changes in public opinion polling on AI regulation
  • Theory of Impact: If successful, we will create an informed constituency capable of supporting responsible AI development policies during potentially critical decision points in the next 2-5 years.

How will this funding be used?

In order to have a serious chance of being picked up by streaming services, the production quality and entertainment value have to be high. As such, we need the following funding over the next three months to create a product of that standard.

Accommodation [Total: £30,000]

  • AirBnB: £10,000 a month for 3 months (dependent on locations for filming and accommodating crew).

Travel [Total: £13,500]

  • Car Hire: £6,000 for 3 months.
  • Flights: £4,500 for 3 months (to move us and crews around to locations in California, D.C., and New York).
  • Misc: (trains, cabs, etc) £3,000 for 3 months.

Equipment [Total: £41,000]

  • Purchasing Filming Equipment: £5,000
  • Hiring Filming Equipment: £36,000 (18 shooting days)

Production Crew (30 Days of Day Rate) [Total: £87,000]

  • Director of Photography: £19,500
  • Sound Recordist: £18,000
  • Camera Assistant/Gaffer: £13,500
  • Additional Crew: £36,000

Director (3 Months): [Total: £15,000]

Executive Producer (3 months): [Total: £15,000]

Misc: £25,000 (to cover unforeseen costs, legal advice, insurance, and other practical necessities).

TOTAL: £226,500 ($293,046)
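
For anyone checking the arithmetic, here is a minimal sketch (in Python, not part of the proposal itself) that sums the line items above and backs out the GBP→USD exchange rate implied by the quoted dollar figure; the rate is an inference from the stated totals, not a figure given in the budget.

```python
# Sanity check of the budget line items above (amounts in GBP).
budget_gbp = {
    "Accommodation": 30_000,
    "Travel": 13_500,
    "Equipment": 41_000,
    "Production crew": 87_000,
    "Director": 15_000,
    "Executive Producer": 15_000,
    "Misc": 25_000,
}

total_gbp = sum(budget_gbp.values())
assert total_gbp == 226_500  # matches the stated total

# The post quotes $293,046, so the implied GBP->USD rate is:
implied_rate = 293_046 / total_gbp
print(f"Total: £{total_gbp:,} -> implied rate ~ {implied_rate:.4f} USD/GBP")
# Output: Total: £226,500 -> implied rate ~ 1.2938 USD/GBP
```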

Who is on your team? What's your track record on similar projects?

Mike Narouei [Director]:

  • Former Creative Director at Control AI (directed multiple viral AI risk films amassing 60M+ total views over nine months).
  • Directed & led a 40-person production team on a £100,000+ commercial, generating 32M views/engagements across social media within one month.
  • Artistic Director for Michael Trazzi’s ‘SB-1047’ Documentary.
  • Work featured by BBC, Sky News, ITV News, and The Washington Post.
  • Partnered with MIT at the World Economic Forum in Davos, demonstrating Deepfake technology live in collaboration with Max Tegmark, covered by The Washington Post & SwissInfo.
  • Collaborated with Apollo Research to create an animated demo for their recent paper.
  • Shortlisted for the Royal Court Playwriting Award.
  • Directed a number of commercials for clients such as Starbucks, Pale Waves and Mandarin Oriental.

Watch ‘Your Identity Isn’t Yours’, which Mike filmed, produced, and edited when he was at Control AI. The still above is from that film.

Connor Axiotes [Executive Producer]:

  • Has appeared on TV multiple times and has helped produce videos and TV interviews.
  • Wrote multiple op-eds for major papers and blogs. Have a look here for a repository.
  • Produced viral engagement with millions of impressions on X at Conjecture and the Adam Smith Institute (ASI).
  • Worked as a senior communications adviser to a UK Cabinet Minister, making videos and liaising with senior journalists and TV channels in high-stakes, high-pressure environments.
  • Wrote the centre-right Adam Smith Institute’s first-ever AI safety policy paper, ‘Tipping Point: on the edge of Superintelligence’, in 2023.
  • Worked on a Prime Ministerial campaign and a General Election as part of the then Prime Minister’s operations team. The photo below shows him working for the Prime Minister in a media capacity in 2024.

Donate to our Manifund (as of 14.04.25 we have two more days of donation matching up to $10,000). Email me at connor.axiotes or DM me on Twitter for feedback and questions.

Comments (8)



Have you applied to LTFF? Seems like the sort of thing they would/should fund. @Linch @calebp if you have actually already evaluated this project I would be interested in your thoughts as would others I imagine! (Of course, if you decided not to fund it, I'm not saying the rest of us should defer to you, but it would be interesting to know and take into account.)

Given that they've made a public Manifund application, it seems fine to share that there has been quite a lot of discussion about this project on the LTFF internally. I don't think we are in a great place to share our impressions right now, but if Connor would like me to, I'd be happy to share some of my takes in a personal capacity.

Hey! Thanks for the comments. I’d be super happy to hear your personal takes, Caleb!

Some quick takes in a personal capacity:

  • I agree that a good documentary about AI risk could be very valuable. I'm excited about broad AI risk outreach and few others seem to be stepping up. The proposal seem ambitious and exciting.
  • I suspect that a misleading documentary would be mildly net-negative, and it's easy to be misleading. So far, a significant fraction of public communications from the AI safety community has been fairly misleading (definitely not all—there is some great work out there as well).
  • In particular, equivocating between harms like deepfakes and GCRs seems pretty bad. I think it's fine to mention non-catastrophic harms, but often, the benefits of AI systems seem likely to dwarf them. More cooperative (and, in my view, effective) discourse should try to mention the upsides and transparently point to the scale of different harms.
  • In the past, team members have worked on (or at least in the same organisation as) comms efforts that seemed low integrity and fairly net-negative to me (e.g., some of their work on deepfakes, and adversarial mobile billboards around the UK AI Safety summit). Idk if these specific team members were involved in those efforts.
  • The team seems very agentic and more likely to succeed than most "field-building" AIS teams.
  • Their plan seems pretty good to me (though I am not an expert in the area). I'm pretty into people just trying things. Seems like there are too few similar efforts, and like we could regret not making more stuff like this happen, particularly if your timelines are short.


I'm a bit confused. Some donors should be very excited about this, and others should be much more on the fence or think it's somewhat net-negative. Overall, I think it's probably pretty promising.

Thanks Caleb, very useful. @ConnorA I'm interested in your thoughts re how to balance comms on catastrophic/existential risks and things like deepfakes. (I don't know about the particular past efforts Caleb mentioned, and I think I am more open to comms on deepfakes being useful for developing a broader coalition, even though deepfakes are a tiny fraction of what I care about wrt AI.)

To be clear, I'm open to building broad coalitions and think that a good documentary could/would feature content on low-stakes risks; but, I believe people should be transparent about their motivations and avoid conflating non-GCR stuff with GCR stuff.

Thanks Caleb and Oscar! 

Will write up my full thoughts this weekend. But regarding your worry that our doc will end up conflating deepfakes and GCRs: we don't plan to do this and we are very clear they are different. 

Our model of the non-technical public is that they feel more at risk of job loss than of the world ending. So our film intends to explain clearly the potential risks to their jobs, and also show how the same AI that might automate their jobs could, for example, be used to create bioweapons by terrorists who may seek to deploy them on the world. We do not (and will not) conflate the two, but both will be included in the film.

To Oscar: thanks for the comment! Do get in touch if you'd like to help out/thinking of donating.

To Caleb: we really appreciate your comments here, and think they're fair. Although we worked on comms with our former employers, we have different views and ways of communicating from theirs. (I still think Control AI and Conjecture did and do good comms work on the whole, though.) I think if we grabbed a coffee/Zoom call we'd probably see we're closer than you think.

Have a good day!

Executive summary: This post introduces Making God, a planned feature-length documentary aimed at a non-technical audience to raise awareness of the risks associated with the race toward AGI; the filmmakers seek funding to complete high-quality production and hope to catalyze public engagement and political action through wide distribution on streaming platforms.

Key points:

  1. Making God is envisioned as a cinematic, accessible documentary in the style of The Social Dilemma or Seaspiracy, aiming to educate a broad audience about recent AI advancements and the existential risks posed by AGI.
  2. The project seeks to fill a gap in public discourse by creating a high-production-value film that doesn’t assume prior technical knowledge, targeting streaming platforms and major film festivals to reach tens of millions of viewers.
  3. The filmmakers argue that leading AI companies are prioritizing capabilities over safety, international governance is weakening, and technical alignment may not be achieved in time—thus increasing the urgency of public awareness and involvement.
  4. The team has already filmed five interviews with legal experts, civil society leaders, forecasters, and union representatives to serve as a “Proof of Concept,” and they are seeking further funding (~$293,000) to expand production and ensure festival/streaming viability.
  5. The documentary’s theory of impact is that by informing and emotionally engaging a mass audience, it could generate public pressure and policy support for responsible AI development during a critical window in the coming years.
  6. The core team—Director Mike Narouei and Executive Producer Connor Axiotes—bring strong credentials from viral media production, AI safety advocacy, and political communications, and are currently fundraising via Manifund (with matching donations active as of April 14, 2025).

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
