
The effective altruism community has struggled to identify robust interventions for mitigating existential risks from advanced artificial intelligence. In this post, I introduce the WAIT (Wasting AI Researchers' Time) Initiative, a new strategy for delaying the development of advanced AI while saving lives roughly 2.2 million times [-5 times, 180 billion times] as cost-effectively as leading global health interventions. This post discusses the advantages of WAITing, highlights early efforts to WAIT, and addresses several common questions.

early logo draft courtesy of claude

Theory of Change

Our high-level goal is to systematically divert AI researchers' attention away from advancing capabilities towards more mundane and time-consuming activities. This approach simultaneously (a) buys AI safety researchers time to develop more comprehensive alignment plans and (b) directly saves millions of life-years in expectation (see our cost-effectiveness analysis below). Some examples of early interventions we're piloting include:

  1. Bureaucratic Enhancement: Increasing administrative burden through strategic partnerships with university IRB committees and grant-funding organizations. We considered further coordinating with editorial boards at academic journals, but to our surprise they seem to have already enacted all of our protocol recommendations.
  2. Militant Podcasting: Inviting leading researchers at top AI organizations onto podcasts with enticing names like "Silicon Savants" and "IQ Infinity", ensuring each recording runs 4+ hours and requires multiple re-recordings due to technical difficulties.
  3. Conference Question Maximization: We plan to deploy trained operatives to Q&A sessions at leading ML conferences, where they will ask rambling, multi-part questions that begin with "This is more of a comment than a question..." and proceed until their microphones are snatched away.
  4. Twitter/X Stupidity Farming: Our novel bots have been trained to post algorithmically optimized online discourse that consistently confuses the map for the territory. They also won't shut up about Elon and "sexual marketplace dynamics", but apparently there is no limit to how long everyone will find these topics interesting.
  5. Romantic Partnership Engineering (RPE): Bay Area researchers are uniquely receptive to polyamory and other relationship structures requiring increased communication and processing time, and we're building teams to do the matchmaking. We're placing strategic emphasis on recruiting women from the VH (very hot) community, which, at first glance, appears to present a strong talent pool for these programs. Please let us know if you'd like to get involved.
  6. Targeted Nerdsniping: It's also well known that substantial overlap exists between the AI community and the HN (huge nerd) community. WAIT interventions will aim to exploit this overlap through a groundbreaking new partnership with EA (Electronic Arts). Our venture fund has already acquired minority stakes in EA, allowing us to influence video game release schedules to coincide with major AI conference submission deadlines. But note that our engagement with EA isn't a theme of the movement or anything.
By our estimates, Zelda: Tears of the Kingdom accidentally achieved a return of over 12,000 QALYs per dollar.

Cost-Effectiveness Analysis

To efficiently estimate existential risk from AI, consider the only two possibilities: AI extinction or survival. We thus derive an existential risk parameter of 50%. Multiplying by the ~8 billion human beings on Earth and by 1/365 (per day saved), we estimate ~10,958,904 years of human life could be saved per day of WAITing.
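For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python, using only the 50% risk parameter and ~8 billion population above (the variable names are illustrative):

```python
# Back-of-the-envelope check of the life-years-per-day estimate above.
P_DOOM = 0.5                      # the "only two possibilities" risk parameter
WORLD_POPULATION = 8_000_000_000  # ~8 billion people
DAYS_PER_YEAR = 365

life_years_per_day = P_DOOM * WORLD_POPULATION / DAYS_PER_YEAR
print(f"{life_years_per_day:,.0f} life-years saved per day of WAITing")
# -> 10,958,904 life-years saved per day of WAITing
```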

Our analyses suggest the most effective interventions could cost approximately $25k per effective researcher-day of WAITing, making WAIT approximately 2.2 million times as cost-effective as leading GiveWell charities.
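As a rough check on the multiplier, the sketch below extends the calculation above. Note that the GiveWell benchmark of roughly $5,000 per life-year is an illustrative assumption introduced for this sketch, not an output of our analysis:

```python
# Rough reproduction of the ~2.2-million-times cost-effectiveness multiplier.
LIFE_YEARS_PER_DAY = 10_958_904         # from the sketch above
COST_PER_RESEARCHER_DAY = 25_000        # dollars per effective researcher-day of WAITing
GIVEWELL_DOLLARS_PER_LIFE_YEAR = 5_000  # assumed benchmark, for illustration only

wait_life_years_per_dollar = LIFE_YEARS_PER_DAY / COST_PER_RESEARCHER_DAY
givewell_life_years_per_dollar = 1 / GIVEWELL_DOLLARS_PER_LIFE_YEAR
multiplier = wait_life_years_per_dollar / givewell_life_years_per_dollar

print(f"WAIT is ~{multiplier:,.0f}x as cost-effective as the benchmark")
# -> WAIT is ~2,191,781x as cost-effective as the benchmark
```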

Answers to Common Questions

  1. "How do you measure success?" We've developed a novel Stagnation of AI Frontier Threats Index (SAIFTI), which incorporates factors like the ratio of Tweets to Github commits among employees at leading AI labs.
  2. "What are the primary risks?" We think the primary risk WAITing poses is something Streisand-ish; for example, our fake podcasts keep accidentally blowing up - apparently podcast listeners have near infinite patience for listening to two dudes just talkin'. We plan to dilute the audio quality as a band-aid solution in the short term.
  3. "How can I contribute?" Our most urgent need is for volunteers willing to schedule and cancel multiple coffee chats with AI researchers 5 minutes after the scheduled meeting time. We've found this to be particularly effective, but it requires a consistent volunteer rotation.
  4. "Isn't this post kinda infohazardous?" We think it's the exact opposite; AI researchers' time spent reading this post might well have come at the cost of a substantive technical insight. In other words, you're welcome for the additional four minutes, humanity.

If you're an AI researcher wanting to learn more about the WAIT Initiative, we encourage you to reach out to us; we'd love to schedule a time to get coffee and chat about it with you.

Comments (1)



I’m playing my part by stirring drama on Twitter and tempting AI researchers with Factorio 🫡
