
Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar.

We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window.

More details on our website.

Why we exist

We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don’t yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared.

Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating moral uncertainty), and what we can do, now, to build towards a good future.

Like all projects, this started out with a plethora of Google docs. We ran a series of seminars to explore the ideas further, and that cascaded into an organization.

This area of work feels to us like the early days of EA: we’re exploring unusual, neglected ideas, and finding research progress surprisingly tractable. And while we start out with (literally) galaxy-brained schemes, they often ground out into fairly specific and concrete ideas about what should happen next. Of course, we’re bringing principles like scope sensitivity, impartiality, etc. to our thinking, and we think that these issues urgently need more morally dedicated and thoughtful people working on them.

Research

Research agendas

We are currently pursuing the following perspectives:

  • Preparing for the intelligence explosion: If AI drives explosive growth, there will be an enormous number of challenges we have to face. In addition to misalignment risk and biorisk, this potentially includes: how to govern the development of new weapons of mass destruction; what rights to give digital beings; how to govern an automated military; and how to avoid dictatorship or authoritarianism.

  • Achieving a near-best future: Most explicitly longtermist work to date has been focused on avoiding existential catastrophe, but achieving a near-best future might be even more important. Research avenues here include mapping out what a desirable “viatopia” would look like (i.e. a state of the world which is very likely to lead to a very good future), figuring out how space resources should be allocated, and addressing issues around AI epistemics and persuasion.

We tend to think that many non-alignment areas of work are particularly neglected.

However, we are not confident that these are the best frames for this work, and we are keen to work with people who are pursuing their own agendas.

Recent work

Today we’re also launching “Preparing for the Intelligence Explosion”, which makes a more in-depth case for some of the perspectives above.

You can see some of our other recent work on the site. We have a backlog of research, so we’ll be publishing something new every few days for the next few weeks.

Approach

Comparison to other efforts

We draw inspiration from the Future of Humanity Institute and from OpenPhil’s Worldview Investigations team: like them, we aim to focus on big-picture, important questions, maintain high intellectual standards, and build a strong core team.

Generally, we’re more focused than many existing organizations on:

  • Explosive growth and short timelines
  • Work outside the current Overton window
  • Issues beyond AI alignment, including new technologies and challenges that AI will unleash
  • Reaching really good futures

Principles

  1. Stay small: We aim to hire from among the handful of people who have the best records of tackling hard questions about AI futures, to offer a supportive institutional home to such people, and to grow slowly.

  2. Communicate to the nerds: We will mostly share research and ideas with wonk-y folks thinking about AI in think tanks, companies, and government, rather than working directly with policymakers. We plan to be thoughtful about how best to communicate and publish, but likely on our website and in arXiv papers.

  3. Be open to “weird” ideas: The most important ideas in history often seemed strange or even blasphemous at the time. And rapid AI-driven technological progress would mean that many issues that seem sci-fi are really quite pressing. We want to be open to ideas based on their plausibility and importance, not on whether they are within the current Overton window.

  4. Offer intellectual autonomy: Though we try to focus on what's most important, there are many different reasonable views on what that is. Senior researchers in particular are encouraged to follow their instinct on what research avenues are most important and fruitful, and to publish freely. There isn't a "party line" on what we believe.

What you can do

Engage with our research

We’d love for you to read our research, discuss the ideas, and criticize them! We’d also love to see more people working on these topics.

You can follow along by subscribing to our podcast, RSS feed, or Substack.

Please feel free to contact us if you are interested in collaborating, or would like our feedback on something (though note that we won’t be able to substantively engage with all requests).

Apply to work with us

We are not currently actively hiring (and will likely stay quite small), but we have an expression of interest form on our site, and would be particularly keen to hear from people who have related research ideas that they would like to pursue.

Funding

We have funding through to approximately March 2026 at our current size, from two high-net-worth donors.

We’re looking for $1-2M more, which would help us to diversify funding, make it easier for us to hire more researchers, and extend our runway to 2 years. If you are interested in learning more, please contact us.


  1. We are a new team and project, starting in mid-2024. However, we’ve built ourselves out of the old Forethought Foundation for Global Priorities Research to help get the operations started, and Will was involved with both projects. We considered something like 500 names and couldn’t find one that we liked better than “Forethought”. Sorry for the confusion! ↩︎

Comments

Why don't you disclose who the two high-net-worth donors are that are funding this? I thought you valued transparency, especially in the AI space, which is rife with conflicts of interest.

I am wondering if you could say something about how the political developments in the US (i.e., Trump 2.0) are affecting your thinking on AGI race dynamics? It seems like the default assumption communicated publicly is still that the US is "the good guys" and a "western liberal democracy" that can be counted on, when its actual actions on the world stage cast at least some doubt on this position. In some sense, one could even argue that we are already playing out a high-stakes alignment crisis at this very moment.

Any reactions or comments on this issue? I understand that openness around this topic is difficult at the moment, but I don't think that complete silence is all that wise either.

Congrats on launching the org. Would developing plans to avoid gradual disempowerment be in scope for your research?

Thanks! Yes, definitely in scope. There was a lot of discussion of this paper when it came out, and we had Raymond Douglas speak at a seminar. 

Opinions vary within the team on how valuable it is to work on this; I believe Fin and Tom are pretty worried about this sort of scenario (I don't know about others). I feel a bit less convinced of the value of working on it (relative to other things), and I'll just say why briefly:
- I feel less convinced that people wouldn't foresee the bad gradual disempowerment scenarios and act to stop them from happening, esp. with advanced AI assistance.
- In the cases that feel more likely, I feel less convinced that gradual disempowerment is particularly bad (rather than just "alien").
- Insofar as there are bad outcomes here, it seems particularly hard to steer the course of history away from them.

The biggest upshot I see is that, the more you buy these sorts of scenarios, the more it increases the value of AGI being developed by a single (e.g. multilateral) project rather than by multiple companies and countries. That's something I'm really unsure about, so reasoning around this could easily switch my views.

Quick thoughts re: your reasons for working on it or not:

1a) It seems like many people are not seeing them coming (e.g. the AI safety community seems surprisingly unreceptive, and to have made many predictable mistakes by ignoring structural causes of risk, such as being overly optimistic about companies prioritizing safety over competitiveness).
1b) It seems like seeing them coming is predictably insufficient to stop them from happening, because they are the result of social dilemmas.
1c) The structure of the argument appears to be the (fallacious) "if it is a real problem, other people will address it, so we don't need to" (cf. https://www.explainxkcd.com/wiki/index.php/2278:_Scientific_Briefing).

2) Interesting. Seems potentially cruxy.

3) I guess we might agree here... Combined with (1), your argument seems to be: "won't be neglected (1) and is not tractable (3)", whereas I might say: "currently neglected, could require a lot of work to become tractable, but seems important enough to warrant that effort".

The main upshots I see are:
- higher P(doom) due to stories that are easier for many people to swallow --> greater ability and potential for public awareness and political will if messaging includes this.
- more attention needed to questions of social organization post-AGI.

Exciting! Am I right in understanding that Forethought Foundation for Global Priorities Research is no longer operational?

Hi Rockwell! 

Yes, in most relevant senses that's correct. We're a new team, we think of ourselves as a new project, and Forethought Foundation's past activities (e.g. its Fellowship programs) and public presence have been wound down. We do have continuity with Forethought Foundation in some ways, mainly legal/administrative.

"OpenPhil’s Worldview Investigations team" refers I think to Rethink Priorities', or another one at Open Philanthrophy? Thanks!

We meant the Open Philanthropy one: apparently it's been merged into their GCR Cause Prio research team, but it was where Joe Carlsmith, Tom Davidson, Lukas Finnveden, and others wrote a bunch of foundational reports on AI timelines etc.

Interesting, thanks, will try to find more info!
