Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar.
We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window.
More details on our website.
Why we exist
We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don’t yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared.
Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating moral uncertainty), and what we can do, now, to build towards a good future.
Like all projects, this started out with a plethora of Google docs. We ran a series of seminars to explore the ideas further, and that cascaded into an organization.
This area of work feels to us like the early days of EA: we’re exploring unusual, neglected ideas, and finding research progress surprisingly tractable. And while we start out with (literally) galaxy-brained schemes, they often ground out into fairly specific and concrete ideas about what should happen next. Of course, we’re bringing principles like scope sensitivity, impartiality, etc. to our thinking, and we think that these issues urgently need more morally dedicated and thoughtful people working on them.
Research
Research agendas
We are currently pursuing the following perspectives:
* Preparing for the intelligence explosion: If AI drives explosive growth, there will be an enormous number of challenges we have to face. In addition to misalignment risk and biorisk, these potentially include how to govern the development of new weapons of mass destruction.