Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar.
We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window.
More details on our website.
Why we exist
We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don't yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared.
Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating moral uncertainty), and what we can do, now, to build towards a good future.
Like all projects, this started out with a plethora of Google docs. We ran a series of seminars to explore the ideas further, and that cascaded into an organization.
This area of work feels to us like the early days of EA: we're exploring unusual, neglected ideas, and finding research progress surprisingly tractable. And while we start out with (literally) galaxy-brained schemes, they often ground out into fairly specific and concrete ideas about what should happen next. Of course, we're bringing principles like scope sensitivity, impartiality, etc. to our thinking, and we think that these issues urgently need more morally dedicated and thoughtful people working on them.
Research
Research agendas
We are currently pursuing the following perspectives:
* Preparing for the intelligence explosion: If AI drives explosive growth, there will be an enormous number of challenges we have to face. In addition to misalignment risk and biorisk, this potentially includes: how to govern the development of new weapons of mass destruction.
Update:
FLI have released a full statement on their website here, and there is an FAQ post on that statement on the Forum, where discussion has mostly moved. I will respond to these updates there, and otherwise leave this post as-is (for now).
However, it looks like an 'ignorance-based' defence is the correct interpretation of what happened here. I don't regret this post - I still think it was important, and got valuable information out there. I also think that emotional responses should not be seen as 'wrong'. Nevertheless, I do have some updating to do, and I thank all commenters in the thread below.
I have also made some retractions, with explanations in the footnotes.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Epistemic Status: Unclear, but without much reason to dispute the factual case presented by Expo. As I wrote this post, an ignorance-based defence seemed less and less convincing, and consequently my anger rose. I apologise if this means the post is of a lower tone than the forum is used to. I will also happily correct or retract this post partially or fully if better evidence is provided.
[Clarity Edit: FLI refers to the Future of Life Institute (FLI), not the Future of Humanity Institute (FHI), which has caused some confusion below. Max Tegmark is President of the former, Nick Bostrom is Director of the latter. The two dramas today are not related, other than longtermist organisations needing better acronyms]
Some other things noted for further detail from the article:
The article seems to confirm the decision to fund Nya Dagbladet had been made, but there is now no promise from FLI and not likely to be[2]. They have not responded to Expo since the initial email exchange. It is unclear why FLI initially decided to make the grant and later changed their minds.

My understanding:
I cannot speak to any legal questions here, or liability that FLI might face, though I'm not sure why there would be.
There are, however, massive reputational issues at stake. The EA movement is under intense scrutiny right now, and this seems to be another case of a well-known actor in our movement doing something with massively poor consequences for the public perception of EA unless they can explain why. Critically, Nya Dagbladet, while small, seems to be openly[3] far-right, supporting anti-vaccination sentiment and holocaust denial. I am struggling to charitably interpret how funding them would improve the future of humanity, or do the most good for the world right now.

I think it would be prudent for someone from FLI to explain what happened here.
If not, people both inside and outside the EA movement, be they supporters or critics, may correctly[4] be led to infer that a well-known organisation aligned with EA promised funds to an openly politically far-right organisation, knowing what they stand for. That is not what EA should stand for, and to the extent that it does, I would want no part in it.

Many seem to be taking the 'pro-nazi' characterisation as a crux. That was the characterisation Expo gave, and I went with their framing as the default. Depending on your definition of 'pro-nazi' this might be false: Nya Dagbladet don't seem to openly support the persecution of Jews or a white ethnostate, as far as I could see, but it would be very difficult for any publication to do so openly. In the Expo article, there's a sidebar with two of the most damning pieces of content.
I would at least characterise them as far-right/populist reactionary/ethno-nationalist, which, even if not as morally horrifying as openly 'pro-nazi', is something I believe to be strongly antithetical to what EA is and should stand for. But I think I will elaborate on my thoughts on EA/politics in a future post, rather than here. In any case, I think the issue is why this grant was considered in the first place given the political affiliation of the recipients, rather than whether those political affiliations are less far-right than the Expo article implies.
[Edit: I think that this claim is false. No grant was ever confirmed, and the FAQ states that the 'letter of intent' was a specific request by Nya Dagbladet and not part of FLI's usual grant-making process]
[Edit: I retract the use of 'openly' here; they seem to be openly populist-right, but don't make their far-right leanings immediately obvious]
[Edit: I retract the use of 'correctly' here - I meant it to refer to a counterfactual case where the worst possible case was true, but I think it is probably more confusing than useful]
Listened to it while doing other stuff, so this might not be 100% accurate.
To my understanding, Tegmark appears for 10 minutes, doing a normal AI-risk spiel. I think the angle relevant to the podcast is the risk of concentration of power in the hands of a few, hence some accusations of big tech capturing AI conferences etc.
There's a small segue talking about covid where Tegmark states he felt it was such an infected discussion that he couldn't talk about it openly in some work environments for fear of repercussions.