This week, we are highlighting Forethought's Better Futures series. To make the future go better, we can either work to avoid near-term catastrophes like human extinction or improve the futures where we survive. This series from Forethought explores that second option.

Fin Moorhouse (@finm), who authored two chapters in the series (Convergence and Compromise, and No Easy Eutopia) along with @William_MacAskill, has agreed to answer a few of your questions. 

You can read (and comment on) the full series on the Forum.

Leave your questions and comments below. Note that Fin isn't committing to answer every question, and if you see someone else's question you can answer, you're free to. 

For example, in What We Owe The Future, Will said he thought that the expected value of the future, given survival, was less than 1% of what it might be. After being exposed to some of the arguments in this essay, he revised his views closer to 10%; after analysing them in more depth, that percentage dropped a little bit, to 5%-10%.

[...]

However, it's unlikely to me that companies will in fact produce morally uncertain AIs that are motivated by doing good de dicto. They probably won't have thought about this issue, and won't be motivated by trying to improve scenarios in which humanity is disempowered.

Given this combination of views, I'm surprised that Will doesn't support what @Holly Elmore ⏸️ 🔸 calls "Pause NOW" and instead wants to see a pause later (after we have human-level AI). I'm curious if your own views are similar or how they differ from Will's. (My own estimate of the "expected value of the future, given survival" is similarly pessimistic, but I'm reluctant to put it into numbers because I'm very unsure how to quantify it.)

Aside from what Holly said in the linked comment, which I agree with, another argument more relevant to the current discussion is that many opportunities for making the future better seem to exist during the AI transition, including its early parts. By not pausing ASAP (and currently having few resources for such interventions), we're permanently giving up these opportunities. Conversely, by pausing NOW, we buy more time to think and strategize about how to better intervene on these opportunities, or otherwise lay the groundwork for them.

For example, during the pause, we could:

  1. Try to solve metaphilosophy, or otherwise think about how to improve AI philosophical competence or moral epistemology.
  2. Try to get AI companies to "think about this issue" (of morally uncertain AIs that are motivated by doing good de dicto).
  3. Research ways to make such AIs safer from our (human) perspective so that there's less of a tradeoff between safety and Better Futures.
  4. Spread the idea of Better Futures generally so that when AI development resumes, there will be more people aware of and working on these issues.

Such interventions could mean the difference between the first human-level AIs being competent and critical moral/philosophical advisors (or independent, safe moral agents), vs. uncritically doing what humans seem to want and/or giving bad, incompetent, or sycophantic "advice" (when humans think to ask for it). That difference could matter a great deal to how well the future goes.

What do you think about this argument, and overall about pause now vs later?

Hi Fin,

I have a lot of questions, so I figured I would just share all of them and you can respond to the ones you want to.

  1. I think Forethought is a super cool institution. What advice would you have for someone who wanted to work there as a researcher? Do you think it's important to have a strong understanding of how LLMs work?
  2. I made this post where I categorized flourishing cause areas based on "How To Make The Future Better." I thought I'd share. I'm curious if this categorization generally aligns with how you think about the problem.
    1. Locking-in one’s values
    2. Ensuring the future is aligned with the correct values
      1. Working towards viatopia
      2. Promoting futures with more moral reflection
      3. Improving the ability for people with different views to get their desired futures
    3. Ensuring future people are able to create a good future
      1. Keeping humanity’s options open
      2. Improving global stability
      3. Improving future humans’ decision-making
      4. Empowering responsible actors
    4. Speeding up progress
  3. I made this post which is an overview of longtermism's ideas, writings, individuals, institutions, and history. I thought I'd share since you made the longtermism website.
  4. The Better Futures series assumes that the future will be net-positive by default. To me, the ideas presented in the series (strong self-modification, modification of descendants, selection of beliefs by evolutionary pressures) indicate that we should expect future humans to be very different from us, and that, as a result, we should expect the future to be neutral in expectation. Do you agree with this logic or do you think the future will be net-positive by default? Additionally, why?
  5. Currently, there is a wide range of ideas about how a post-AGI future will go and what features it will contain. To me, this strongly indicates that we should expect that the post-AGI future could go in a very broad range of ways and that we should prepare for the many different ways it could go. At the same time, I get the sense that Forethought has a very specific vision of how a post-AGI future will go (there will be an intelligence explosion, tools for epistemics will be beneficial, we might begin acquiring resources in other solar systems, small sets of actors could use AGI in malicious ways). I'm wondering how you decide which ideas you think are likely, and whether you have any measures in place to ensure you're receiving criticism of your ideas so you don't create an epistemic bubble?
  6. I understand that you have done some work related to space governance. A criticism I have of working in this field is that (1) it seems to have been very intractable due to the lack of space treaties, (2) if any great power has a decisive advantage, global treaties won't matter, (3) even if you are able to get a law or treaty passed, corporate or state interests could easily override it later on, and (4) there's probably a low chance of even getting into a position where you could influence this stuff. As such, I'm wondering, if you think it's valuable for additional people to work in the field, why do you think this?
  7. It seems like longtermism is an unhelpful idea since it requires people to believe that our actions could persist for millions of years. I personally am pretty skeptical of this, although I do think it is possible. It also seems like the idea has been somewhat harmful to EA as a movement since people can always point out that some of the founders of the movement are focused on helping people millions of years from now, which sounds pretty crazy. I'm wondering if you agree with this assessment.
  8. In "How To Make The Future Better," MacAskill argues that we should make AIs encourage humans to be good people and use them as a source of moral reflection. This seems like it could be deeply problematic in case moral sense theory is true, but AIs lack a moral sense. Do you agree with this?

Thanks James!

What advice would you have for someone who wanted to work there as a researcher?

Some things I appreciate in my colleagues: having some discernment for which questions or ideas are most important, rather than just conceptually interesting but not urgent; being able to contribute to group conversations by driving at cruxes, being willing to ask naive questions and avoiding the impulse to sound clever for the sake of it, and being able to spot and entertain "big if true" hypotheses; and being able to clearly communicate ideas where you often don't have an especially deep literature to draw on.

Do you think it's important to have a strong understanding of how LLMs work?

I think it's important to understand the fundamentals of how AI works, including some of the theory. I don't think it's important to have deep technical knowledge of LLMs, unless you can think of reasons why those details could end up being relevant for macrostrategy.

On your second question, many of those points seem good to me. I'll single out "Locking-in one’s values" since I've been thinking about it recently. It seems to me that some people roughly think that great futures are futures which resemble our own (or which carry on our values) in many particular ways. In particular, maybe great futures are futures which are recognisably human in their values. Inhuman futures, like futures where AI successors call the shots, might just seem empty of what we today care about; even if they involve a lot of moral reflection and nothing morally offensive from a human perspective. We could call this a "humanity forever" view.

On the other hand, some people roughly think that great futures are necessarily futures which are radically different from humanity today, including in the values which guide it, and perhaps the kind of actors living there. See Dan Faggella on the "Worthy Successor" idea (and here), which I see as one version of this view.

Both these views care about preventing obvious catastrophes from AGI, but it seems to me like they might end up disagreeing quite profoundly on what should come next. It's possible that there is opportunity for trade and compromise between the two views, but in any case this strikes me as a potentially important difference in approach to post-AGI futures.

To me, the ideas presented in the series (strong self-modification, modification of descendants, selection of beliefs by evolutionary pressures) indicate that we should expect future humans to be very different from us, and that, as a result, we should expect the future to be neutral in expectation.

Firstly, you're right that the series doesn't discuss negative futures, but I should say that's not because Will or I think they are worth ignoring, or very unlikely in absolute terms. We didn't discuss them simply because we wanted to make a more focused argument about how to think about making good futures even better.

I think your point (quoted) touches on the difference I mentioned above between "humanity forever" views and views which are more open to change in values. I think it's coherent to take a view like one of the following:

  • You want to value whatever is ultimately valuable. You're unsure what that is, but you trust the processes which guide the future to converge on it;
  • You want to value whatever you would value under some idealised process of reflection, and you think the processes which guide the future will emulate idealised reflection on your own values closely enough;
  • You value roughly what you currently value. But you're scope-insensitive: in order to think we've reached a great future, you just need your neck of the woods to be how you want it, and the rest of the future to avoid things you think are morally repugnant. You expect almost the entire future not to be guided by what you value; but you're confident you can get the things you want (and so count the future as great), and you're confident the rest of the future will avoid the morally repugnant (perhaps through trade);
  • Similar to above, but what you personally value is cheap by the lights of other value systems which guide the future, and vice-versa. So you are confident you can secure a great future by your lights through trade.

Better Futures argues that these views may be less tenable than they first appear, but I think they're not totally doomed.

Additionally, I would point out a potential "missing mood" in the framing we adopt of cardinally quantifying the value of the future as a fraction of the value of the best feasible future. This suggests that futures which are only, say, 10^-5 the value of the best feasible future are barren, hollow, 'neutral'. But this would be a mistake: potentially our own world, even with all the harm and pain removed, is achieving only a tiny fraction of what a great future could achieve. So we might imagine (as Better Futures points out) a "common-sense eutopia" which is radically better than the world today, but still only a small fraction as good as things could get. That could be true, but it doesn't undermine the value of such a future, which would still (by stipulation) be wildly better than the world today! All the joy and freedom and discovery and so on in this near-zero world would be entirely real, and could dwarf all the good we have achieved and enjoyed so far.

Currently, there is a wide range of ideas about how a post-AGI future will go and what features it will contain. To me, this strongly indicates that we should expect that the post-AGI future could go in a very broad range of ways and that we should prepare for the many different ways it could go.

Maybe I'm misreading but I don't think it follows from uncertainty about how things go that many different things will actually happen. For example, if you're uncertain who wins a political election, you don't infer that everyone wins and shares power.

At the same time, I get the sense that Forethought has a very specific vision of how a post-AGI future will go (there will be an intelligence explosion, tools for epistemics will be beneficial, we might begin acquiring resources in other solar systems, small sets of actors could use AGI in malicious ways). I'm wondering how you decide which ideas you think are likely, and whether you have any measures in place to ensure you're receiving criticism of your ideas so you don't create an epistemic bubble?

I'm in a few minds about this, so I'll just list some reactions:

  • You say the Forethought vision is "very specific", and then you list some claims (e.g. "small sets of actors could use AGI in malicious ways") which seem… surprisingly anodyne? In particular, it doesn't strike me as egregious or unusual to put a decent amount of credence in those claims being true. I think that's all you need to take them seriously and work on them. Indeed, I don't myself feel extremely confident in any one of them.
  • I think there is a way to do criticism in a performative way, where you invite people you know to disagree, for reasons you are already familiar with. I don't think that is totally useless, because performing these dialogues in public can be useful for other people to decide what they think.
  • On the other hand, I think the best kind of outside criticism for the sake of throwing out bad ideas often isn't very flashy, and can look like outside experts telling you "this isn't really how [my domain of expertise works], so [ABC] seems confused but [XYZ] seems plausible".
  • From my perspective there is quite a lot of internal disagreement, including between broad worldviews, although that's relative.
  • Speaking personally, I worry a bit that there are components of the implicit shared Forethought worldview which are tricky to pin down from the outside, and thus more likely to influence research decisions in an unscrutinised way. I do think this is a generic problem, and think the most useful place from which to notice and communicate these implicit beliefs is one that straddles being enough of an insider to have context and enough of an outsider to see alternatives.
  • On the other hand I think you do at some point just need to pick some assumptions and some worldview and work within it to make any progress at all. In my experience simply pointing out that those assumptions could be wrong is often less valuable than proposing more fleshed-out alternative assumptions and worldviews, which themselves can be criticised and so on…

I'm wondering, if you think it's valuable for additional people to work in the field, why do you think this?

We are, at Forethought, running a research programme on space right now, which I guess reflects a view that it does seem worth investigating more. I don't think the central case for space runs through the hope for binding international treaties, because I agree that we shouldn't expect them to hold. I think there are a few other reasons to want to investigate space. One is that the space economy could be somewhat relevant to the course of AGI development, for example if orbital data centres are a big deal, or because of the role of sensing satellites in peace and security.

Another is that most of the physical stuff is in space. At some point it seems likely to me, if the human project continues at all, that most of the important stuff will also eventually be in space. AGI + automated manufacturing + rapid R&D progress suggests that expanding into space could happen on a timescale of decades rather than centuries or millennia, and that seems generically worth planning for. And it seems like there are some policy levers which don't route through international treaties.

To be clear I don't currently think that space governance should be the next big cause in EA or anything like that.

It seems like longtermism is an unhelpful idea since it requires people to believe that our actions could persist for millions of years.

This feels like a slightly odd sentence construction, because you seem to be saying that longtermism is unhelpful because it requires people to believe one of its central claims. I agree it's contentious, and I'm certainly not confident that the effects of our actions could persist for millions of years, but it seems plausible enough that the anticipated long-term effects of our actions should meaningfully weigh into what we prioritise, at least where you can tell a story about how your decisions could have some systematic long-run effects.

It also seems like the idea has been somewhat harmful to EA as a movement since people can always point out that some of the founders of the movement are focused on helping people millions of years from now, which sounds pretty crazy.

I do think that is plausible. Although, to state the obvious, there is a difference between which ideas have good or bad PR effects when you say them out loud, and which ideas are actually true or important. So questions about communicating longtermist ideas are, naturally, different from the question of whether longtermist ideas are worth taking seriously as ideas.

And then, I also want to say: the full-on version of longtermism — that the very long-run effects of our actions are overwhelmingly important for what we prioritise — just doesn't feel especially necessary for working on most or even all of the topics that Forethought is focused on. There is a far more common-sense and mundane reason to focus on them, which is that they could matter enormously within our own lifetimes! Another way of putting that is that when trying to prioritise between possible focuses within Forethought, my personal view is that longtermism is rarely a crux. Maybe my colleagues disagree with that; obviously I'm not speaking on their behalf.

In "How To Make The Future Better," MacAskill argues that we should make AIs encourage humans to be good people and use them as a source of moral reflection. This seems like it could be deeply problematic in case moral sense theory is true, but AIs lack a moral sense. Do you agree with this?

I'm not sure I'm entirely following your points, but I don't see a strong reason why AIs or other non-human entities could not in principle engage in genuine moral reasoning in the same way that humans do. Maybe instead the AIs will do something which superficially resembles real moral reasoning, but which is closer to just telling humans what they want to hear.

I do think that's not a crazy thing to worry about, because it is much easier to train a skill in domains where an uncontroversial and abundant source of ground-truth data exists. Moral reasoning is not one of those domains, because people often don't agree on what good moral reasoning looks like. So I think there is much work to be done on that front, although I'm not sure that answers your question.

Thanks again for your questions!

Hey Fin,

Thanks for so thoughtfully answering my questions!

Forethought's view that improving the future conditional on survival is more important than ensuring survival goes against the view, dominant in EA for many years, that we need to reduce extinction risk. Two questions on this:

  1. How far away from the optimal allocation of (longtermist) resources do you think the community currently is?
    1. For example, should we be radically reducing investment in things like addressing biorisk or nuclear risk? Do we need to be rethinking the allocation of resources within AI risk?
  2. Do you think there is anything that is being prioritized in the community that is actually harmful?
    1. For example, could certain AI alignment approaches be bad for future digital sentience?

I really liked the series :)
