
Elliott Thornley (EJT)

Research Fellow @ Global Priorities Institute
1078 karma
www.elliott-thornley.com

Bio

I work on AI alignment. Right now, I'm using ideas from decision theory to design and train safer artificial agents.

I also do work in ethics, focusing on the moral importance of future generations.

You can email me at thornley@mit.edu.

Comments (87)

I said a little in another thread. If we get aligned AI, I think it'll likely be a corrigible assistant that doesn't have its own philosophical views that it wants to act on. And then we can use these assistants to help us solve philosophical problems. I'm imagining in particular that these AIs could be very good at mapping logical space, tracing all the implications of various views, etc. So you could ask a question and receive a response like: 'Here are the different views on this question. Here's why they're mutually exclusive and jointly exhaustive. Here are all the most serious objections to each view. Here are all the responses to those objections. Here are all the objections to those responses,' and so on. That would be a huge boost to philosophical progress. Progress has been slow so far because human philosophers take entire lifetimes just to fill in one small part of this enormous map, because humans make errors, so later philosophers can't even trust that small filled-in part, and because verification in philosophy isn't much quicker than generation.

I'm not sure, but I think I may also have a different view from you about which problems are going to be bottlenecks to AI development. For example, I think there's a big chance that the world would steam ahead even if we don't solve any of the current (non-philosophical) problems in alignment (interpretability, shutdownability, reward hacking, etc.).

try to make them "more legible" to others, including AI researchers, key decision makers, and the public

Yes, I agree this is valuable, though I think it's valuable mainly because it increases the probability that people use future AIs to solve these problems, rather than because it will make people slow down AI development or try very hard to solve them pre-TAI.

I don't think philosophical difficulty adds that much to the difficulty of alignment, mainly because I think that AI developers should (and likely will) aim to make AIs corrigible assistants rather than agents with their own philosophical views that they try to impose on the world. And I think it's fairly likely that we can use these assistants (if we succeed in getting them and aren't disempowered by a misaligned AI instead) to help a lot with these hard philosophical questions.

I didn't mean to imply that Wei Dai was overrating the problems' importance. I agree they're very important! I was making the case that they're also very intractable.

If I thought solving these problems pre-TAI would be a big increase to the EV of the future, I'd take their difficulty to be a(nother) reason to slow down AI development. But I think I'm more optimistic than you and Wei Dai about waiting until we have smart AIs to help us on these problems.

I'm a philosopher who's switched to working on AI safety full-time. I also know there are at least a few philosophers at Anthropic working on alignment.

With regard to your list of Problems in AI Alignment that philosophers could potentially contribute to:

  • I agree that many of these questions are important and that more people should work on them.
  • But a fair number of them are discussed in conventional academic philosophy, e.g.:
    • How to resolve standard debates in decision theory?
    • Infinite/multiversal/astronomical ethics
    • Fair distribution of benefits
    • What is the nature of philosophy?
    • What constitutes correct philosophical reasoning?
    • How should an AI aggregate preferences between its users?
    • What is the nature of normativity?
  • And these are all difficult, controversial questions.
    • For each question, you have to read and deeply think about at least 10 papers (and likely many more) to get a good understanding of the question and its current array of candidate answers.
    • Any attempt to resolve the question would have to grapple with a large number of considerations and points that have previously been made in relation to the question.
      • Probably, you need to write something at least book-length.
        • (And it's very hard to get people to read book-length things.)
    • In trying to do this, you probably won't find any answer that you're really confident in.
      • I think most philosophers' view on the questions they study is: 'It's really hard. Here's my best guess.'
      • Or if they're confident of something, it'll be a small point within existing debates (e.g. 'This particular variant of this view is subject to this fatal objection.').
    • And even if you do find an answer you're confident in, you'll have a very hard time convincing other philosophers of that answer.
      • They'll bring up some point that you hadn't thought of.
      • Or they'll differ from you in their bedrock intuitions, and it'll be hard for either of you to see any way to argue the other out of their bedrock intuition.
      • In some cases -- like population ethics and decision theory -- we have proofs that every possible answer will have some unsavory implication. You have to pick your poison, and different philosophers will make different picks.
        • And on inductive grounds, I suspect that many other philosophical questions also have no poison-free answers.
  • Derek Parfit is a good example here.
    • He spent decades working on On What Matters, trying to settle the questions of ethics and meta-ethics.
    • He really tried to get other philosophers to agree with him.
    • But very few do. The general consensus in philosophy is that it's not a very convincing book.
    • And I think a large part of the problem is a difference of bedrock intuitions. For example, Bernard Williams simply 'didn't have the concept of a normative reason,' and there was nothing that Parfit could do to change that.
  • It also seems like there's not much of an appetite among AI researchers for this kind of work.
    • If there were, we might see more discussions of On What Matters, or any of the other existing works on these questions.

When I decided to start working on AI, I seriously considered working on the kinds of questions you list. But due to the points above, I chose to do my current work instead.

Makes sense! Unfortunately any x-risk cost-effectiveness calculation has to be a little vibes-based because one of the factors is 'By how much would this intervention reduce x-risk?', and there's little evidence to guide these estimates.

Whether longtermism is a crux will depend on what we mean by 'long,' but I think concern for future people is a crux for x-risk reduction. If future people don't matter, then working on global health or animal welfare is the more effective way to improve the world. The more optimistic of the calculations that Carl and I do suggest that, by funding x-risk reduction, we can save a present person's life for about $9,000 in expectation. But we could save about 2 present people if we spent that money on malaria prevention, or we could mitigate the suffering of about 12.6 million shrimp if we donated to SWP.
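
To make the comparison concrete, here's a minimal back-of-the-envelope sketch in Python. It only rearranges the figures quoted above (the $9,000 per expected life from x-risk reduction, the ~2 lives from malaria prevention, and the ~12.6 million shrimp via SWP); the per-unit numbers it prints are just those figures divided through, not estimates taken from the underlying analyses.

```python
# Back-of-the-envelope restatement of the comparison above, using only the
# figures quoted in the comment (not the underlying analyses' numbers).
budget = 9_000  # dollars

# X-risk reduction: roughly $9,000 per present life saved in expectation.
xrisk_cost_per_life = 9_000

# Malaria prevention: roughly 2 present lives saved for the same $9,000.
malaria_cost_per_life = budget / 2  # about $4,500 per life

# Shrimp Welfare Project: roughly 12.6 million shrimp helped for the same $9,000.
shrimp_per_dollar = 12_600_000 / budget  # about 1,400 shrimp per dollar

print(f"x-risk:  ${xrisk_cost_per_life:,.0f} per present life (in expectation)")
print(f"malaria: ${malaria_cost_per_life:,.0f} per present life")
print(f"SWP:     {shrimp_per_dollar:,.0f} shrimp per dollar")
```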

Oops, yes: the fundamentals of my case and Bruce's are very similar. I should have read Bruce's comment!

The claim we're discussing - about the possibility of small steps of various kinds - sounds kinda like a claim that gets called 'Finite Fine-Grainedness'/'Small Steps' in the population axiology literature. It seems hard to convincingly argue for, so in this paper I present a problem for lexical views that doesn't depend on it. I sort of gestured at it above with the point about risk without making it super precise. The one-line summary is that expected welfare levels are finitely fine-grained.

Oh yep, nice point, though note that, e.g., there are uncountably many reals between 1,000,000 and 1,000,001, and yet it still seems correct (at least talking loosely) to say that 1,000,001 is only a tiny bit bigger than 1,000,000.

But in any case, we can modify the argument to say that S* feels only a tiny bit worse than S. Or instead we can modify it so that S is the temperature in degrees Celsius of a fire that causes suffering that just about can be outweighed, and S* is the temperature in degrees Celsius of a fire that causes suffering that just about can't be outweighed.
