finm

Researcher @ Longview Philanthropy
2985 karma · Joined · Working (0–5 years) · Oxford, UK
www.finmoorhouse.com/writing

Bio

I do research at Longview Philanthropy. Previously I was a Research Scholar at FHI and assistant to Toby Ord. Before that, I studied philosophy at Cambridge.

I also do a podcast about EA called Hear This Idea.


www.hearthisidea.com

Posts
38


Comments
157

Thanks!

should this be "does make sense"?

No

Answer by finm

Great question, maybe someone should set up some kind of moral trade ‘exchange’ platform.

Quick related PSA in case it's not obvious: typically, donation swapping for political campaigns (i.e. those with individual contribution caps) is extremely illegal.

This is a stupid analogy! (Traffic accidents aren't very likely.)

Oh, I didn't mean to imply that I think AI takeover risk is on par with traffic accident-risk. I was just illustrating the abstract point that the mere presence of a mission-ending risk doesn't imply spending everything to prevent it. I am guessing you agree with this abstract point (but furthermore think that AI takeover risk is extremely high, and as such we should ~entirely focus on preventing it).

I think Wei Dai's reply articulates my position well:

Maybe I'm splitting hairs, but “x-risk could be high this century as a result of AI” is not the same claim as “x-risk from AI takeover is high this century”, and I read you as making the latter claim (obviously I can't speak for Wei Dai).

No, the correct reply is that dolphins won't run the world because they can't develop technology

That's right, and I do think the dolphin example was too misleading and straw-man-ish. The point I was trying to illustrate, though, is not that there is no way to refute the dolphin theory, but that failing to adequately describe the alternative outcome(s) doesn't especially support the dolphin theory, because trying to accurately describe the future is just generally extremely hard.

No, but they had sound theoretical arguments. I'm saying these are lacking when it comes to why it's possible to align/control/not go extinct from ASI.

Got it. I guess I see things as messier than this — I see people with very high estimates of AI takeover risk advancing arguments, and I see others advancing skeptical counter-arguments (example), and before engaging with these arguments a lot and forming one's own views, I think it's not obvious which sets of arguments are fundamentally unsound.

But it's worse than this, because the only viable solution to avoid takeover is to stop building ASI, in which case the non-takeover work is redundant (we can mostly just hope to luck out with one of the exotic factors).

Makes sense.

Thanks for the comment. I agree that if you think AI takeover is the overwhelmingly most likely outcome from developing ASI, then preventing takeover (including by preventing ASI) should be your strong focus. Some comments, though —

  • Just because failing at alignment undermines ~every other issue doesn't mean that working on alignment is the only or overwhelmingly most important thing.[1] Tractability and likelihood also matter.
  • I'm not sure I buy that things are so stark as “there are no arguments against AI takeover”, see e.g. Katja Grace's post here. I also think there are cases where someone presents you with an argument that superficially drives toward a conclusion that sounds unlikely, and it's legitimate to be skeptical of the conclusion even if you can't spell out exactly where the argument is going wrong (e.g. the two-envelope “paradox”). That's not to say you can justify not engaging with the theoretical arguments whenever you're uncomfortable with where they point, just that humility about deducing bold claims about the future on theoretical grounds cuts both ways.
  • Relatedly, I don't think you need to be able to describe alternative outcomes in detail to reject a prediction about how the world goes. If I tell someone the world will be run by dolphins in the year 2050, and they disagree, I can reply, “oh yeah, well you tell me what the world looks like in 2050”, and their failure to describe their median world in detail doesn't strongly support the dolphin hypothesis.[2]
  • “Default” doesn't necessarily mean “unconditionally likely” IMO. Here I take it to mean something more like “conditioning on no specific response and/or targeted countermeasures”. Though I guess it's baked into the meaning of “default” that it's unconditionally plausible (like, ⩾5%?) — it would be misleading to say “the default outcome from this road trip is that we all die (if we don't steer out of oncoming traffic)”.
  • In theory, one could work on making outcomes from AI takeover less bad, as well as making them less likely (though less clear what this looks like).

Altogether, I think you're coming from a reasonable but different position: that takeover risk from ASI is very high (sounds like 60–99% given ASI?). I agree that kinds of preparedness not focused on avoiding takeover look less important on this view (largely because they matter in fewer worlds). I do think this axis of disagreement might not be as sharp as it seems, though — suppose person A has 60% p(takeover) and person B is on 1%. Assuming the same marginal tractability and neglectedness between takeover and non-takeover work, person A thinks takeover-focused work is 60× more important than person B does, but thinks non-takeover work is only 40/99 ≈ 0.4× as important.
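To make that comparison concrete, here's a quick sketch (using the hypothetical 60%/1% figures from above, and the same equal-tractability assumption):

```python
# Relative importance of takeover-focused vs non-takeover work for two
# people with different p(takeover), assuming equal marginal tractability
# and neglectedness. The probabilities are hypothetical figures for
# illustration, not estimates.

p_a = 0.60  # person A's probability of AI takeover (given ASI)
p_b = 0.01  # person B's probability of AI takeover (given ASI)

# Takeover-focused work matters in proportion to p(takeover).
takeover_ratio = p_a / p_b  # ≈ 60

# Non-takeover work only pays off in worlds without takeover.
non_takeover_ratio = (1 - p_a) / (1 - p_b)  # 0.40 / 0.99 ≈ 0.4

print(f"A vs B, takeover-focused work: {takeover_ratio:.0f}x")
print(f"A vs B, non-takeover work:     {non_takeover_ratio:.2f}x")
```

The asymmetry is the point: a 60-fold disagreement about takeover-focused work translates into only a ~2.5-fold disagreement about non-takeover work.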

  1. ^

    By (stupid) analogy, all the preparations for a wedding would be undermined if the couple got into a traffic accident on the way to the ceremony; this does not justify spending ~all the wedding budget on car safety.

  2. ^

    Again by analogy, there were some superficially plausible arguments in the 1970s or thereabouts that population growth would exceed the world's carrying capacity, and we'd run out of many basic materials, and there would be a kind of system collapse by 2000. The opponents of these arguments were not able to describe the ways that the world could avoid these dire fates in detail (they could not describe the specific tech advances which could raise agricultural productivity, or keep materials prices relatively level, for instance).

Thanks for these comments, Greg, and sorry for taking a while to get round to them.

This is conservative. Why not "GPT-5"? (In which case the 100,000x efficiency gain becomes 10,000,000,000x.)

Of course there's some ambiguity in what “as capable as a human being” means, since present-day LLMs are already superhuman in some domains (like general knowledge), and before AI systems are smarter in every important way than humans, they will be smarter in increasingly many but not all ways. But in the broader context of the piece, we're interested in AI systems which effectively substitute for a human researcher, and I just don't think GPT-5 will be that good. Do you disagree or were we just understanding the claim differently?

See APM section for how misaligned ASI takeover could lead to extinction.

Are you missing a quotation here?

Why is this [capturing more wealth before AI poses meaningful catastrophic risk] likely? Surely we need a Pause to be able to do this?

It's a conditional, so we're not claiming it's more likely than not that AI generates a lot of wealth before reaching very high “catastrophic risk potential” (if ever), but I do think it's plausible. One scenario where this looks likely is the one described by Epoch in this post, where AI services are diffusely integrated into the world economy. I think it would be more likely if we do not see something like a software intelligence explosion (i.e. “takeoff” from automating AI R&D). It would also be made more likely by laws and regulations which successfully restrict dangerous uses of AI.

A coordinated pause might block a lot of the wealth-generating effects of AI, if most of those effects come from frontier models. But a pause (generally or on specific applications/uses) could certainly make the scenario we mention more likely (and even if it didn't, that in itself wouldn't make it a bad idea).

Expect these to be more likely to cause extinction than a good future? (Given Vulnerable World)

Not sure how to operationalise that question. I think most individual new technologies (historically and in the future) will make the world better, and I think the best world we can feasibly get to at the current technology level is much less good than the best world we can get to with sustained tech progress. How likely learning more unknown unknowns is (in general) to cause extinction is partly a function of whether there are “recipes for ruin” hidden in the tech tree, and then how society handles them. So I think I'd prefer “a competent and well-prepared society continues to learn new unknown unknowns (i.e. novel tech or other insights)” over “we indefinitely stop the kind of tech progress/inquiry that could yield unknown unknowns” over “a notably incompetent or poorly-prepared society learns lots of new unknown unknowns all at once”.

If superintelligence is catastrophically misaligned, then it will take over, and the other challenges won’t be relevant.

I expect we agree on this at least in theory, but maybe worth noting explicitly: if you're prioritising between some problems, and one of them completely undermines everything else if you fail on it, it doesn't follow that you should fully prioritise work on that problem. Though I do think the work going into preventing AI takeover is embarrassingly inadequate to the importance of the problem.

It [ensuring that we get helpful superintelligence earlier in time] increases takeover risk(!)

Emphasis here on the “helpful” (with respect to the challenges we list earlier, and a background level of frontier progress). I don't think we should focus efforts on speeding up frontier progress in the broad sense. This appendix to this report discusses the point that speeding up specific AI applications is rarely if ever worthwhile, because it involves speeding up AI progress in general.

We need at least 13 9s of safety for ASI, and the best current alignment techniques aren't even getting 3 9s...

Can you elaborate on this? How are we measuring the reliability of current alignment techniques here? If you roughly know the rate of failure of components of a system, and you can build in redundancy, and isolate failures before they spread, you can get away with any given component failing somewhat regularly. I think if you can confidently estimate the failure rates of different (sub-)components of the system, you're already in a good place, because then you can build AIs the way engineers build and test bridges, airplanes, and nuclear power stations. I don't have an informed view on whether we'll reach that level of confidence in how to model the AIs (which is indeed reason to be pretty freaked out).
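For intuition on why redundancy can, in principle, buy a lot of nines: under a toy model where component failures are independent (a strong assumption, which e.g. a shared alignment flaw would break), the system only fails if every redundant component fails at once:

```python
import math

def nines(p_failure):
    """Convert a failure probability into a count of 'nines' of reliability."""
    return -math.log10(p_failure)

# A component with 3 nines of reliability fails 1 time in 1,000.
p_component = 1e-3

# With k fully redundant, independently failing components, the system
# fails only if all k fail simultaneously.
for k in range(1, 6):
    p_system = p_component ** k
    print(f"{k} redundant component(s): {nines(p_system):.0f} nines")
```

On this (optimistic) model, five independently failing 3-nines components give 15 nines, clearing the 13-nines bar. The catch is that alignment failures across copies of a system are plausibly highly correlated, so the independence assumption is doing all the work.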

This whole section (the whole paper?) assumes that an intelligence explosion is inevitable.

Sure — “assumes that” in the sense of “is conditional on”. I agree that most of the points we raise are less relevant if we don't get an intelligence explosion (as the title suggests). Not “assumes” as in “unconditionally asserts”. We say: “we think an intelligence explosion is more likely than not this century, and may well begin within a decade.” (where “intelligence explosion” is informally understood as a very rapid and sustained increase in the collective capabilities of AI systems). Agree it's not inevitable, and there are levers to pull which influence the chance of an intelligence explosion.

But could also just lead to Mutually Assured AI Malfunction (MAIM).

Is this good or bad, on your view? Seems more stabilising than a regime which favours AI malfunction “first strikes”?

This [bringing forward the start of the intelligence explosion] sounds like a terrible and reckless idea! Because we don’t know exactly where the thresholds are for recursive self-improvement to kick in.

I agree it would be reckless if it accidentally made a software intelligence explosion happen sooner or be more likely! And I think it's a good point that we don't know much about the thresholds for accelerating progress from automating AI R&D. Suggests we should be investing more in setting up relevant measures and monitoring them carefully (+ getting AI developers to report on them).

Yes, unless we stop it happening (and we should!)

See comment above! Probably we disagree on how productive and feasible a pause on frontier development is (just going off the fact that you are working on pushing for it and I am not), but perhaps we should have emphasised more that pausing is an option.

Problem is knowing that by the time the “if” is verified to have occurred, it could well be too late to do the “then” (e.g. once a proto-ASI has already escaped onto the internet).

I think we're operating with different pictures in our head here. I agree that naive “if-then” policies could easily kick in too late to prevent deceptively aligned AI doing some kind of takeover (in particular because the deceptively aligned AI could know about and try to avoid triggering the “if” condition). But most “if-then” policies I am imagining are not squarely focused on avoiding AI takeover (nor is most of the piece).

Need a moratorium now, not unworkable “if-then” commitments!

It's not clear to me that if-then policies are less “workable” than a blanket moratorium on frontier AI development, in terms of the feasibility of implementing them. I guess you could be very pessimistic about whether any if-then commitments would at all help, which it sounds like you are.

This [challenges downstream of ASI] is assuming ASI is alignable! (The whole Not just misalignment section is).

Again, it's true that we'd only face most of the challenges we list if we avoid full-blown AI takeover, but we're not asserting with full confidence that ASI is alignable. I agree that if you are extremely confident that ASI is not alignable, then all these downstream issues matter less. I currently think it's more likely than not that we avoid full-blown AI takeover, which makes me think it's worth considering downstream issues.

Thanks again for your comments!

at the sharp end of the intelligence explosion it will be able to do subjective decades of R&D before the second mover gets off the ground, even if the second mover is only hours behind

Where are you getting those numbers from? If by “subjective decades” you mean “decades of work by one smart human researcher”, then I don't think that's enough to secure its position as a singleton.

If you mean “decades of global progress at the global tech frontier”, then that means imagining the first mover can fit ~100 million human research-years into a few hours, shortly after (presumably) pulling away from the second mover in a software intelligence explosion, and I'm skeptical of that (for reasons I'm happy to elaborate on).

Do you think octopuses are conscious? I do — they seem smarter than chickens, for instance. But their most recent common ancestor with vertebrates was some kind of simple Precambrian worm with a very basic nervous system.

Either that most recent ancestor was not phenomenally conscious in the sense we have in mind, in which case consciousness arose more than once in the tree of life. Or else it was conscious, in which case consciousness would seem easy to reproduce (wire together some ~1,000 nerves).

The main question of the debate week is: “On the margin, it is better to work on reducing the chance of our extinction than increasing the value of the future where we survive”.

Where “our” is defined in a footnote as “earth-originating intelligent life (i.e. we aren’t just talking about humans because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing)”.

I'm interested to hear from the participants how likely they think extinction of “earth-originating intelligent life” really is this century. Note this is not the same as asking what your p(doom) is, or what likelihood you assign to existential catastrophe this century.

My own take is that literal extinction of intelligent life, as defined, is (much) less than 1% likely to happen this century, and this upper-bounds the overall scale of the “literal extinction” problem (in ITN terms). I think this partly because the definition counts AI survival as non-extinction, and I truly struggle to think of AI-induced catastrophes leaving only charred ruins, without even AI survivors. Other potential causes of extinction, like asteroid impacts, seem unlikely on their own terms. As such, I also suspect that most work directed at existential risk is just already not in practice targeting extinction as defined, though of course it is also not explicitly focusing on “better futures” instead — more like “avoiding potentially terrible global outcomes”.

(This became more a comment than a question… my question is: “thoughts?”)

Nice! Consolidating some comments I had on a draft of this piece, many of them fairly pedantic:

  • Why would value be distributed over some suitable measure of world-states in a way that can be described as a power law specifically (vs some other functional form where the most valuable states are rare)? In particular, shouldn't we think that there is a most valuable world state (or states)? So at least we need to say it's a power law distribution with a max value.
  • "Then there is a powerful argument that the expected value of the future is very low."
    • Very low as a fraction of the best futures, but not any lower relative to (e.g.) the world today, or the EV of the future. Indeed the future could be amazing by all existing measures. One framing on what you are saying is that it could be even better than we think, which is not a pessimistic result!
      • Another framing is more like “it's easier than we thought to make amazing-seeming worlds far less valuable than they seem, by making mistakes like e.g. ignoring animal farming”. That is indeed bad news.
    • And of course the decision-relevance of MPL depends on the feasibility of the best futures, not just how they're distributed.
  • One possibility that would support MPL is the possibility that value scales superlinearly with the amount of "optimised matter" — e.g. with brain size. The task of a risk-neutral classical utilitarian can then effectively be boiled down to "maximising the chance of getting ~the most optimized state possible", as long as "~the most optimized state possible" is at all feasible.
  • "If you value pretty much anything (e.g. consciousness, desire satisfaction), there’s likely to be a sharp line in phase space where a tiny change to the property makes an all-or-nothing difference to value." — this is true, but that the [arrangements of matter] → [value] function has discontinuities doesn't imply that the very most valuable states are extremely rare and far more valuable than all the others. So I think it's weak evidence for MPL.
  • Some distinctions which occur to me below. Assuming some measure over states:
    • EV of the world at a state, vs value of a state itself (where EV cares about future states)
      • Note that the [state]→[EV] function should be discontinuous, because the evolution of states over time is discontinuous, because that's how the world works! E.g. in some cases you should evaluate a big difference in EV just by changing a vote counter after an election by one.
    • Fragility of value/EV, something like how much value tends to change with small changes in space of states
    • Rarity of value, something like what fraction of all states are >50% as valuable as the most valuable state(s)
    • Unity of value, something like whether all the most valuable states are clumped together in state space, or whether the 'peaks' of the landscape are far apart and separated by valleys of zero or negative value
  • I think it's a true and important point that people currently converge on states they agree are high EV, because the option space is limited, and most goods we value are still scarce instrumental goods — but when the option space grows, the latent disagreement becomes more important.
  • "I don’t know if my folk history is accurate, but my understanding is that early religions and cultures had a lot in common with each other"
    • I guess it depends on how you interpret ~indexical beliefs like "I want my group to win and the group over the hill to lose" — both sides can think that same thing, but might hate any compromise solution.
    • I think this is a reason for pessimism about different values agreeing on the same states, and ∴ supportive of MPL.
  • Re brains, there are some (weak) reasons to expect finite optimal sizes, like speed of light. A 'Jupiter brain' is not very different from many smaller brains with high bandwidth (but laggy) communication.
  • I doubt how rare near-best futures are among desired futures is a strong guide to the expected value of the future. At least, you need to know more about e.g. the feasibility of near-best futures; whether deliberative processes and scientific progress converge on an understanding of which futures are near-best, etc.
    • There is an analogous argument which says: "most goals in the space of goals are bad and lead to AI scheming; AI will ~randomly initialise on a goal; so AI will probably scheme". But obviously whenever we make things by design (like cars or whatever), we are creating things which are astronomically unlikely configurations in the "space of ways to organise matter". And the likelihood that humans build cars just doesn't have much to do with what fraction of matter state space they occupy. It's just not an illuminating frame. The more interesting stuff is "will humans choose to make them", and "how easy are they to make". (I think Ben Garfinkel has made roughly this point, as has Joe Carlsmith more recently.)
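As a toy illustration of the first bullet's question, here's a quick simulation (all parameters made up purely for illustration) of how a heavy-tailed power law concentrates value in rare states, which is exactly what truncating at a finite maximum value would soften:

```python
import random

random.seed(0)

# Toy model: the value of a random world-state follows a power law
# (Pareto distribution). The tail exponent is made up for illustration.
alpha = 1.1  # closer to 1 means the rare top states hold more of the value
n_states = 100_000
values = sorted((random.paretovariate(alpha) for _ in range(n_states)),
                reverse=True)

# What fraction of total value sits in the top 1% of states?
top_share = sum(values[:n_states // 100]) / sum(values)
print(f"Share of total value held by the top 1% of states: {top_share:.0%}")
```

With a tail this heavy, the top 1% of states typically hold a large share of total value in the simulation; imposing a maximum value (as the first bullet suggests we should) pulls that share down, which is part of why "power law, full stop" is a strong assumption.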
finm · 64% disagree

Partly this is because I think “extinction” as defined here is very unlikely (<<1%) to happen this century, which upper bounds the scale of the area. I think most “existential risk” work is not squarely targeted at avoiding literal extinction of all Earth-originating life.
