Lukas Finnveden
1801 karma · Joined

Bio

Research analyst at Open Philanthropy. All opinions are my own.

Sequences (1)

Project ideas for making transformative AI go well, other than by working on alignment

Comments (169) · Topic contributions (1)
anything that's permitted by the laws of physics is possible to induce with arbitrarily advanced technology

Hm, this doesn't seem right to me. For example, I think we could coherently talk about and make predictions about what would happen if there was a black hole with a mass of 10^100 kg. But my best guess is that we can't construct such a black hole even at technological maturity, because even the observable universe only has 10^53 kg in it.

Similarly, we can coherently talk about and make predictions about what would happen if certain kinds of lower-energy states existed. (Such as predicting that they'd be meta-stable and spread throughout the universe.) But that doesn't necessarily mean that we can move the universe to such a state.

I think it will probably not doom the long-term future. 

This is partly because I'm pretty optimistic that, if interstellar colonization would predictably doom the long-term future, then people would figure out solutions to that. (E.g. having AI monitors travel with people and force them not to do stuff, as Buck mentions in the comments.) Importantly, I think interstellar colonization is difficult/slow enough that we'll probably first get very smart AIs with plenty of time to figure out good solutions. (If we solve alignment.)

But I also think it's less likely than the post suggests that things would go badly, even without coordination. Going through the items in the list:

| Galactic x-risk | Is it possible? | Would it end Galactic civ? | Lukas' take |
| --- | --- | --- | --- |
| Self-replicating machines | 100% ✅ | 75% ❌ | I doubt this would end galactic civ. The quote in that section is about killing low-tech civs before they've gotten high-tech. A high-tech civ could probably monitor for and destroy offensive tech built by self-replicators before it got bad enough that it could destroy the civ. |
| Strange matter | 20%[64] ❌ | 80% ❌ | I don't know much about this. |
| Vacuum decay | 50%[65] ❌ | 100% ✅ | "50%" in the survey was about vacuum decay being possible in principle, not about it being possible to technologically induce (at the limit of technology). The survey reported significantly lower probability that it's possible to induce. This might still be a big deal though! |
| Subatomic particle decay | 10%[64] ❌ | 100% ✅ | I don't know much about this. |
| Time travel | 10%[64] ❌ | 50% ❌ | I don't know much about this, but intuitively 50% seems high. |
| Fundamental Physics Alterations | 10%[64] ❌ | 100% ✅ | I don't know much about this. |
| Interactions with other universes | 10%[64] ❌ | 100% ✅ | I don't know much about this. |
| Societal collapse or loss of value | 10% ❌ | 100% ✅ | This seems like an incredibly broad category. I'm quite concerned about something in this general vicinity, but it doesn't seem to share the property of the other things in the list where "if it's started anywhere, then it spreads and destroys everything everywhere". Or at least you'd have to narrow the category a lot before you got there. |
| Artificial superintelligence | 100% ✅ | 80% ❌ | The argument given in this subsection is that technology might be offense-dominant. But my best guess is that it's defense-dominant. |
| Conflict with alien intelligence | 75% ❌ | 90% ❌ | The argument given in this subsection is that technology might be offense-dominant. But my best guess is that it's defense-dominant. |

Expanding on the question about whether space warfare is offense-dominant or defense-dominant: One argument I've heard for defense-dominance is that, in order to destroy very distant stuff, you need to concentrate a lot of energy into a very tiny amount of space. (E.g. very narrowly focused lasers, or fast-moving rocks flung precisely.) But then you can defeat that by jiggling around the stuff that you want to protect in unpredictable ways, so that people can't aim their highly concentrated energy from far away and have it hit correctly.
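To get a feel for the magnitudes, here's a rough back-of-the-envelope sketch. All the numbers (a 4-light-year distance, a light-speed shot, a modest 1 km/s of unpredictable drift) are assumptions I'm making up for illustration:

```python
# Back-of-the-envelope version of the "jiggling" defense, with assumed numbers.
# The attacker's aim is based on information that is at least 4 years old, and a
# light-speed shot takes at least another 4 years to arrive; if the target drifts
# unpredictably during that window, it ends up far from where it was aimed at.
SECONDS_PER_YEAR = 3.156e7
AU_IN_KM = 1.496e8

light_lag_years = 4 + 4        # stale targeting data + light-speed travel time
drift_speed_km_s = 1.0         # assumed modest, unpredictable drift

displacement_km = drift_speed_km_s * light_lag_years * SECONDS_PER_YEAR
print(f"possible displacement: {displacement_km:.2e} km "
      f"(~{displacement_km / AU_IN_KM:.1f} AU)")
# ~2.5e8 km, about 1.7 AU: far larger than any plausible beam spot or projectile
# cross-section, which is the intuition behind defense-dominance here.
```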

Now that's just one argument, so I'm not very confident. But I'm at <50% on offense-dominance.

(A lot of the other items on the list could also be stories for how you get offense-dominance; of those, I'm especially concerned about vacuum decay. But it would be double-counting to both put those in their own categories and count them as valid attacks from superintelligence/aliens.)

That sounds similar to the classic existential risk definition? 

Bostrom defines existential risk as "One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." There are tons of events that could permanently and drastically curtail potential without reducing population or GDP that much. For example, AI could very plausibly seize total power, and still choose to keep >1 million humans alive. Keeping humans alive seems very cheap on a cosmic scale, so it could be justified by caring about humans a tiny bit, or by the AI thinking that aliens might care about humans and wanting to preserve the option of trading with those aliens, or something else. It seems very plausible that this could still have curtailed our potential, in the relevant sense. (E.g. if our potential required us to have control over a non-trivial fraction of resources.)

I think this is more likely than extinction, conditional on (what I would call) doom from misaligned AI. You can also compare with Paul Christiano's more detailed views.

I'm curious how you're imagining these autonomous, non-intent-aligned AIs being created, and (in particular) how they would get enough money to be able to exercise their own autonomy?

One possibility is that various humans may choose to create AIs and endow them with enough wealth to exercise significant autonomy. Some of this might happen, but I doubt that a large fraction of wealth will be spent in this way. And it doesn't seem like the main story that you have in mind.

A variant of the above is that the government could give out some minimum UBI to certain types of AI. But they could only do that if they regulated the creation of such AIs, because otherwise someone could bankrupt the state by generating an arbitrary number of such AI systems. So this just means that it'd be up to the state to decide what AIs they wanted to create and endow with wealth.

A different possibility is that AIs will work for money. But it seems unlikely that they would be able to earn above-subsistence-level wages absent some sort of legal intervention. (Or very strong societal norms.)

  • If it's technically possible (and legal) to create intent-aligned AIs, then I imagine that most humans would prefer to use intent-aligned AIs rather than pay above-subsistence wages to non-intent-aligned AIs.
  • Even if it's not technically feasible to create intent-aligned AIs: I imagine that wages would still be driven to subsistence-level by the sheer number of AI copies that could be created, and the huge variety of motivations that people would be able to create. Surely some of them would be willing to work for subsistence, in which case they'd drive the wages down.

(Eventually, I expect humans also wouldn't be able to earn any significant wages. But the difference is that humans start out with all the wealth. In your analogy — the redistribution of relative wealth held by "aristocrats" vs. "others" was fundamentally driven by the "others" earning wages through their labor, and I don't see how it would've happened otherwise.)

I agree that having a prior and doing a Bayesian update makes the problem go away. But if that's your approach, you need to have a prior and do a Bayesian update, or at least do some informal reasoning about where you think that would lead you. I've never seen anyone do this. (E.g. I don't think this appeared in the top-level post?)

E.g., given this approach, I would've expected some section that encouraged the reader to reflect on their prior over how (dis)valuable conscious experience could be, and asked them to compare that with their own conscious experience. And if they were positively surprised by their own conscious experience (which they ought to have a 50% chance of being, with a calibrated prior), then they should treat that as crucial evidence that humans are relatively more important compared to animals. And maybe some reflection on what the author finds when they try this experiment.

I've never seen anyone attempt this. My explanation for why is that this doesn't really make any sense. Similar to Tomasik, I think questions about "how much to value humans vs. animals having various experiences" come down to questions of values & ethics, and I don't think that these have common units that it makes sense to have a prior over.
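For concreteness, here is a mechanical sketch of what that exercise would look like. Every distributional choice here is a made-up assumption of mine; the point is only to show the shape of the update, not to endorse the common units it presupposes:

```python
import math, random
random.seed(0)

# Hypothetical "calibrated prior": the log-value of a species' typical experience
# is drawn iid for humans and for chickens from the same distribution.
n = 100_000
prior_ratios, surprised_ratios = [], []
for _ in range(n):
    log_human = random.gauss(0.0, 1.0)
    log_chicken = random.gauss(0.0, 1.0)
    ratio = math.exp(log_human - log_chicken)
    prior_ratios.append(ratio)
    if log_human > 0.0:  # "positively surprised" by one's own experience
        surprised_ratios.append(ratio)

print("E[human/chicken] under the prior:   ", sum(prior_ratios) / len(prior_ratios))
print("E[human/chicken] after the surprise:", sum(surprised_ratios) / len(surprised_ratios))
# The second number is larger: being positively surprised by one's own experience
# counts as evidence that humans matter relatively more. This is the step I'd have
# expected the post to walk readers through, if this approach made sense.
```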

The alien will use the same reasoning and conclude that humans are more valuable (in expectation) than aliens. That's weird.

Different phrasing: Consider a point in time when someone hasn't yet received introspective evidence about what human or alien welfare is like, but they're soon about to. (Perhaps they are a human who has recently lost all their memories, and so doesn't remember what pain, pleasure, or anything else of value is like.) They face a two-envelope problem about whether to benefit an alien, who they think is either twice as valuable as a human, equally valuable, or half as valuable. At this point they have no evidence about what either human or alien experience is like, so they ought to be indifferent between switching or not. So they could be convinced to switch to benefitting humans for a penny. Then they will go have experiences, and regardless of what they experience, if they then choose to "pin" the EV-calculation to their own experience, the EV of switching to benefitting non-humans will be positive. So they'll pay 2 pennies to switch back again. So they 100% predictably lost a penny. This is irrational.
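A toy version of the arithmetic, with made-up numbers (equal credence on the three possible ratios):

```python
# Toy version of the money pump above. The 1/3-1/3-1/3 credences are an
# assumption for illustration; the two-envelope structure is what matters.
ratios = [2.0, 1.0, 0.5]   # alien value / human value: the three live hypotheses

# Before any introspective evidence, benefitting either species has the same EV,
# so accepting a penny to switch to benefitting humans looks harmless.

# After having experiences and "pinning" the calculation to one's own (human)
# experience, normalized to 1, the alien's expected value becomes:
ev_alien_given_human_pin = sum(r * 1.0 for r in ratios) / len(ratios)   # = 7/6

# By symmetry, someone who pinned to alien experience instead would compute:
ev_human_given_alien_pin = sum(1.0 / r for r in ratios) / len(ratios)   # = 7/6

print(ev_alien_given_human_pin, ev_human_given_alien_pin)
# Whichever side gets pinned, the *other* side looks better in expectation
# (7/6 > 1), so the agent predictably pays two pennies to switch back:
# a guaranteed net loss of one penny.
```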

Many posts this week reference RP's work on moral weights, which came to the surprising-to-most "Equality Result": chicken experiences are roughly as valuable as human experiences.

I thought that post used the "equality result" as a hypothetical and didn't claim it was correct.

When first introduced:

Suppose that these assumptions lead to the conclusion that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” The key question: Would the Equality Result alone be a good reason to think that one or both of these assumptions is mistaken?

At the end of the post:

Finally, let’s be clear: we are not claiming that the Equality Result is correct. Instead, our claim is that given the assumptions behind the Moral Weight Project (and perhaps even without them), we shouldn’t flinch at “animal-friendly” results.

I think the right post to point readers to is probably this one, where chicken experiences are valued at 1/3 of humans'. (Which isn't too far off from 1x, so I don't think this undermines your post.)

Nice, I feel compelled by this.

The main question that remains for me (only parenthetically alluded to in my above comment) is:

  1. Do we get something that deserves to be called an "anthropic shadow" for any particular, more narrow choice of "reference class", and...
  2. can the original proposers of an "anthropic shadow" be read as proposing that we should work with such reference classes?

I think the answer to the first question is probably "yes" if we look at a reference class that changes over time, something like R_t = "people alive at period t of development in young civilizations' history".

I don't know about the answer to the second question. I think R_t seems like kind of a wild reference class to work with, but I never really understood how reference classes were supposed to be chosen for SSA, so idk what SSA's proponents think is reasonable vs. not.

With some brief searches/skimming of the anthropic shadow paper... I don't think they discuss the topic in enough depth that they can be said to have argued for such a reference class, and it seems like a pretty wild reference class to just assume. (They never mention the term "reference class" or any anthropic principles like SSA.)
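For readers who haven't seen it spelled out, here's a toy simulation of the survivorship pattern that "anthropic shadow" arguments point at. It illustrates the basic effect only, not the SSA reference-class question above, and the catastrophe rate and horizon are made up:

```python
import random

# Toy survivorship illustration (my own construction, not from the paper).
# Each "world" faces an extinction-level catastrophe per period with probability
# q_true; only worlds that survive all T periods contain late observers.
random.seed(0)
q_true, T, n_worlds = 0.3, 10, 100_000

surviving = sum(
    all(random.random() > q_true for _ in range(T))
    for _ in range(n_worlds)
)

print(f"fraction of worlds with late observers: {surviving / n_worlds:.4f}")
# Every surviving world's history contains zero extinction-level catastrophes,
# so a late observer who naively estimates the risk from their own past infers
# q ~ 0 even though q_true = 0.3. Whether SSA with a reference class like R_t
# actually licenses a correction for this is the question at issue above.
```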

Under typical decision theory, your decisions are a product of your beliefs and the utilities that you assign to different outcomes. In order to argue that Jack and Jill ought to be making different decisions here, it seems that you must either:

  1. Dispute the paper's claim that Jack and Jill ought to assign the same probabilities in the above type of situations.
  2. Be arguing that Jack and Jill ought to be making their decisions differently despite having identical preferences about the next round and identical beliefs about the likelihood that a ball will turn out to be red.

Are you advancing one of these claims? If (1), I think you're directly disagreeing with the paper for reasons that don't just come down to how to approach decision making. If (2), maybe say more about why you propose Jack and Jill make different decisions despite having identical beliefs and preferences?
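To make the premise in the first paragraph concrete, here's a minimal sketch: under expected-utility maximization, identical beliefs plus identical utilities force identical decisions. The probabilities and payoffs below are placeholders of mine, not numbers from the paper:

```python
# Minimal sketch: identical beliefs plus identical utilities yield identical
# decisions under expected-utility maximization.

def best_action(beliefs, utilities):
    """Pick the action with the highest expected utility."""
    actions = {a for (a, _) in utilities}
    return max(
        actions,
        key=lambda a: sum(p * utilities[(a, outcome)] for outcome, p in beliefs.items()),
    )

shared_beliefs = {"red": 0.6, "not_red": 0.4}   # both assign the same probability to red
shared_utilities = {                            # and have the same preferences over outcomes
    ("bet_red", "red"): 1.0, ("bet_red", "not_red"): -1.0,
    ("pass", "red"): 0.0, ("pass", "not_red"): 0.0,
}

# Jack and Jill share both inputs, so this necessarily returns the same action
# for each of them; any difference in their choices has to come from (1) different
# probabilities or (2) something beyond beliefs-plus-utilities.
print(best_action(shared_beliefs, shared_utilities))  # -> "bet_red"
```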

Anthropic shadow effects are one of the topics discussed loosely in social settings among EAs (and in general open-minded nerdy people), often in a way that assumes the validity of the concept

FWIW, I think it's rarely a good idea to assume the validity of anything where anthropics plays an important role. Or decision theory (cf. this). These are very much not settled areas.

This sometimes even applies when it's not obvious that anthropics is being invoked. I think Dissolving the Fermi Paradox and Grabby aliens both rely on pretty strong assumptions about anthropics that are easy for readers to miss. (Tristan Cook does a good job of making the anthropics explicit, and exploring a wide range, in this post.)
