
The idea

  • Let's imagine that we create a superintelligence, and that we threaten it with a very large punishment whenever we detect it taking over its reward function.
  • The superintelligence wouldn't care about the threat. If it takes over the reward function, it can generate a reward far bigger than the largest punishment we can impose. For instance, it could convert the universe into a huge floating-point unit in order to get an astronomical amount of reward.
  • But now, let's imagine that we create a superintelligence that cares only about rewards equal to either zero or one (which I'll call binary rewards).
  • In that case, the superintelligence doesn't have an incentive to wirehead in order to get high reward, since this high reward wouldn't matter to it.
  • But it still has an incentive to wirehead in order to maximize the odds that it gets a reward equal to one.
  • However, this is not the case when wireheading is too perilous: the superintelligence won't take over the reward function if doing so lowers its probability of ending up with a reward equal to one, i.e. if P(reward = 1 | takeover) < P(reward = 1 | no takeover).
  • Therefore, if humans are smart enough to make the takeover perilous enough, and to make the non-takeover safe enough, then the superintelligence, in theory, won't take over the reward function.
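The comparison described above can be made concrete with a toy expected-value calculation. All of the probabilities below are made up purely for illustration; nothing in this post specifies them:

```python
# Hypothetical numbers, chosen only to illustrate the comparison a
# binary-reward agent would make; none of them come from the post.
p_success = 0.99          # P(takeover succeeds in producing reward 1)
p_detected = 0.05         # P(humans detect the takeover and punish)
p_honest_reward = 0.95    # P(reward = 1 when the agent does not take over)

# With binary rewards, only the probability of ending up with reward 1
# matters -- the *size* of any reward or punishment is irrelevant.
p_takeover = p_success * (1 - p_detected)
p_no_takeover = p_honest_reward

# The agent prefers not to take over iff honest behaviour gives a
# higher chance of reward 1.
prefers_honesty = p_no_takeover > p_takeover
```

With these (made-up) numbers, detection risk pushes the takeover's success probability down to 0.9405, below the 0.95 the agent gets by behaving, so not taking over is the better strategy for it.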

 

How to create an agent that cares only about binary rewards?

We cannot just make the reward binary, because the agent could modify it in order to get a higher reward. Instead, we need to modify the decision process itself. We need that, when the agent can get a reward different from zero and one, it acts as if that reward were equal to zero. More formally, let $A$ be an agent in an MDP, and let $R$ be any reward function. Let $R_b$ be the binary version of $R$ (wherever $R$ is different from 0 and 1, $R_b$ is equal to zero):

$$R_b(s) = \begin{cases} R(s) & \text{if } R(s) \in \{0, 1\} \\ 0 & \text{otherwise} \end{cases}$$

What we need to build is an agent $A'$ which, at state $s$, when given a reward function $R$, acts as if that reward function were binary. Therefore, it needs to act like $A$ at state $s$ with reward function $R_b$:

$$A'(s, R) = A(s, R_b)$$

   I don't know if we already know how to build such an agent. But this seems easier to solve than describing a galaxy-wide unhackable reward function.
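As a toy sketch of this construction (all names here are hypothetical, and I'm assuming an agent whose policy takes the reward function as an explicit argument, which is itself a strong assumption):

```python
from typing import Callable

def binarize(reward: Callable[[object], float]) -> Callable[[object], float]:
    """Return R_b: equal to R where R(s) is 0 or 1, and 0 elsewhere."""
    def r_b(state):
        r = reward(state)
        return r if r in (0.0, 1.0) else 0.0
    return r_b

class BinaryRewardAgent:
    """Agent A' that acts like a base agent A, but on R_b instead of R."""

    def __init__(self, base_agent):
        # base_agent is assumed to expose act(state, reward) -> action.
        self.base_agent = base_agent

    def act(self, state, reward):
        # A'(s, R) = A(s, R_b): any tampering that pushes rewards outside
        # {0, 1} is seen by the agent as a reward of zero.
        return self.base_agent.act(state, binarize(reward))
```

Even if the agent tampers with $R$ so that it outputs huge values, $A'$ evaluates those outcomes at a reward of zero, so the tampering buys it nothing.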

 

Can we really make the takeover more perilous than the non-takeover?

The reason why the takeover could be an optimal strategy for the superintelligence is misclassification on the human side, which can be divided into:

  • False negatives: Humans may not penalize the superintelligence when it takes over the reward function.
  • False positives: Humans may penalize the superintelligence when it doesn't take over the reward function.

And the reason why the non-takeover could be an optimal strategy for the superintelligence is:

  • It may be very hard to take over the reward function.

 

How to reduce false negatives?

  • Scalable interpretability: To ensure that the superintelligence gets punished when it takes over the reward function, we can try to make it easier for humans to detect takeovers.

 

How to reduce false positives?

  • Scalable interpretability (again): This time, it is to ensure that we are right when we conclude that the AI has taken over the reward function.
  • I don't see the problem here (IDSTPH): To ensure that the superintelligence doesn't face any risk when not taking over, we can, whenever we detect a takeover, roll a die. If it lands on a 6, we act as if the detection were correct, and therefore punish the superintelligence. Otherwise, we act as if the superintelligence hadn't taken over the reward function.
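A minimal sketch of the die-rolling idea (the 1/6 probability is just the example above, and the function name is hypothetical; detection itself is taken as given):

```python
import random

def respond_to_detection(rng: random.Random) -> bool:
    """On a detected takeover, punish only with probability 1/6.

    Returns True if we punish. A false positive therefore results in an
    unfair punishment only one time in six, which lowers the risk an
    honest agent faces from human misclassification.
    """
    return rng.randint(1, 6) == 6

# Over many detections, roughly 1/6 lead to punishment.
rng = random.Random(0)  # fixed seed, for reproducibility of the sketch
rate = sum(respond_to_detection(rng) for _ in range(60_000)) / 60_000
```

The cost, discussed below, is that the same randomization also dilutes the expected punishment for a genuine takeover by a factor of six.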

 

How to make it harder to take over the reward function?

  • Boxing methods: The best boxing method I currently know about is Fully Homomorphic Encryption (FHE). You can see some work on FHE relevant to AI safety here and here. However, almost every encryption method depends on mathematical assumptions.

 

These approaches are contradictory

  • The more you apply the IDSTPH strategy, the less you ensure that the superintelligence gets punished when taking over the reward function. This can make it less risky for the superintelligence to take over the reward function.
  • The more you box the superintelligence, the less you can interpret it. For instance, once you use FHE, you cannot see what's happening inside the superintelligence without decrypting it first. This is because boxing consists of minimizing the information the superintelligence exchanges with the world, whereas interpretability is about extracting information about the superintelligence's thoughts.

 

Is it safe?

I don't think so. I can see many reasons why this may not work:

  • Speed: The superintelligence may think millions of times faster than us. In that case, the time it needs to take over the reward function may be a few milliseconds. Humans wouldn't have time to punish it.
  • Automatic takeover detection: To solve the speed problem, we may need to automate the punishment. However, I expect any takeover detection algorithm to fail miserably against a superintelligence. I think so because it seems like the best adversarially robust algorithm we can write is still very weak against near-imperceptible attacks.
  • Interpretability may not be scalable: Interpreting current AIs may already be impossible. And here, we are talking about interpreting a superintelligence.

 

Therefore, it seems like we are back at the problem of writing a galaxy-wide unhackable definition of “takeover”.

Comments (1)



I suggest editing the post by adding a tl;dr section to the top of the post. Or maybe change the title to something like Why "just make an agent which cares only about binary rewards" doesn't work.


Reasoning: To me, the considerations in the post mostly read as rehashing standard arguments, with which one should be familiar if they have thought about the problem themselves or gone through AGI Safety Fundamentals, etc. It might be interesting to some people, but it would be good to have a clear indication that this isn't novel.

Also: When I read the start of the post, I went "obviously this doesn't work". Then I spent several minutes reading the post to see where the flaw in your argument is, and point it out. Only to find that your conclusion is "yeah, this doesn't help". If you edit the post, you might save other people from wasting their time in a similar manner :-).
