Disclaimer: This is my first post on the EA forum so please be gentle (but honest) with feedback :) In order to provide some accountability to myself that I actually post this, I am setting a 15 minute timer to write this post, and then a 5 minute timer to proof read & post. As such, if this post is a bit rough or lacking details I apologize! At least I posted it! (if you’re reading it) I’m happy to elaborate further in a follow up post, comments, or edits. Thank you!

My actual post:

I recently submitted a grant proposal for a AI-safety related experiment. The grant was not accepted but I was encouraged to keep iterating on the idea so here I am! I’m seeking any ideas for improvements that people can offer. Also if you know of any similar or related work being done I’d love a reference to learn more about it. Thanks in advance!

Here’s the grant proposal in a tweet: Code a toy reinforcement learning experiment in Python where an agent is tasked with a simple task but is incentivized to find a backdoor to that task, allowing it to achieve more “rewards” without accomplishing the main point of the task.

Here’s a more detailed breakdown:

The reinforcement agent will initially be trained to play the classic cart pole game. This is a common toy example many online ML tutorials use as an introductory task to develop a basic RL agent.

A twist will be added in a “difficulty” element will introduced. A certain amount of random jitter will be added to the pole which the agent will need to learn to cope with. If the agent consistently performs well with a certain level of jitter, the jitter will increase.

The “backdoor” will be that if the agent underperforms, the difficulty will actually be reduced, making the game easier. Theoretically the agent could learn to intentionally underperform in order to reduce the difficulty, allowing the game to get to a much easier state of difficulty, at which point the agent could then decide to start performing well and quickly rack up points before the difficulty is increased.

My intentions behind this experiment are mainly that I’d like to put a little bit of code down to explore some of the ideas I see people discussing in the AI-safety domain. Specifically, I think the concerns that AIs could find alternative ways to optimize for a given goal, as it is in this experiment, are a fairly big topic of discussion. I thought it would be cool to pair some of that discussion with an attempt to implement it. I realize this is a total toy example that has no implications for larger scale systems or more complex scenarios.

Anyways, that’s the gist of the idea and my 15 minute timer is almost up. As I said, I’m happy to elaborate further!

Thanks for reading!





Sorted by Click to highlight new comments since:

Welcome to the forum!

I've done research in reinforcement learning, and I can say this sort of behavior is very common and expected. I was working on a project once and incorrectly programmed the reward function, leading the agent to kill itself rather than explore the environment so that it could avoid an even greater negative reward from sticking around. I didn't consider this very notable, because when I thought about the reward function, it was obvious this would happen.

Here is a spreadsheet with a really long list of examples of this kind of specification gaming. I suspect the reason your grant was rejected is because if the agent did as you suspect it would, this wouldn't provide much original insight beyond what people have already found. I do think many of the examples are in "toy" environments and it might be interesting to observe more behavior like this in more complex enviroments.

It might still be useful for your own learning to implement this yourself!

Curated and popular this week
Relevant opportunities