
We’re starting a new reading group for people interested in applying mechanism design tools to technical AI alignment. If you’re interested in joining, you can apply here by the end of the day Berkeley time on August 22nd (applying takes less than five minutes). If you have recommendations for papers to discuss, please mention them in the comments.

What We’re Doing

Mechanism design is the study of how to reach desirable outcomes or equilibria in the face of differing incentives and incomplete information. Many AI safety researchers have expressed enthusiasm about the potential of these tools for alignment work, but relatively little work has been done at the intersection. We believe this is partially due to a shortage of researchers with expertise in both technical AI safety and mechanism design, and partially due to a lack of shovel-ready problems. The goal of this reading group is to make progress on both fronts.

There are three main areas that this reading group will cover:

  1. Work at the intersection of technical AI safety and mechanism design, and where it can be expanded
  2. Current work in technical AI safety, and how mechanism design tools can help
  3. Current work on mechanism design, and how it can be applied to technical AI safety

The plan is to start with papers in the intersection, then alternate between papers on technical AI safety and papers on mechanism design while keeping the broader perspective in mind. Note that although we believe AI governance work is important and contains many applications for mechanism design, that will not be the focus of this reading group.

Who We Are and Who We Want

I (Rubi) am entering the second year of an Economics PhD this fall, and am currently working on technical AI safety in Berkeley through the SERI MATS program. Other likely participants include three Economics PhD students at top schools and a Math undergraduate currently taking part in the SERI Summer Research Fellowship. Our hope for this reading group is to connect with people who have similar interests and create the potential for future collaborations.

Based on current expressions of interest, we expect the modal participant in the reading group to be a PhD student in economics, focusing on economic theory, who has read through the AGI Safety Fundamentals curriculum (or an equivalent, such as Eleuther's). If that sounds like you, definitely apply! However, these should not be considered necessary qualifications. Talented undergraduates with an interest in both areas or experts in one area who would like to learn more about the other should also apply. 

If you’re unsure whether you have the background necessary to keep up with this reading group, a good test is to try skimming The Off-Switch Game. It’s a short paper, and on the more accessible end of the papers we will be discussing. If you understand it, or predict you could understand it within an hour, then you should be able to handle the papers we will discuss without too much additional work. If you find yourself struggling with the mathematical notation and proofs, that is likely to be a bottleneck, and you should consider building your comfort with those tools first.

Participants will be expected to commit approximately eight hours a week to this reading group: five to seven hours reading the week’s paper and an hour and a half discussing it. If it becomes apparent that a participant is repeatedly not reading or only skimming the papers, they will be removed from the reading group. Please ensure that you can dedicate the required time before applying.

Logistics

The application form can be found here. The only mandatory fields are a link/upload of your CV and confirmation that you are willing to make the necessary time commitment, although there are also optional fields if you would like to elaborate on your background in either mechanism design or technical AI safety. 

Applications will close on Monday August 22nd at 11:59pm PDT, and acceptances will be sent out by August 28th. Discussions will begin in the first week of September and continue weekly for twelve weeks. Meetings will be held online, at a time chosen based on the schedules of participants.

We currently expect one discussion group of five to eight people. However, if there is sufficient interest, we will run however many groups are required to include all qualified applicants.

Exceptional candidates who cannot commit to attending all meetings can contact me directly about sitting in on the subset of meetings that are relevant to their work.

What We’ll be Reading

A number of people have asked to see the reading list that we will be using. The list will be made public once it has been finalized, but due to the small size of the reading group we plan to tailor the papers discussed to the interests of the participants.

To give a taste of the curriculum and to give potential participants a head start on readings, here is what we have planned for the first two weeks:

Week 1 

(Double session, 2.5 hours) 

Incomplete Contracting and AI Alignment by Dylan Hadfield-Menell and Gillian Hadfield

The Principal-Agent Alignment Problem in Artificial Intelligence by Dylan Hadfield-Menell

This week will begin with introductions and a short icebreaker. The first paper discusses applying mechanism design to AI safety in broad terms, while the second delves more into specifics. In addition to the two papers, this week’s discussion will cover the areas of AI safety where mechanism design could be useful, the limitations of the approach, and the potential upside from success.

Week 2

(Normal session, 1.5 hours)

Risks from Learned Optimization in Advanced Machine Learning Systems by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant

We expect most participants will have already read this paper, which covers the differences between outer and inner alignment. This week’s discussion will involve a brief review of the paper, followed by consideration of which mechanism design tools can help with each form of alignment.

For the following weeks, relevant topics that may be covered (subject to participant interest) include: the principal-agent problem, cheap talk, multi-agent systems, dynamic mechanisms, robust mechanism design, corrigibility, multi-agent reinforcement learning, cooperative AI and communication, adversarial training and zero-sum mechanisms, causal incentives, and algorithmic mechanism design. Other topics may also be covered, if requested.

Future Plans

Our intention is for this reading group to transition into a working group upon completion. With a shared background, we will be in a good position to provide feedback on each other’s work or collaborate on projects. In addition to a working group, we would also like the group to produce an agenda laying out what we feel are the most promising research directions, the potential challenges, and the next steps to work on. Ideally (i.e. conditional on funding) this agenda would be hammered out over multiple days at a retreat that includes subject matter experts in both mechanism design and technical AI safety.

Between a reading list, a research agenda, and an active community of researchers, we would be in a position where new members could quickly get up to speed. The long-term goal is to increase the number of people working on technical AI safety by making it easy for mechanism design researchers to contribute, and to improve the quality of technical AI safety research by expanding the set of available tools.

The application form is here, and applications are due by Monday August 22nd at 11:59pm PDT. Please pass along this post to anyone who you think would be interested in this reading group.

Thanks to Cecilia Wood and Amelia Michael for reviewing earlier drafts of this post.


 
