An Activist View of AI Governance

In this sequence, I share my personal experiences as an AI safety advocate at the Center for AI Policy (CAIP), and then explain why the effective altruist movement as a whole should fund more direct political advocacy.

Post 1: Please Donate to CAIP. This post is partly a fundraiser for my organization, but mostly a firsthand account of what we've been working on and why, so that readers have concrete context for the rest of the sequence.

Post 2: The Need for Political Advertising. In this post, I explain why good AI governance ideas aren't self-spreading or self-enacting. Without active work by dedicated political advocates, policy ideas have almost no chance of overcoming political opposition and inertia.

Post 3: We're Not Advertising Enough. By my estimate, we have three AI governance researchers for every AI governance advocate. In this post, I explain why this ratio is backwards and why most governance research is too abstract to offer much help in the political arena.

Post 4: Shift Resources to Advocacy Now. A certain amount of abstract research is vital to building a new field, but that task is largely complete. If we expect to prevent the deployment of misaligned superintelligence, we can't afford to delay the shift to advocacy any longer.

Post 5: Orphaned Policies. Because we've over-invested in research at the expense of advocacy, there is a long list of 'orphaned policies': good policy ideas that have never been fleshed out and that have no champions in the political arena. In this post, I describe these policies and encourage people to adopt them.

Post 6: Political Funding Expertise. In my opinion, part of why we under-invested in advocacy is that the major x-risk funders are mostly staffed by academic researchers. We need to aggressively hire many more grantmakers with political expertise, so that funders can recognize the value of advocacy projects.

Post 7: Mainstream Grantmaking Expertise. Another major reason we under-invested in advocacy is that AI safety funders use excessively informal procedures to evaluate grants. To fix this, we need to hire grantmakers with experience at mainstream philanthropic organizations, so that they can teach us their best practices.