Jason Green-Lowe

Executive Director @ Center for AI Policy

Comments

Marcus, I agree with your first paragraph -- part of my frustration is that it's not obvious to me that any AI governance donors have more than a vibes-based metric for what it means for "other orgs to be better at it for the cost." Donors did eventually tell us that they thought other orgs were more cost-effective, but they couldn't or wouldn't say which effects they thought were cheaper elsewhere or how they were measuring that.

In hindsight, you may be right that we should have posted earlier on LessWrong and the EA Forum. I will say that we did not see raising significant amounts of money directly from the public (rather than from large donors and institutional donors) as a promising strategy, and that we turned to it only as a last resort when institutional funding unexpectedly ran dry.

However, I object to your implication that until recently we haven't been communicating or sending updates about our work. You may recall that in October 2023, you and I met in person at EA Global Boston, where I pitched you on our work. At your request, I sent you a 15-page document in March 2024 in which I quantitatively estimated the microdooms averted by CAIP's work. You seemed to agree that our work was promising, but at EA Global Boston in October 2024, you told me that you were donating only to animal welfare charities. You did not share any criticisms of our work at that time other than that it was not in your preferred cause area.

Our communications team sends out a weekly newsletter to 1,400 subscribers, which is also publicly posted on LinkedIn and X. We also sent out various documents to past and prospective donors, including a detailed annual report that included a dozen examples of us being cited favorably in mainstream media, along with links. We have hosted regular in-person events that are open to the public, where anyone can come and see for themselves which Congressional staffers are coming and speaking with us. If any of the donors had at any time expressed any concerns about our accomplishments being "hard to verify," we would have gladly taken them along to one of our meetings or events or sent them whatever additional information they might have wanted. However, I have never before heard any such complaints.

I'm not aware of any such organizations! This is an example of one of the 'holes' that I'm trying to highlight in our ecosystem. 

We have so many people proposing and discussing general ideas, but there's no process in place to rigorously compare those ideas to each other, choose a few of those ideas to move forward, write up legislation for the ideas that are selected, and advertise that legislation to policymakers.

I don't object to the community proposing 5-10x more ideas than it formally writes up as policies; as you say, some filtering is appropriate. I do object to the community spending 5-10x more time proposing ideas than it spends on drafting them. The reason why it makes sense to have lots of ideas is that proposing an idea is (or should be) quick and easy compared to the hard work of drafting it into an actual policy document. If we spend 70% of our resources on general academic discussion of ideas without anyone ever making a deliberate effort to select and promote one or two of those ideas for legislative advocacy, then something's gone badly wrong.

OK, let me know when you're back, and I'll be happy to chat more! You can also email me at jason@aipolicy.us if you like.

So the point of the 299 corporate lobbyists is less about measuring total influence and more about how many pairs of eyes there are on the playing field who are potentially able to notice a bill moving forward -- the odds that all 299 of them miss the bill for months on end are essentially zero.
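
To make "essentially zero" concrete, here's a back-of-the-envelope sketch. The 5% monthly notice rate and the three-month window are numbers I've invented purely for illustration; they aren't CAIP estimates.

```python
# Illustrative only: assume each of the 299 lobbyists independently has just a
# 5% chance per month of noticing a given bill (a deliberately low, made-up rate).
p_notice_per_month = 0.05   # assumed for illustration, not a measured figure
lobbyists = 299
months = 3

# Probability that every single lobbyist misses the bill for three straight months
p_all_miss = (1 - p_notice_per_month) ** (lobbyists * months)
print(f"P(all {lobbyists} miss the bill for {months} months) = {p_all_miss:.1e}")
# Prints roughly 1e-20 -- effectively zero, which is the point above.
```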

You're right to be skeptical of "number of lobbyists" as a measure of influence; a better metric would be the total amount of money spent on in-house government relations experts, outside lobbyists, advertising, PR firms, social media experts, and campaign donations. I don't have access to those figures for tech companies, but I still feel confident that the total industry budget for DC influence is much higher than the total AI safety budget for DC influence, especially if we discount money that's going to academic AI governance research that's too abstract to buy much political influence.

I'm not sure why you're saying that a conflict framing doesn't represent the facts on the ground -- it's true that many lobbyists are friendly and are willing to work for a variety of different causes, but if they're currently being employed to work against AI safety, then I would think we're in conflict with them. Do you see it differently? What kinds of conflicts (if any) do you see in the political arena, and how do you think about them?

Yes, that's a great insight! People assume that if they're high up the stack, then they must have a lot of leverage -- and this can be true, sometimes. If you are the first person to run a study on which curable diseases are neglected, and there are a million doctors and nurses and epidemiologists who could benefit from the results of that study, your leverage is enormous.

However, it's not always true. If you're the 200th person to run a study on the risks of AI, but there are only 60 AI advocates who can benefit from the results of that study, then your leverage is weak.

I don't want to insist on any particular number of levels for any particular kind of work -- the key point is that on average, AI governance is way too high up the stack given our current staffing ratios.
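
One way to make that intuition concrete is a simple ratio: how many downstream practitioners can use your output, divided by how many people are already producing similar output. The sketch below is my own illustrative framing of the example above, not a formula anyone in the ecosystem actually uses.

```python
# Rough heuristic (illustrative framing only): leverage of "upstream" work scales
# with the number of downstream users divided by the number of people already
# producing similar work.
def leverage(downstream_users: int, upstream_producers: int) -> float:
    return downstream_users / upstream_producers

# First-ever study on neglected curable diseases, usable by ~1,000,000 clinicians:
print(leverage(1_000_000, 1))   # 1,000,000.0 -- enormous leverage
# 200th study on AI risk, usable by ~60 AI policy advocates:
print(leverage(60, 200))        # 0.3 -- weak leverage
```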

Our model legislation does allow the executive to update the technical specifics as the technology advances. 

The very first text in the section on rulemaking authority is "The Administrator shall have full power to promulgate rules to carry out this Act in accordance with section 553 of title 5, United States Code. This includes the power to update or modify any of the technical thresholds in Section 3(s) of this Act (including but not limited to the definitions of “high-compute AI developer,” “high-performance AI chip,” and “major AI hardware cluster”) to ensure that these definitions will continue to adequately protect against major security risks despite changes in the technical landscape such as improvements in algorithmic efficiency." This is on page 12 of our bill.

I'm not sure how we could make this clearer, and I think it's unreasonable to attack the model legislation for not having this feature, because it very much does have this feature.

I think these are great criteria, Neel. If one or more of the funders had come to me and said, "Hey, here are some people who you've offended, or here are some people who say you're sucking up their oxygen, or here's why your policy proposals are unrealistic," then I probably would have just accepted their judgment and trusted that the money is better spent elsewhere. Part of why I'm on the forum discussing these issues is that so far, nobody has offered me any details like that; essentially all I have is their bottom-line assessment that CAIP is less valuable than other funding opportunities.

I wish I could! Unfortunately, despite having several conversations and emails with the various AI safety donors, I'm still confused about why they are declining to fund CAIP. The message I've been getting is that other funding opportunities seem more valuable to them, but I don't know exactly what criteria or measurement system they're using.

At least one major donor said that they were trying to measure counterfactual impact -- something like: estimate how much good the laws you're championing would accomplish if they passed, and then ask how close they came to passing. However, I don't understand why this analysis disfavors CAIP. Compared to most other organizations in the space, the laws we're working on are less likely to pass, but they would do much more good if they did pass.
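
For concreteness, here is the expected-value logic behind that last sentence. Every number below is invented for the sake of the example; none of them are CAIP's or any donor's actual estimates.

```python
# Illustrative expected-value comparison (all figures invented for illustration).
def expected_impact(prob_pass: float, value_if_passed: float) -> float:
    return prob_pass * value_if_passed

ambitious_bill   = expected_impact(prob_pass=0.01, value_if_passed=1000)  # 10.0
incremental_bill = expected_impact(prob_pass=0.10, value_if_passed=50)    # 5.0
print(ambitious_bill, incremental_bill)
# A low chance of passing can still dominate if the payoff is large enough.
```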

Another possible factor is that my co-founder, Thomas Larsen, left CAIP in the spring of 2024, less than a year after starting the organization. As I understand it, Thomas left because he learned that political change is harder than he had initially thought, because he felt frustrated that CAIP was not powerful enough to accomplish its mission in the short time he expects remains before superintelligence is deployed, and because he did not see a good fit between the skills he wanted to use (research, longform writing, forecasting) and CAIP's day-to-day needs.

Thomas's early departure is obviously an important piece of information that weighs against donating to CAIP, but given the context, I don't think it's reasonable for institutional donors to treat it as decisive. I actually agree with Thomas's point that CAIP's mission is very ambitious relative to our resources and that we most likely will not succeed. However, I think it's worth trying anyway, because the stakes are so high that even a small chance of success is very valuable.

Your point about the Nucleic Acid Synthesis Act is well-taken; while writing this post, I confused the Nucleic Acid Synthesis Act with Section 4.4(b)(iii) of Biden's 2023 Executive Order, which did have that requirement. I'll correct the error.

We care a lot about future-proofing our legislation. Section 6 of our model legislation takes the unusual step of allowing the AI safety office to modify all of the technical definitions in the statute via regulation, because we know that the paradigms that are current today might be outdated in 2 years and irrelevant in 5. Our bill would also create a Deputy Administrator for Standards whose section's main task would be to keep abreast of "the fast moving nature of AI" and to update the regulatory regime accordingly. If you have specific suggestions for how to make the bill even more future-proof without losing its current efficacy, we'd love to hear them.