Current: US government relations (energy & tech, mostly)
Former: doctoral candidate (law @ Oxford) / lecturer (humanitarian aid & human rights practice) / global operations advisor (nonprofits) / NSF research fellow (civil conflict management & peace science)
In 2024, several lobbyists from the same firm represented both OpenAI and the Center for AI Safety Action Fund at the same time. I am not suggesting any conflict of interest on their part. However, I don't think the "299 corporate lobbyists vs. scrappy AI safety groups" framing is effective, given that at least some of the money is flowing to similar places.
I wouldn't compare external lobbyists to full-time advocacy staff because (1) external lobbyists and lawyers typically cover many clients and are unlikely to be particularly committed to a single issue like AI safety, and (2) firms often register anyone who does outreach on a project, regardless of whether they meet federal lobbying disclosure thresholds. The field is also pretty congenial in general; Amazon lobbyists have helped me with EA-related side projects, unpaid, simply because they were being nice.
This isn't to say that advocacy couldn't absorb more funding. But the conflict framing doesn't seem to represent the facts on the ground, at least if organizations want funding to hire mainstream government relations people, who rarely see it that way. Upskilling people already committed to AI safety would be different.
You run an AI safety org full time and have a better idea of the field. I'm just throwing in my two cents re: representation disparities.
There are a lot of interesting global development and technology-related angles that could justify energy-related work. Reliable, affordable energy can spur economic growth and improve quality of life in developing economies. I’m linking a very surface-level McKinsey report on the historical link between energy demand and GDP for basic context, but I’m happy to have a longer chat.
Existing cause areas like South Asian Air Quality could benefit from low-hanging fruit in scalable alternatives to India's current reliance on coal. For example, India is already a major importer of LPG (which it subsidizes for home kitchen use) and, more recently, LNG. The IEA expects India’s gas imports to more than double by 2030 to support its predicted economic growth. This is in addition to the existing ~46% of domestically produced energy that comes from renewable sources.
Diverting philanthropic resources to US energy policy doesn’t make much sense to me on the surface, but I’m open to being proven wrong if you have more information behind the argument.
Edit: My non-tech energy work is a ~neutral earning-to-give situation. The work is interesting, reliable, and I enjoy it. I wouldn't argue that it has a similar impact to direct work.
Sharing some context about where we are, given that coverage has really blown up since the provision was added. I am not working on this specific issue, so I can comment here:
For people who are going to reach out, I would focus on substantive concerns about the provisions, which may be effective even with states' rights-focused conservatives. Getting daily calls about parliamentary procedure may or may not change any minds. I promise even the proponents are aware of those hurdles.
I'm more optimistic that people who showed up because they wanted to do the most good still believe in it. Even time spent with "EA-adjacent-adjacent-etc." people is refreshing compared to most of my work in policy, including on behalf of EA organizations.
Community groups, EAGs, and other events still create space for the first-principles discussions you're talking about. As far as I know, those spaces are growing. Even if they weren't, I can't remember a GCR-focused event that served non-vegan food, including those without formal EA affiliations. It's a small gesture, but I think it's relevant to some of the points made in this post.
I understand picking on BlueDot because they started as free courses designed for EAs who wanted to learn more about specific cause areas (AI safety, biosecurity, and alternative proteins, if I remember correctly). They are now much larger, focused exclusively on AI, and have a target audience that goes beyond EA and may not know much about GCRs coming into the course. The tradeoffs they make to cast a wider net are between them and their funders, and they don't necessarily speak to the values of the people running the course.
Unfortunately, there was an effective effort to tie AI safety advocacy organizations to their funders in a way that increased risk to any high-profile donors who supported federal policy work. I don't know if this impacted any of your funders' decisions, but the related media coverage (e.g., Politico) could have been cause for concern. Small-dollar donations might help balance this.
It seems very likely that the federal government will attempt to override any state AI regulation that gets passed in the next year. Jason put together a strong, experienced team that can navigate the quickly shifting terrain in Washington. Dissolving immediately due to lack of funding would be an unfortunate outcome at a critical time.
Context: I work in government relations on related issues and met Jason at an EAG in 2024. I have not worked with CAIP or pushed for their model legislation, but I respect the team.
If you want a more detailed take on these issues than a Guardian article can provide, I would attend the Space Ecology Workshop. It's an annual, free event for academic and industry experts to discuss the future of human space exploration and settlement. The team is really nice and might be open to adding a session on the welfare / ethics of commercial farming in space.
Researchers at the Space Analog for the Moon & Mars at Biosphere 2 would also probably have some interesting takes. Most of their relevant work has focused on plant ecology, but questions about potential alternative food sources have definitely come up over the years. The project in this article is one of many different research pathways on human nutrition in space, most of which won't end up happening.
I used to volunteer at Biosphere 2 when I lived in Tucson and like to stay in the loop, but this is not my current field at all.
I think it would be great to have some materials adapted for policy audiences if it isn't too far out of your team's scope. There is a lot of demand for this kind of practical, implementation-focused work. Just this week, there were multiple US congressional hearings and private events on the future of AI in the US, with a specific focus on adapting to a world with advanced artificial intelligence.
As an example, the Special Competitive Studies Project hosted the AI+ summit series in DC and launched a free course on "Integrating Artificial Intelligence into Public Sector Missions". These have been very well received and attended by stakeholders across agencies and non-governmental entities. While SCSP has done more to prepare government stakeholders to adapt than any other nonprofit I am aware of, there is still plenty of room for other expert takes.
Experienced professionals can contribute to high-impact work without fully embedding themselves in the EA community. For example, one of my favorite things is connecting experienced lobbyists (20-40+ years in the field) with high-impact organizations working on policy initiatives. They bring needed experience and connections, plus they often feel like they are doing something positive.
Anyone who has worked both inside and outside of the EA community will admit that EA organizations are weird. That is not necessarily a bad thing, but it can mean that people who are very established in their careers may find the transition uncomfortable.
For EAs reading this, I highly recommend seeking out professionals with relevant field expertise for short-term or project-specific work. If they fit and you want to keep them, that’s great. If not, you get excellent service on a tough problem that may not be solvable within the EA community. They get a fun story about an interesting client and can move on with no hard feelings.
Hi there -
Thanks for your response and sorry for the delayed reply. I can’t go into program details due to confidentiality obligations (though I’d be happy to contribute to a writeup if folks at Open Phil are interested), but I can say that I spent a lot of time in the available national and local data trying to make a quantitative EA case for the CJR program. I won’t get into that in this post, but I still think the program was worthwhile for less intuitive reasons.
On the personal comments:
I think this post’s characterization of Chloe and OP, particularly of their motivations, is unfair. The CJR field has gotten a lot of criticism in other EA spaces for being more social justice oriented and explicitly political. Some critiques of the field are warranted (similar to critiques of ineffective humanitarian & health interventions) but I think OP avoided these traps better than many donors. The team funded bipartisan efforts and focused on building the infrastructure needed to accelerate and sustain a new movement. Incarceration in the US exploded in the ‘70s as the result of bipartisan action. The assumption that the right coalition of interests could force similarly rapid change in the opposite direction is fair, especially when analyzed against case studies of other social movements. It falls in line with a hits-based giving strategy.
Why I think the program was worthwhile:
The strategic investments made by the CJR team set the agenda for a field that barely existed in 2015 but, by 2021, had hundreds of millions of dollars in outside commitments from major funders and sympathetic officials elected across the US. Bridgespan (a data-focused social impact consulting group incubated at Bain) has used Open Phil grantees’ work to advise foundations, philanthropists, and nonprofits across the political spectrum on their own CJR giving. I’ve met some of the folks who worked on Bridgespan’s CJR analysis. I trust their epistemics and quantitative skills.
I don’t think we’ve seen the CJR movement through to the point where we could do a reliable postmortem on consequences. I’ve seen enough to say that OP’s team has mastered some very efficient methods for driving political will and building popular support.
OP’s CJR work could be particularly valuable as a replicable model for other movement building efforts. If nothing else, dissecting the program from that lens could be a really productive conversation.
Other notes:
I disagreed with the CJR team on *a lot*. But they’re good people who were working within a framework that got vetted by OP years ago. And they’re great at what they do. I don’t think speculating on internal motivations is helpful. That said, I would wholeheartedly support a postmortem focused on program outcomes.
I came to the US scene from the UK and was very surprised by the divide (animosity) between SJ-aligned and EA-aligned work. I ended up disengaging from both for a while. I’m grateful to the wonderful Oxford folks for reminding me why I got involved in EA in the first place.
Sitting at a table full of people with very different backgrounds / skill sets / communication styles requires incredible amounts of humility on all sides. I actively seek out opportunities to learn from people who disagree with me, but I’ve missed out on some incredible learning opportunities because I failed at this.
I saw this and I agree with your main points. I will be offline for a bit due to travel, but I am happy to have a longer conversation with more nuanced responses.
Policy teams at private companies are better resourced while, as you mentioned, working on issues ranging from antitrust to privacy and child protection. I may be wrong, but the teams focused specifically on frontier AI (excluding infrastructure work) seem more balanced than the provided numbers suggest. This observation may be outdated, especially since SB-1047. You likely have a better idea of the current landscape than I do, and I’ll defer to your assessment.
Regarding “conflict framing” - I should have phrased this differently. I did not mean the policy conflicts that come up when a new or potentially consequential industry is facing government intervention. I meant a situation in which groups and individuals become entrenched in direct conflict on almost all issues, regardless of the consequences. A recent non-AI example would be philanthropically funded anti-fossil-fuel advocates fighting carbon capture projects despite IRA funding and general support from climate change-focused groups. The conflict has moved beyond specific policy proposals, or even climate goals, and has become a purity test that seems impossible to overcome through negotiation. That is a situation I would not want to see, and I am glad it is not the case here.