PauseAI US needs your donations! We were very fortunate not to have to do much dedicated fundraising until now, but I was caught off guard by receiving nothing in the SFF main round (after receiving multiple speculation grants), so we're in a crunch and only fully funded through the end of 2024.
If you're sold, you can donate right now via PauseAI's general support Manifund project, the text of which I'll share here below the dots.
If you're open but have questions, or you just thought of a great question you know other people are wondering about, ask in the comments below! I'll answer them on or before 11/19/24.
Project summary
PauseAI US's funding fell short of expectations, and we are now only funded through the end of 2024! Money donated to this project will fund PauseAI US's operations through mid-2025.
What are this project's goals? How will you achieve them?
PauseAI US advocates for an international treaty to pause frontier AI development. But we don't need to achieve that treaty to have a positive impact: most of our impact will likely come from moving the Overton window and making more moderate AI Safety measures more possible. Advocating straightforwardly for what we consider the best solution (we don't know what we're doing building powerful AI, so we should wait until we do before proceeding) is an excellent frame for educating the general public and elected officials about AI danger, compared to tortured and confusing discussions of other solutions, like alignment, that offer no clear actions to those outside the technical field.
To fulfill our goal of moving the Overton window in the direction of simply not building AGI while it is dangerous to do so, PauseAI US has two major areas of programming: protesting and lobbying.
Protests (like this upcoming one) are the core of our in-person volunteer organizing, local social community, and social media presence. Protests send the general overarching message to Pause frontier AI training, in line with the PauseAI proposal. Sometimes protests take issue with the AI industry and take place at AGI company offices like Meta, OpenAI, or Anthropic (RSVP for 11/22!). Sometimes they are in support of international cooperative efforts. Protests get media attention, which not only communicates that the protestors want to Pause AI but also shows, in a visceral and easily understood way, the stakes of this problem, filling the bizarre missing mood surrounding AI danger ("If AI companies are doing something so dangerous, how come there aren't people in the streets?"). Protests are a highly neglected angle in the AI Safety fight. Ultimately, their impact lies in moving the Overton window for the public, which in turn affects what elected officials think and do.
Organizing Director Felix De Simone is based in DC and does direct lobbying on the Hill as well as connecting constituents to their representatives for grassroots lobbying. Felix holds regular email- and letter-writing workshops for the general public on the PauseAI US Discord (please join!) aimed at specific events: for example, emailing and calling the California Assembly and Senate during the SB-1047 hearings and, more recently, coordinating supportive emails to attendees of the US AI Safety Conference expressing hope about the possibility of a global treaty to pause frontier AI development. We work with SAG-AFTRA representatives to coordinate with their initiatives and add an x-risk dimension to their primarily digital identity- and provenance-related concerns. PauseAI US is part of a number of other, more speculative legal interventions to Pause AI, such as working with Gabriel Weil to develop a strict liability ballot initiative version of SB-1047 and locate funders to get it on the 2026 ballot. We are members of the Coalition for a Baruch Plan for AI, and Felix attended the UN Summit of the Future Activist Days. We hope to be able to serve as a plaintiff in lawsuits against AI companies that our attorney allies are developing, a role which very few others would be willing or able to fill. Lobbying is a more nitty-gritty approach, but its goal is the same as our protesting's: to show our elected officials that cooperation to simply not build AGI is possible, because the will and the ways are there.
How will this funding be used?
Salaries - $260k/year
Specific Events - ~$7.5-15k/year
Operating costs - ~$24k/year (this includes bookkeeping, software, insurance, payroll tax, etc. and may be an overestimate for next year because there were so many startup costs this year-- if it is, consider it slack)
Through 2025 Q2 -- $150k.
Our programming mainly draws on our labor and the labor of our volunteers, so salaries are our overwhelmingly largest cost.
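For those checking the math, that $150k is roughly half a year of the annual costs above (my arithmetic, assuming Q1-Q2 means two quarters at the listed run rates):

$$\tfrac{1}{2}\big(\$260\text{k} + \$24\text{k} + (\$7.5\text{k to }\$15\text{k})\big) \approx \$146\text{k to }\$150\text{k}$$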
Q1 & Q2 programming:
- quarterly protest
- monthly flyering
- monthly local community social event
- 2+ lobbying events for public education
- PauseAI US Discord (please join!) for social times, AI Safety conversation, and help with running your own local PauseAI US community
- PauseAI US newsletter
- expansion of Felix's lobbying plan, improving his relationships with key offices
Org infrastructure work by Q2:
(This one is massive. We just hired Lee Green to run ops.)
- massively improved ops and legal compliance, leaving us able to scale up much more readily
- website with integrated event platform streamlining our volunteer discovery and training processes and allowing us to hold more frequent and larger protests
- Executive Director able to focus on strategy and fundraising and not admin
- improved options for donating and continuous fundraising
Incidental work likely to happen by Q2:
- strict liability ballot initiative will have progressed as far as it can
- responding to media requests for comment on major news events, possibly mustering small immediate demonstrations and/or orchestrating calls into key offices
- supporting other AI Safety organizations with our knowledge and connections, bringing an understanding of inside-outside game dynamics in AI Safety
- lots of behind-the-scenes things I unfortunately can't discuss but which are a valuable part of what our org does
Who is on your team? What's your track record on similar projects?
Executive Director - Holly Elmore
Founded this org; long history of EA organizing (2014-2020 at Harvard) and of scientific research, first as an evolutionary biologist and then as a wild animal welfare researcher at Rethink Priorities.
Director of Operations - Lee Green
20+ years of experience in Strategy Consulting, Process Engineering, and Efficiency across many industries, specifically supporting 40+ Nonprofit and Impact-Driven Organizations
Organizing Director - Felix De Simone
Organized U Chicago EA and climate canvassing campaigns.
I'm confident in PauseAI US's ability to run protests, and I think the case for doing protests is pretty strong. You're also doing lobbying, headed by Felix De Simone. I'm less confident about that, so I have some questions.
One other thing I forgot to mention re: value-add. Some of the groups you mentioned (Center for AI Policy & Center for AI Safety; not sure about Palisade) are focused mostly on domestic AI regulation. PauseAI US is focused more on the international side of things, making the case for global coordination and an AI Treaty. In this sense, one of our main value-adds might be convincing members of Congress that international coordination on AI is both feasible and necessary to prevent catastrophic risk. This also serves to counter the "arms race" narrative ("the US needs to develop AGI first in order to beat China!"), which risks sabotaging AI policy in the coming years.
Happy to weigh in here with some additional information/thoughts.
Before I started my current role at PauseAI US, I worked on statewide environmental campaigns. While these were predominantly grassroots (think volunteer management, canvassing, coalition-building, etc.), they did have a lobbying component, and I met with statewide and federal offices to advance our policy proposals. My two most noteworthy successes were statewide campaigns in Massachusetts and California, where I met with a total of ~60 state legislative offices and helped persuade the legislatures of both states to pass our bills (clean energy legislation in MA; pollinator protection in CA) despite opposition from the fossil fuel and pesticide industries.
I have been in D.C. since August working on PauseAI US's lobbying efforts. So far, I have spoken to 16 Congressional offices, deliberately meeting with members of both parties and with a special focus on Congressmembers on relevant committees (e.g. the House Committee on Science, Space, and Technology; the Senate Committee on Commerce, Science, and Transportation; the House Bipartisan AI Task Force).
I plan to speak with more than 50 additional offices over the next 6 months, as well as deepen relationships with the offices I've already met with. I also intend to host a series of Congressional briefings on (1) AI existential risk, (2) Pausing as a solution, and (3) the importance and feasibility of international coordination, inviting dozens of Congressional staff to each briefing.
I do coordinate with a few other individuals from aligned AI policy groups to share insights and get feedback on messaging strategies.
Here are a few takeaways from my lobbying efforts so far, explaining why I believe PauseAI US lobbying is important:
Framing and vocabulary matter a lot here: it's important to find the best ways to make our arguments palatable to Congressional offices. This includes, for instance, framing a Pause as "pro-safe innovation" rather than generically "anti-innovation," anticipating and addressing reasonable objections, making comparisons to how we regulate other technologies (e.g. aviation, nuclear power), and providing concrete risk scenarios that avoid excessive technical jargon.
As such, I spend a lot of time emphasizing loss-of-control scenarios, making the case that this technology should not be thought of as a “weapon” to be controlled by whichever country builds it first, but instead as a “doomsday device” that could end our world regardless of who builds it.
I also make the case for the feasibility of an international pause by appealing to historical precedent (e.g. nuclear non-proliferation agreements) and sharing information about verification and enforcement mechanisms (e.g. chip tracking, detection of large-scale training runs, on-chip reporting mechanisms).
The final reason for the importance of PauseAI US lobbying is a counterfactual one: If we don’t lobby Congress, we risk ceding ground to other groups who push the “arms race” narrative and convince the US to go full-speed ahead on AGI development. By being in the halls of Congress and making the most persuasive case for a Pause, we are at the very least helping prevent the pendulum from swinging in the opposite direction.
1. Our lobbying is more "outside game" than the others in the space. Rather than deriving our lobbying authority from prestige or expense, we derive it from our grassroots support. Our message is simpler and clearer, pushing harder on the Overton window. (More on the radical flank effect here.) Our messages can complement more constrained lobbying from aligned inside-gamers by making their asks seem more reasonable and safe, which is why our lobbying is not redundant with those other orgs but synergistic with them.
2. Felix has experience on climate campaigns and climate canvassing and was a leader in U Chicago EA. He's young, so he hasn't had many years of experience at anything, but he has the relevant kinds of experience I wanted, and he is demonstrably excellent at educating, building bridges, and juggling a large network. He has the tact and sensitivity you want in a role like this while also being very earnest. I'm very excited to nurture his talent and have him serve as the foundation of our lobbying program going forward.
Politics
We are an avowedly bipartisan org, and we stan the democratic process. Our messaging is strong because of its simplicity and its appeal to what people actually think and feel. But our next actions remain the same no matter who is in office: protest to share our message and lobby for the PauseAI proposal. We will revise our lobbying strategy based on who holds what weight, as we would with any change of the guard, and different topics and misconceptions than before will likely dominate the education side of our work.
This is why it's all the more important that we be there.
The EA instinct is to do things that are high-leverage and to quickly give up on causes that are hard, or that involve tugging the rope against an opponent, in favor of something easier (higher leverage). There is no substitute here for the hard work of grassroots growth and lobbying. There will be a fight for hearts and minds, with conflicts between moneyed industry interests and the population at large, and shortcuts in that kind of work are called "astroturfing". Messaging getting harder is not a reason to leave-- it's a crucial reason to stay.
If grassroots protesting and lobbying were impossible, we would do something else. But this is just what politics looks like, and AI Safety needs to be represented in politics.
Adverse selection
I am thinking a bit about adverse selection in longtermist grantmaking and how there are pros and cons to having many possible funders. Someone else not funding you could be evidence that I/others shouldn't either, but conversely, updating too much on what a small number of grantmakers think could lead the community to miss lots of great opportunities.
Going to flag that a big chunk of the major funders and influencers in the EA/Longtermist community have personal investments in AGI companies, so this could be a factor in the lack of funding for work aimed at slowing down AGI development. I think that as a community, we should be divesting (and investing in PauseAI instead!)
Jaan Tallinn, who funds SFF, has invested in DeepMind and Anthropic. I don't know if this is relevant because AFAIK Tallinn does not make funding decisions for SFF (although presumably he has veto power).
If this is true, or even just likely to be, and someone has data on it, making that data public, even in anonymized form, would be extremely high impact. I recognize that such moves could come at great personal cost, but in case it is true, I just wanted to put it out there that such a disclosure could be a single action that might far outstrip even the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope the absence of such information means nothing of this sort is actually going on, but it is worth being vigilant.
What do you mean by "if this is true"? What is "this"?
It's literally at the top of his Wikipedia page: https://en.m.wikipedia.org/wiki/Jaan_Tallinn
Could you spell out why you think this information would be super valuable? I assume something like you would worry about Jaan's COIs and think his philanthropy would be worse/less trustworthy?
Because we passed the speculation round, we will receive feedback on the application, but we haven't gotten it yet. I will share what I can here when I do.
On Pauses
(As you note, much of the value may come from your advocacy making more 'mainstream' policies more palatable, in which case the specifics of Pause itself matter less, but they are still good to think about.)
I would also be interested in your thoughts on @taoburga's pushback here. (Tao, I think I have a higher credence than you that Pause advocacy is net positive, but I agree it is messy and non-obvious.)
I'm highly skeptical about the risk of AI extinction, and highly skeptical that there will be singularity in our near-term future.
However, I am concerned about near-term harms from AI systems such as misinformation, plagiarism, enshittification, job loss, and climate costs.
How are you planning to appeal to people like me in your movement?
Yes, very much so. PauseAI US is a coalition of people who want to pause frontier AI training, whatever their reasons. This is the great strength of the Pause position: it's simply the sensible next step when you don't know what you're doing and you're playing with a powerful unknown, regardless of which feared outcome is most salient to you. The problem is just how much could go wrong with AI (both what we can and what we can't predict), not any one particular set of risks, and Pause is one of the only general solutions.
Our community includes x-risk motivated people, artists who care about abuse of copyright and losing their jobs, SAG-AFTRA members whose primary issue is digital identity protection and digital provenance, diplomats whose chief concern is equality across the Global North and Global South, climate activists, anti-deepfake activists, and people who don't want an AI Singularity to take away all meaningful human agency. My primary fear is x-risk (ditto most of the leadership across the PauseAIs), but I'm also very concerned about digital sentience, and I think that Pause is the only safe next step for the good of digital minds themselves. Pause comfortably accommodates the gamut of AI risks.
And the Pause position accommodates this huge set of concerns without conflict. The silly feud between AI ethics and AI x-risk doesn’t make sense through the lens of Pause: both issues would be helped by not making even more powerful models before we know what we’re doing, so they aren’t competing. Similarly, with Pause, there’s no need to choose between near-term and long-term focus.
Donation mechanics
Fundraising scenarios
A comment, not a question (but feel free to respond): let's imagine PauseAI US doesn't get much funding and the org dies, but then in two years someone wants to start something similar - this would seem quite inefficient and bad. Or, conversely, PauseAI US gets lots of funding and hires more people, and then funding dries up in a year and they need to shrink. My guess is there is an asymmetry where an org shrinking for lack of funding is worse than an org growing with extra funding is good, which I suppose leans towards growing slower with a larger runway, but I'm not sure about this.
I wonder why this has been downvoted. Is it breaking some norm?
Ooooo, I shared it on twitter *facepalm*
Wait, is that an explanation? Can new accounts downvote this soon?
Yes, strange. Maybe @Will Howard🔹 will know re: new accounts?
Or maybe a few EAF users just don't like PauseAI and downvoted, probably the simplest explanation.
And while we are talking about non-object level things, I suggest adding Marginal Funding Week as a tag.
Yup, new accounts can downvote immediately, unlike on LessWrong where you need a small amount of karma to do so. I can't confirm whether this happened on this post.
Is PauseAI US a 501(c)(3)?
We are fiscally sponsored by Manifund and are just waiting for the IRS to process our 501(c)(3) application (which could still take several more months). For the donor, it's all the same-- we have 501(c)(3) status via Manifund, and in exchange we give them 5% of our income. Sometimes these arrangements are meant to be indefinite, with the fiscal sponsor doing a lot of administration and handling taxes and bookkeeping, but PauseAI US has its own bookkeeper and tax preparer, and we will end the fiscal sponsorship as soon as the IRS grants us our own 501(c)(3) status.
Additionally, we've applied for 501(c)(4) status for PauseAI US Action Fund, which will likely take even longer. Because Manifund (and PauseAI US, in our c3 application) have made the 501(h) election, we are able to lobby as a c3 as long as lobbying doesn't exceed ~20% of our expenditures (the actual formula is more complicated), so we probably will not need the c4 for lobbying money for a while, but the structure is being set up now so we can raise unrestricted lobbying money.
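For the curious, here is the sliding scale behind that "~20%", as I understand the 501(h) expenditure test (a sketch from memory of IRC §4911, not legal advice; check current IRS guidance). With E the org's annual exempt purpose expenditures, the nontaxable lobbying limit L is:

$$L(E)=\begin{cases}0.20\,E, & E\le \$500\text{k}\\ \$100\text{k}+0.15\,(E-\$500\text{k}), & \$500\text{k}<E\le \$1\text{M}\\ \$175\text{k}+0.10\,(E-\$1\text{M}), & \$1\text{M}<E\le \$1.5\text{M}\\ \$225\text{k}+0.05\,(E-\$1.5\text{M}), & E>\$1.5\text{M}\end{cases}\qquad L\le \$1\text{M}$$

At our scale (around $300k/year in expenditures), that would come to roughly $60k/year of permitted lobbying spending, with grassroots lobbying further capped at a quarter of that.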