UPDATE (12/13/24): Zeffy is now our primary method of receiving donations! You can access it through the “Donate” button on PauseAI-us.org or directly here.
UPDATE (12/12/24): PauseAI US has received its own 501(c)(3) status (!), so Manifund will no longer be our fiscal sponsor. I have closed our Manifund fundraiser to save on platform fees we no longer need to pay. I'll post updates about other platforms we can now sign up for on our own, and you can always email donations at pauseai-us dot org to arrange a donation.
PauseAI US needs your donations! We were very fortunate not to have to do much dedicated fundraising up until this point, but I was caught off guard by receiving nothing in the SFF main round (after receiving multiple speculation grants), so we're in a crunch and only fully funded through the end of 2024.
If you're sold, you can donate right now via PauseAI's general support Manifund project, the text of which I'll share here below the dots.
If you're open but you have questions, or you've just thought of a great question you know other people are wondering about, ask in the comments below! I'll answer them on or before 11/19/24.
Project summary
PauseAI US's funding fell short of what we expected, and we are now only funded through the end of 2024! Money donated to this project will fund the operations of PauseAI US through mid-2025.
What are this project's goals? How will you achieve them?
PauseAI US advocates for an international treaty to pause frontier AI development. But we don't need to achieve that treaty to have a positive impact: most of our positive impact will likely come from moving the Overton window and making more moderate AI Safety measures more possible. Advocating straightforwardly for what we consider the best solution -- we don't know what we're doing building powerful AI, so we should wait until we do before proceeding -- is an excellent frame for educating the general public and elected officials about AI danger, compared to tortured and confusing discussions of other solutions like alignment, which offer no clear actions for those outside the technical field.
To fulfill our goal of moving the Overton window in the direction of simply not building AGI while it is dangerous to do so, PauseAI US has two major areas of programming: protesting and lobbying.
Protests (like this upcoming one) are the core of our in-person volunteer organizing, local social community, and social media presence. Protests send the overarching message to Pause frontier AI training, in line with the PauseAI proposal. Sometimes protests take issue with the AI industry and take place at AGI company offices like Meta, OpenAI, or Anthropic (RSVP for 11/22!). Sometimes protests are in support of international cooperative efforts. Protests get media attention, which not only communicates that the protesters want to Pause AI but also shows, in a visceral and easily understood way, the stakes of this problem, filling the bizarre missing mood surrounding AI danger ("If AI companies are doing something so dangerous, how come there aren't people in the streets?"). Protests are a highly neglected angle in the AI Safety fight. Ultimately, the impact of protests lies in moving the Overton window for the public, which in turn affects what elected officials think and do.
Organizing Director Felix De Simone is based in DC and does direct lobbying on the Hill as well as connecting constituents to their representatives for grassroots lobbying. Felix holds regular email- and letter-writing workshops for the general public on the PauseAI US Discord (please join!), aimed at specific events: for example, emailing and calling the California Assembly and Senate during the SB-1047 hearings and, more recently, coordinating supportive emails to attendees of the US AI Safety Conference expressing hope about the possibility of a global treaty to pause frontier AI development. We work with SAG-AFTRA representatives to coordinate with their initiatives and add an x-risk dimension to their primarily digital identity- and provenance-related concerns. PauseAI US is part of a number of other, more speculative legal interventions to Pause AI, such as working with Gabriel Weil to develop a strict liability ballot initiative version of SB-1047 and to locate funders to get it on the 2026 ballot. We are members of the Coalition for a Baruch Plan for AI, and Felix attended the UN Summit of the Future Activist Days. We hope to be able to serve as a plaintiff in lawsuits against AI companies that our attorney allies are developing, a role which very few others would be willing or able to fill. Lobbying is more of a nitty-gritty approach, but the goal of our lobbying is the same as that of our protesting: to show our elected officials that cooperation to simply not build AGI is possible, because the will and the ways are there.
How will this funding be used?
Salaries - $260k/year
Specific events - ~$7.5-15k/year
Operating costs - ~$24k/year (this includes bookkeeping, software, insurance, payroll tax, etc., and may be an overestimate for next year because there were so many startup costs this year -- if it is, consider it slack)
Total through 2025 Q2 - $150k
Our programming mainly draws on our labor and the labor of our volunteers, so salaries are our overwhelmingly largest cost.
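(As a rough sanity check, assuming spending is spread evenly across the year: half a year of salaries is ~$130k, half a year of operating costs is ~$12k, and half a year of event costs is ~$4-8k, which comes to roughly $146-150k and matches the $150k total above.)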
Q1&Q2 programming:
- quarterly protest
- monthly flyering
- monthly local community social event
- 2+ lobbying events for public education
- PauseAI US Discord (please join!) for social times, AI Safety conversation, and help with running your own local PauseAI US community
- PauseAI US newsletter
- expansion of Felix's lobbying plan, improving his relationships with key offices
Org infrastructure work by Q2:
(This one is massive. We just hired Lee Green to run ops.)
- massively improved ops and legal compliance, enabling us to scale up much more readily
- website with integrated event platform that streamlines our volunteer discovery and training processes and allows us to hold more frequent and larger protests
- Executive Director able to focus on strategy and fundraising rather than admin
- improved options for donating and continuous fundraising
Incidental work likely to happen by Q2:
- the strict liability ballot initiative will have progressed as far as it can
- responding to media requests for comment on major news events, possibly mustering small immediate demonstrations and/or orchestrating calls into key offices
- supporting other AI Safety organizations with our knowledge and connections, bringing an understanding of inside-outside game dynamics in AI Safety
- lots of behind-the-scenes work I unfortunately can't discuss but which is a valuable part of what our org does
Who is on your team? What's your track record on similar projects?
Executive Director - Holly Elmore
Founded this org; long history of EA organizing (2014-2020 at Harvard) and of scientific research, first as an evolutionary biologist and then as a wild animal welfare researcher at Rethink Priorities.
Director of Operations - Lee Green
20+ years of experience in strategy consulting, process engineering, and efficiency across many industries, including support for 40+ nonprofit and impact-driven organizations
Organizing Director - Felix De Simone
Organized UChicago EA and climate canvassing campaigns.
Happy to weigh in here with some additional information/thoughts.
Before I started my current role at PauseAI US, I worked on statewide environmental campaigns. While these were predominantly grassroots (think volunteer management, canvassing, coalition-building, etc.), they did have a lobbying component, and I met with statewide and federal offices to advance our policy proposals. My two most noteworthy successes were statewide campaigns in Massachusetts and California, where I met with a total of ~60 state legislative offices and helped persuade the legislatures of both states to pass our bills (clean energy legislation in MA; pollinator protection in CA) despite opposition from the fossil fuel and pesticide industries.
I have been in D.C. since August working on PauseAI US's lobbying efforts. So far, I have spoken to 16 Congressional offices, deliberately meeting with members of both parties and with a special focus on Congressmembers on relevant committees (e.g. the House Committee on Science, Space, and Technology; the Senate Committee on Commerce, Science, and Transportation; and the House Bipartisan AI Task Force).
I plan to speak with more than 50 additional offices over the next 6 months, as well as deepen relationships with offices I've already met with. I also intend to host a series of Congressional briefings on (1) AI existential risk, (2) Pausing as a solution, and (3) the importance and feasibility of international coordination, inviting dozens of Congressional staff to each briefing.
I do coordinate with a few other individuals from aligned AI policy groups to share insights and gain feedback on messaging strategies.
Here are a few takeaways from my lobbying efforts so far, explaining why I believe PauseAI US lobbying is important:
Framing and vocabulary matter a lot here: it's important to find the best ways to make our arguments palatable to Congressional offices. This includes, for instance, framing a Pause as "pro-safe innovation" rather than generically "anti-innovation," anticipating and addressing reasonable objections, making comparisons to how we regulate other technologies (e.g. aviation, nuclear power), and providing concrete risk scenarios that avoid excessive technical jargon.
As such, I spend a lot of time emphasizing loss-of-control scenarios, making the case that this technology should not be thought of as a “weapon” to be controlled by whichever country builds it first, but instead as a “doomsday device” that could end our world regardless of who builds it.
I also make the case for the feasibility of an international pause by appealing to historical precedent (e.g. nuclear non-proliferation agreements) and sharing information about verification and enforcement mechanisms (e.g. chip tracking, detecting large-scale training runs, on-chip reporting mechanisms).
The final reason PauseAI US lobbying is important is counterfactual: if we don't lobby Congress, we risk ceding ground to other groups who push the "arms race" narrative and could convince the US to go full speed ahead on AGI development. By being in the halls of Congress and making the most persuasive case for a Pause, we are at the very least helping prevent the pendulum from swinging in the opposite direction.