Apart Research recently started an ambitious fundraising campaign—they say that the funding is urgently needed to keep the lights on. According to their Manifund post, they're trying to raise $954,800 (almost $1 million!) in the next 21 days, which is equivalent to about 12 months' runway:
Staff compensation for 8 FTE ($691,200, 73%)
Program-related costs ($156,000, 16%), such as lab infrastructure, research software, and conference travel
Indirect costs and fiscal sponsorship ($107,600, 11%)
Caveat: the post also says that the budget should be "scale[d] down accordingly", presumably based on the amount of money they raise. For instance, if they raise $500,000, that's enough to last 6 months.
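To make the "scale down accordingly" caveat concrete, here is a minimal back-of-the-envelope sketch (purely illustrative, not from their post) that pro-rates runway from the stated $954,800 / 12-month budget; it assumes spending is roughly linear, which the post doesn't state explicitly:

```python
# Back-of-the-envelope runway estimate, assuming roughly linear spending
# (an assumption; the Manifund post only gives the ~12-month total budget).
FULL_BUDGET = 954_800        # stated budget in USD
FULL_RUNWAY_MONTHS = 12      # stated runway for the full budget

monthly_burn = FULL_BUDGET / FULL_RUNWAY_MONTHS  # ~$79,600/month

def runway_months(amount_raised: float) -> float:
    """Months of runway a given amount buys at the implied burn rate."""
    return amount_raised / monthly_burn

# ~6.3 months, consistent with the post's "about 6 months" for $500k.
print(f"${500_000:,} -> {runway_months(500_000):.1f} months")
```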
Apart has said that they might be forced to shut down (or at least drastically downsize their team) if they can't raise the funds:
Insufficient funding: Without adequate resources, we would be forced to disband a high-functioning team built over 2.5 years, losing a proven talent pipeline at a critical time for AI safety and canceling valuable talent capital and research projects.
Mitigation: We have already diversified our funding drastically, including partnerships and sponsorships.
I donated about $300 to them last December because I think their hackathons serve as an important entry point for people interested in getting into AI safety, and I think the AI safety field benefits from a robust talent pipeline. So it'd be a shame if they had to shut down. (I participated in a hackathon myself and was inspired to donate by this Forum post; donating to a community project that has increased your own impact is a strong signal that the program is impactful.)
But I can't help but wonder why Apart Research is in this situation in the first place. Did they abruptly lose grants from major funders? If so, why? And why do they not seem to have a cash reserve? They've hired 8 FTEs; did they scale up too fast without building up enough savings to sustain it? I'm not criticizing; I just think we'd all benefit from more context so that donors can determine the best ways to keep Apart going.
Also, they've only raised $346 on Manifund so far, and they need to reach $10k before any of the money gets paid out to them. They could be raising a lot more money through other channels, like Every.org, but why is the Manifund threshold so high? The fundraiser seems to be moving pretty slowly; campaigns on Manifund often see large donations from regrantors, to the tune of $1–2k.
I was extremely grateful for your donation, and the impact Apart has had on individuals' personal stories is what makes all this work worth it! So we really, really appreciate this.
This is an in-depth answer to your questions (reasons behind this campaign, why the timeline, what we do, how this situation relates to the general AIS funding ecosystem, what mistakes we made, and a short overview of the impact report and newer numbers).
We're extremely grateful for the response we've received on this campaign, such as the many personal comments and donations further down on the /donate page and on Manifund, and this is really what makes it exciting to be at Apart!
We have one of the more diverse funding pools of organizations in our position,[1] but org-wide community-building funding mostly depends on OpenPhil, LTFF, and SFF. This situation comes after a pass from LTFF on a grant we were highly confident about because we had outperformed our last grant with them; unfortunately, we misjudged how underfunded LTFF itself was. Additionally, OpenPhil has been a smaller part of our funding than we would have hoped.
The last-minute part of this campaign is largely a consequence of delayed response timelines (something that is pretty normal in the field, see further down for elaboration) along with somewhat limited engagement from OpenPhil's GCR team on our grants throughout our lifetime.
I'll also mention that non-profits generally spend an immense amount of time on fundraising campaigns, and what we feel is important to share transparently as part of this campaign is all the parts of our work that otherwise get overlooked in a "max 200 words" grant application focused on field-building.
We've been surprised at how important anecdotes actually are and have prioritized them too little in our applications - everyone has shared their personal stories now and they are included across the campaign here as a result. Despite this, Apart was still the second highest-rated grant for our PI at LTFF and they simply had to reject it due to the size since they were themselves underfunded.
With OpenPhil, I think we've been somewhat unlucky with the depth of grant reviews and feedback from their side, and with missing the opportunity to respond to their uncertainties. Despite receiving some top-tier grants from SFF and LTFF in 2024, an organization like ours is dependent on larger OP grants unless we have successful long-term public campaigning similar to the Wikimedia Foundation or direct interfacing with high-net-worth individuals, something every specialized non-profit outside AI safety needs as it scales.
Hope that clarifies things a bit! We've consistently pivoted towards more and more impactful areas and I think Apart is now harvesting the impact of growing as an independent research lab. Our latest work is very exciting, the research is getting featured across media, backend software is in use now, and governments are calling us for help, so it's unfortunate to find the organization in this situation.
For others raising funds, what Apart could have done to improve the situation is:
Stay more informed about funding shortfalls for specific foundations we rely on. This was especially important for the current situation.
Rely less on expectations that larger funding pools would follow AI capabilities advancement and the urgency of AI safety.
Related to the above point, avoid scaling based on the YoY trend line of funding Apart received from 2022 to 2023 to 2024 (conditional projected growth capacity), since this wasn't followed up in 2025, and it may have been better to stay at the mid-2024 scale for longer (though the speed of AI development leads to somewhat different conclusions, and we didn't grow more than 50% from last year's budget).
Be more present in SF and London to interface face-to-face with funders and provide answers to their uncertainties (these usually come back to us after the grant process, and we often have relatively good answers but don't get the chance to provide them before a decision is made).
Communicate more of our impact and our work throughout the EA and AIS community, beyond large academic participation and our communication to direct beneficiaries of our work, participants, newsletter subscribers, partners, etc. This is already under way but I would guess there's a six month lead time or so on this.
Engage other visionary donors outside AIS to join in on the funding rounds, potentially under other attractive narratives (something that usually takes two years to activate and that I'm certain will be possible by 2026).
Rely less on previous results as evidence for forecasts of grant-making decisions.
With that said, I think we've acted as well as we could, and this campaign is part of our contingency plans, so here we are! We could've launched it earlier but that is a minor point. I'm confident the team will pull through, but I'll be the first to say that the situation could be better.
The team and I believe a lot in the work that happens at Apart, and I'm happy that it seems our researchers and participants agree with us - we could of course solve it all by pivoting to something less impactful, but that would be silly.
So overall, this is a relatively normal situation for non-profits outside AI safety and we're just in a place where the potential funders for AI safety community-building are few and far between. This is not a good situation for Apart, but it is what it is!
Some notes on what Apart does
Since this is a longer answer, it may also be worth clarifying a few misunderstandings that sometimes come up around our work, due to what seems like an early grounding of the 'Apart narrative' in the community that we haven't worked enough to update:
"Apart is an undergraduate talent pipeline": Apart's impact happens mostly for mid-career adjacent technical talent while we of course give anyone the chance to be a part of AI safety simply based on the quality of their work (re: Mathias' comment). E.g., the majority of participation in our research challenges / sprints are from graduate students and over (can double check the numbers but it's about 30%-40% mid-career technical talent)
"Apart just does hackathons": Our hackathons seem quite effective at engaging people directly with research topics without CV discrimination and we believe a lot in its impact. However, most of our labor hours are spent on our lab and research accelerator that helps people all over the world become actual researchers on their own time and engage with academia, something that makes us less visible on e.g. LessWrong than we maybe should've strategically been for the key funders to take notice. E.g. we had 9-15 researchers present at each of the latest three major AI research conferences (ICML, NeurIPS, ICLR) with multiple awards and features. See more in the Impact Report.
"Apart is focusing on mechanistic interpretability": Most of our earliest papers were related to mechanistic interpretability due to the focus of our former research director, but our agenda is really to explore the frontier of under-explored agendas, which also may lead to less attention for our work within AI safety. Our earliest work in 2022 was on the LLM psychology agenda and model-human alignment with much of our latest work being in metrology (science of evals), "adjacent field X AGI", and dark patterns; agendas that we generally believe are very under-explored. If we optimized for funding-friendliness, one could argue that we should focus even more on the agendas that receive attention and have open RFPs, but that is counter to our theory of change.
Funding ecosystem
The situation for Apart speaks to broader points about the AI safety funding ecosystem that I'll leave here for others who may be curious about how an established organization like Apart may run a public campaign with such a short runway:
Other organizations within AI safety have a similarly high dependence on a few funders and also face significant funding cliffs. Due to Apart's focus on transparency and community engagement, Jason, our team, and I decided to approach it with this campaign, since we believe a lot in open engagement with the community. I won't name any names, but this is one of those facts that is known within this small circle, all throughout 2023/24/25.
The current administration's retraction of national science funding sources means that there's currently an even larger pool of grantees requesting funding, and even academic labs have to close, with many US academics moving to other countries. A key example is the $3B in funding terminated for Harvard University's research.
Apart receives enough restricted funding for specific research projects; this is generally quite generous and available since there are many foundations providing it (from OpenAI Inc. 501(c)(3) to Foresight Institute to Schmidt Sciences to aligned private companies). This is not a big problem, and you'll see orgs focused on a single technical agenda have an "easier" time raising.
Our goal is the global talent engagement that we have seen a lot of success in (see the testimonials down on the https://apartresearch.com/donate page for a few examples), which is of course "field-building." For this, there are OpenPhil, LTFF, and SFF, besides a few new opportunities that are popping up that are more available to people with close networks to specific high-net-worth foundations (some of which we miss out on from being globally represented but not as well represented within the Bay Area).
The fact that funders aren't extremely excited about specific work should generally be weighted much lower than it is in EA and AIS. E.g., see the number of early investors passing on AirBnB and Amazon (40/60) and the repeated grant rejections and lack of funding for Dr. Katalin Karikó's research. Rejections are a signal, but not necessarily about the underlying ideas. Every time we've had the chance to have longer conversations with grantmakers, they've been quite excited. However, this is not the standard, partially due to a grantmaking staffing shortage and the lack of public RFPs over the last few years.
The fact that EA and AIS have had very early funding from Dustin may have made the field relatively complacent and accepting of closing down projects if OP doesn't fund them (which, to be fair, is sometimes a better signal than if e.g. NSF doesn't). But the standard across non-profits is very aggressive multi-year courting campaigns for high-net-worth donors who can provide diversified funding, along with large public campaigns that engage very broadly on large missions. Most of the non-profits you know outside of AI safety have large teams focused solely on raising money, some of them earning $1.2M per year because they bring in much more. This is not known within AI safety because it has been disconnected from other non-profits. This would preferably not be the case, but it's reality.
With that said, I am eternally grateful for the funding ecosystem around AIS, since it is still much better in speed and feedback than what e.g. the Ford Foundation or Google.org provides (e.g. 1-year response times, zero feedback, no response deadlines, etc.).
Appendix: Apart Research Impact Report V1.0
Since you've made it this far...
Our impact report makes Apart's impact even clearer and it's definitely worth a read!
If you'd like to hear about the personal impact we've had on the people who've been part of our journey, I highly recommend checking out the following:
Since V1.0, we've also fine-tuned and re-run parts of our impact evaluation pipeline, and here are a few more numbers:
Citations of our research from (excluding universities): Centre for the Governance of AI, IBM Research, Salesforce, Institute for AI Policy and Strategy, Anthropic, Centre for the Study of Existential Risk, UK Health Security Agency, EleutherAI, Stability AI, Meta AI Research, Google Research, Alibaba, Tencent, Amazon, Allen Institute for AI, Institute for AI in Medicine, Chinese Academy of Sciences, Baidu Inc., Indian Institute of Technology, State Key Laboratory of General Artificial Intelligence, Thomson-Reuters Foundational Research, Cisco Research, Oncodesign Precision Medicine, Institute for Infocomm Research, Vector Institute, Canadian Institute for Advanced Research, Meta AI, Google DeepMind, Microsoft Research NYC, MIT CSAIL, ALTA Institute, SERI, AI Quality & Testing Hub, Hessian Center for Artificial Intelligence, National Research Center for Applied Cybersecurity ATHENE, Far AI, Max Planck Institute for Intelligent Systems, Institute for Artificial Intelligence and Fundamental Interactions, National Biomarker Centre, Idiap Research Institute, Microsoft Research India, Ant Group, Alibaba Group, OpenAI, Adobe Research, Microsoft Research Asia, Space Telescope Science Institute, Meta GenAI, Cynch.ai, AE Studio, Language Technologies Institute, Ubisoft, Flowers TEAM, Robot Cognition Laboratory, Lossfunk, Munich Center for Machine Learning, Center for Information and Language Processing, São Paulo Research Foundation, National Council for Scientific and Technological Development
Job placements: Cofounder of stealth AI safety startup @ $20M valuation, METR, GDM, Anthropic, Martian (four placements as research leads of new mech-int team plus staff), Cooperative AI Foundation, Gray Swan AI, HiddenLayer, Successif, AIforAnimals, Sentient Foundation, Leap Labs, EleutherAI, Suav Tech, Aintelope, AIS Cape Town, Human Intelligence, among others
Program placements: MATS, ERA Cambridge, ARENA, LASR, AISC, Pivotal Research, Constellation, among others
In terms of our funding pool diversity, it spans from our Lambda Labs sponsorship of $5k compute / team to tens of sponsorships from partners for research events, many large-scale (restricted) research grants, paid research collaborations, and quite a few $90k-$400k general org support grants from every funder you know and love.
I'm in a cause area most of the big funds are not yet on board with (6x only, not yet at GiveWell's 10x bar)... so we have to go out to the traditional philanthropy world to find funding. That can have many benefits for both sides.
Is it for the same reason CAIP appears to have gone bankrupt? That a “major funder” (read: Open Phil) pulled support and that triggered a cascade of funders pulling out?
EDIT: This is my unconfirmed understanding of the situation.
That's not my understanding of what happened with CAIP. There are various funders who are very happy to disagree with OpenPhil who I know considered giving to CAIP and decided against it. My understanding is that the decision was based on actual reasons, not just an information cascade from OpenPhil.
Assuming this is true, why would OP pull funding? I feel Apart's work strongly aligns with OP's goals. The only reason I can imagine is that they want to move money away from the early career talent building pipeline to more mid/late-stage opportunities.
OP has not pulled any funding. They've provided a few smaller grants over the last years that have been pivotal to Apart's journey, and I'm extremely grateful for this. OP has been a minority of Apart's funding, and the lack of support for our growth has been somewhat hard to decipher for us. I'm generally happy to chat more with OP staff about this, if anyone wishes to reach out, of course.
(conflict of interest note, I'm pretty good friends with Apart's founder)
One thing I really like about Apart is how meritocratic it is. Anyone can sign up for a hackathon, and if your project is great, win a prize. They then help prize winners with turning their project into publishable research. This year two prize winners even ended up presenting their work orally at ICLR (!!).
Nobody cares what school you went to. Nobody is looking at your gender, age, or resume. What matters is the quality of your work and nothing else.
And it turns out that when you look just at quality of the work, you'll find that it comes from all over the world - often countries that are otherwise underrepresented in the EA and AI safety community. I think that is really really cool.
I think Apart could do a much better job at communicating just how different their approach is from the vast majority of AI upskilling programmes, which heavily rely on evaluating your credentials to decide if you're worthy of doing serious research.
I don't know anything about the cost-per-participant and whether that justifies funding Apart over other AI safety projects, but there is something very beautiful and special about Apart's approach to me.
Sad to see this. I agree that Apart adds something distinctive to the ecosystem (an extremely easy entry point), so it would be a shame to see it disappear.
I wonder whether this is because there's so much competition for competitive fellowships like MATS (and perhaps even for some of the unpaid AI safety opportunities) that funders feel less need to fund projects earlier in the pipeline?
The situation these days seems pretty brutal. I'm really hoping that some other large funders enter this space soon, the situation now feels very funding-constrained.