
Nonlinear spoke to dozens of earn-to-givers and a common sentiment was, "I want to fund good AI safety-related projects, but I don't know where to find them." At the same time, applicants don't know how to find funders either. And would-be applicants are often aware of just one or two funders - some think it's "LTFF or bust" - causing many to give up before they've started, demoralized, because fundraising seems too hard.

As a result, we’re trying an experiment to help folks get in front of donors and vice versa. In brief: 

Looking for funding? 

Why apply to just one funder when you can apply to dozens? 

If you've already applied for EA funding, simply paste your existing application. We’ll share it with relevant funders (~50 so far) in our network. 

You can apply if you’re still waiting to hear from other funders. This way, instead of having to awkwardly ask dozens of people and get rejected dozens of times (if you can even find the funders), you can just send in the application you already made.  

We’re also accepting non-technical projects relevant to AI safety (e.g. meta, forecasting, field-building, etc.)

Application deadline: May 17, 2023. [Edit: new deadline is the 24th, to accommodate the NeurIPS deadline]

Looking for projects to fund?

Apply to join the funding round by May 24, 2023. Soon after, we'll share access to a database of applications relevant to your interests (e.g. interpretability, moonshots, forecasting, field-building, novel research directions, etc).

If you'd like to fund any projects, you can reach out to applicants directly, or we can help coordinate. This way, you avoid the awkwardness of directly rejecting applicants, and don’t get inundated by people trying to “sell” you.

Inspiration for this project

When the FTX crisis broke, we quickly spun up the Nonlinear Emergency Fund to help provide bridge grants to tide people over until the larger funders could step in. 

Instead of making all the funding decisions ourselves, we put out a call to other funders/earn-to-givers. Scott Alexander connected us with a few dozen funders who reached out to help, and we created a Slack to collaborate.

We shared every application (where the applicant consented) in an Airtable with around 30 other donors. This led to a flurry of activity as funders investigated applications. They collaborated on diligence, and grants were made that otherwise wouldn't have happened.

Some funders, like Scott, after seeing our recommendations, preferred to delegate decisions to us, but others preferred to make their own decisions. Collectively, we rapidly deployed roughly $500,000 - far more than we initially expected.

The biggest lesson we learned: openly sharing applications with funders was high leverage - possibly leading to four times as many people receiving funding and 10 times more donations than would have happened if we hadn’t shared.

If you’ve been thinking about raising money for your project idea, we encourage you to do it now. Push through your imposter syndrome because, as Leopold Aschenbrenner said, nobody’s on the ball on AGI alignment.

Another reason to apply: we've heard from EA funders that they don't get enough applications, so you should have a low bar for applying - many fund over 50% of applications they receive (SFF, LTFF, EAIF).

Since the Nonlinear Network is a diverse set of funders, you can apply for a grant size anywhere from single-digit thousands to single-digit millions of dollars.

Note: We’re aware of many valid critiques of this idea, but we’re keeping this post short so we actually ship it. We’re starting with projects related to AI safety because our timelines are short, but if this is successful, we plan to expand to the other cause areas.

Apply here.

Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library.

Comments (18)



Hey there! Are you interested in funding people to do upskilling in AIS? If yes, I might send this opportunity to a few people, but if not, then I would not want them to apply and have you go through the applications! Thanks!

Yes, that counts as AI-safety related :)

How do you prevent projects with large downside risk from getting funded?

LTFF might be able to detect and turn down those projects. But some members of your funder network might not.

Good question! So, that's important, but I'm less worried about this because:

  1. All these donors were giving anyways. This just gives them more / better options to choose from.
  2. Donors are only one step in the chain of the unilateralist's curse. If people fund a bad idea, then it'll get ripped to shreds on the Forum :P
  3. LTFF is also composed of fallible humans who might miss large downside risk projects. 
  4. I'm far more worried about the bureaucrat's curse in AI safety.

In most endeavors, you expect to receive many nos before receiving a yes (e.g. applying to schools, jobs, publishing papers/books, startups, etc.). In EA it's common to receive one no and for people to give up.

I think this would only make sense in a field where talent/value were easy to spot and evaluate and there were good feedback loops. But AI safety is far more like evaluating startup founders than evaluating bridge-builders.

Except even more difficult to evaluate, because at least with for-profit founders, you find out years later if they made money or not! With ethics, you can't even tell if you're going in the right direction! 

If that's the case, we should have more evaluators, so that fewer people slip through the cracks.

I discuss something similar in another comment thread here.

Thanks for this. Seems important to provide more funding to folks who want to do good work!


Two worries:

  1. Doesn't applying to many funds at once end up taking more grantmaker time, of which there is already too little?
  2. Doesn't this lead to some kind of funder bystander-ish effect? I.e. funders thinking "better not fund this, lots of other funders know about this; better fund the people who just applied to us and are less likely to get other funding counterfactually"

Cool initiative! 

I am not very involved in AI Safety, but over time I have heard multiple times from big funders in the ecosystem something like "everything in AI Safety that SHOULD be funded IS being funded. What we really need are good people working on it." I'd expect e.g. OP to be excited to fund great people working in the space, so I'm curious why you think the people who will apply to your network aren't getting funded otherwise.

Just for context: I am very FOR a more diverse funding ecosystem so I think getting more people to fund more projects who have different strategies for funding and risk tolerances is going in the right direction. 

Good question! Here are a few thoughts on that:

  • Evaluating charities is more like evaluating startups than evaluating bridge-builders

You can tell if somebody is a good bridge builder. We have good feedback loops on bridges and we know why bridges work. For bridges, you can have a small number of experts making the decisions and it will work out great.

However, with startups, nobody really knows what works or why. Even with Y Combinator, potentially the best startup evaluator in the world, the vast majority of their bets don’t work out. We don’t know why startups work and the feedback loops are slow and ambiguous. 

Charity startups and projects are more like startups, but they’re actually worse. At least with for-profits you can tell eventually if something is profitable or not. With impact, you can never know for sure. Like, we can still discuss whether Eliezer has been net positive or not because of his potential influence on the launch of OpenAI. And we can even question whether AMF is net positive, because of its flow-through effects on factory farmed animals. Heck, we can even question the whole framework of consequentialism, and maybe it’s better to be a deontologist, etc. 

So, given that Y Combinator misses tons of opportunities in a field with better feedback loops and a better understanding of how things work, we should expect that to be even more the case for large EA funders. 

  • People have different values

With YC, at least everybody’s trying to maximize the same goal - money. With nonprofits, you might actually be pursuing different goals. Even if everybody’s a utilitarian, there’s a bunch of different sorts of utilitarians you can be. 

  • People can spot different talent

Different people can spot different types of talent or theories of change based on their background. For example, people who’ve spent their entire lives in academia might be better at spotting academic talent but less good at spotting entrepreneurial talent, and vice versa. 

  • It allows for more geographical diversity

Right now it’s much harder to get funding if you’re not based in the Bay Area or London. This will help fix that.

  • The big funders often only accept certain grant sizes

Big funders usually don’t have the time to process smaller grants, leading to a lot of people missing out.

  • Often it’s just one person evaluating a grant, leading to increased odds of missed opportunities

Due to time constraints, big EA funders often only have one person review an application before making a decision. This can lead to all sorts of noise in the assessments, like them making worse decisions because they’re hungry, tired, distracted, feeling emotional, don’t know much about the field, misunderstood the application, had a bias towards the applicant, etc etc. 

I remember reading an article here about grant applications being noisy but can't find it. Kat-points to anybody who finds it and links it in a reply! 


Finally, I've definitely seen a lot of people rejected for funding who I think were doing good work or went on to do it anyway. It's really easy for people to be refused funding for all sorts of reasons.

In general, I really want to push back against the meme in our community that if you don’t get funding from one of the big EA funders, that must mean your project isn’t good. 

For most things in this sort of category, even the absolute best have to try many times before they get accepted. Even the best scientists have to apply to a lot of different schools and grants. Even the best authors get rejected from publishing companies. Even the best founders have to ask dozens to hundreds of investors before they get funded. Many people who’ve been rejected by tons of EA orgs for jobs or grants have gone on to do great things. 

There’s room for disagreement on how to do the most good, and that’s what I love about EA. And now, hopefully, with more diverse funders, we can turn that productive disagreement into action, and then impact. 

I agree with you and also want to push back on the meme that “all the good stuff gets funded”.

There are many factors going into that issue, but I think the biggest are the bottlenecks within the pipeline that brings money from OP to individual donation opportunities. Most directly, OP has a limited staff and a lot of large, important grants to manage. They often don't have the spare attention, time, or energy to solicit, vet, and manage funding to the many individuals and small organizations that need funding.

LTFF and other grantmakers have similar issues. The general idea is just that there are many inefficiencies in the grantmaker -> ??? -> grantee market. The market is especially inefficient for funding opportunities that are small (because the fixed costs of granting remain high) and weird (because the downside risk is magnified for large grantmakers).

Worse, I hear that a big issue is that everyone asks this same question "Why aren't you already funded by [funders that are not me]?" of new ventures who lack existing personal connections to the big funders, which leads them to never get off the ground.

I’m an entrepreneur looking to pivot to a more impactful enterprise. I’d love to work on AI safety but do not have the technical expertise.

Essentially I can be a cofounder who helps with strategy, operations, management, sales, recruitment, etc.

I'm thinking people might apply with ideas and need profiles like mine to get things off the ground quicker, if there's a match between us.

Should I “apply”? Is there any way to get in touch with people who might need a cofounder in this space? Thanks!


Hi, the General Longtermism Team at Rethink Priorities is currently looking to facilitate faster and better creation of entrepreneurial longtermist projects – that is, new organizations, infrastructure, programs, and services that we believe will cost-effectively contribute to reducing existential risk. Some of these projects are likely to be oriented around AI safety.

I'll DM you our expression of interest form to be a founder/co-founder for one of these projects.


Hi, any update on when funders can see applications?

We aim to send it out to funders within the next 48 hours.

Hello there,

Are you interested in funding this theory of mine that I submitted to the AI Alignment Awards? I was able to make this work in GPT2 and am now writing up the results. I was able to make GPT2 shut itself down (100% of the time) even when it's aware of the shutdown instruction, called "the Gauntlet", embedded through fine-tuning an artificially generated archetype called "the Guardian", essentially solving corrigibility and outer and inner alignment. https://twitter.com/whitehatStoic/status/1645758144537034752?t=ps-Ccu42tcScTmWg1qYuqA&s=19

Let me know if you guys are interested. I want to test it in higher parameter models like Llama and Alpaca but don't have the means to finance the equipment.

I also found out that there is a weird setting in the temperature for GPT2 where in the range of .498 to .50 my shutdown code works really well, I still don't know why though. But yeah I believe that there is an incentive to review what's happening inside the transformer architecture.

Here was my original proposal: https://www.whitehatstoic.com/p/research-proposal-leveraging-jungian

I'll post my paper for the corrigibility solution too once finished probably next week.

Looking forward to hearing from you.

Best regards,

Miguel

I have submitted an application - no need to reply!

Also, using fine-tuning with traditional Jungian archetypes allowed GPT2 to tell stories that were either depressing or motivational in nature 100% of the time. Thanks for reading!
