
Nonlinear spoke to dozens of earn-to-givers, and a common sentiment was, "I want to fund good AI safety-related projects, but I don't know where to find them." At the same time, applicants don’t know how to find funders either. And would-be applicants are often aware of just one or two funders - some think it’s “LTFF or bust” - so many give up before they’ve started, demoralized, because fundraising seems too hard.

As a result, we’re trying an experiment to help folks get in front of donors and vice versa. In brief: 

Looking for funding? 

Why apply to just one funder when you can apply to dozens? 

If you've already applied for EA funding, simply paste your existing application. We’ll share it with relevant funders (~50 so far) in our network. 

You can apply if you’re still waiting to hear from other funders. This way, instead of having to awkwardly ask dozens of people and get rejected dozens of times (if you can even find the funders), you can just send in the application you already made.  

We’re also accepting non-technical projects relevant to AI safety (e.g. meta, forecasting, field-building, etc.).

Application deadline: May 17, 2023. [Edit: new deadline is the 24th, to accommodate the NeurIPS deadline]

Looking for projects to fund?

Apply to join the funding round by May 24, 2023. Soon after, we'll share access to a database of applications relevant to your interests (e.g. interpretability, moonshots, forecasting, field-building, novel research directions, etc.).

If you'd like to fund any projects, you can reach out to applicants directly, or we can help coordinate. This way, you avoid the awkwardness of directly rejecting applicants, and don’t get inundated by people trying to “sell” you.

Inspiration for this project

When the FTX crisis broke, we quickly spun up the Nonlinear Emergency Fund to help provide bridge grants to tide people over until the larger funders could step in. 

Instead of making all the funding decisions ourselves, we put out a call to other funders/earn-to-givers. Scott Alexander connected us with a few dozen funders who reached out to help, and we created a Slack to collaborate.

We shared every application (where the applicant consented) in an Airtable with around 30 other donors. This led to a flurry of activity as funders investigated applications. They collaborated on diligence, and grants were made that otherwise wouldn’t have happened.

Some funders, like Scott, after seeing our recommendations, preferred to delegate decisions to us, but others preferred to make their own decisions. Collectively, we rapidly deployed roughly $500,000 - far more than we initially expected.

The biggest lesson we learned: openly sharing applications with funders was high leverage - possibly leading to four times as many people receiving funding and ten times more donations than would have happened if we hadn’t shared.

If you’ve been thinking about raising money for your project idea, we encourage you to do it now. Push through your imposter syndrome because, as Leopold Aschenbrenner said, nobody’s on the ball on AGI alignment.

Another reason to apply: we’ve heard from EA funders that they don’t get enough applications, so you should have a low bar for applying - many fund over 50% of applications they receive (SFF, LTFF, EAIF).

Since the Nonlinear Network is a diverse set of funders, you can apply for a grant anywhere from single-digit thousands to single-digit millions of dollars.

Note: We’re aware of many valid critiques of this idea, but we’re keeping this post short so we actually ship it. We’re starting with projects related to AI safety because our timelines are short, but if this is successful, we plan to expand to the other cause areas.

Apply here.

Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library.

Comments (18)



Hey there! Are you interested in funding people to do upskilling in AIS? If yes, I might send this opportunity to a few people, but if not, then I would not want them to apply and have you go through the applications! Thanks!

Yes, that counts as AI-safety related :)

How do we prevent projects with large downside risk from getting funded?

LTFF might be able to detect and turn down those projects. But some members of your funder network might not.

Good question! So, that's important, but I'm less worried about this because:

  1. All these donors were giving anyways. This just gives them more / better options to choose from.
  2. Donors are only one step in the chain for the unilateralist's curse. If people fund a bad idea, then it'll get ripped to shreds on the Forum :P
  3. LTFF is also composed of fallible humans who might miss large downside risk projects.
  4. I'm far more worried about the bureaucrat's curse in AI safety.

In most endeavors, you expect to receive many nos before receiving a yes (e.g. applying to schools, jobs, publishing papers/books, startups, etc.). In EA, it's common to receive one no and for people to give up.

I think this would only make sense if it was in a field where talent / value was easy to spot and evaluate and there were good feedback loops. But AI safety is far more like evaluating startup founders than evaluating bridge-builders. 

Except even more difficult to evaluate, because at least with for-profit founders, you find out years later if they made money or not! With ethics, you can't even tell if you're going in the right direction! 

If that's the case, we should have more evaluators, so that fewer people slip through the cracks.

I discuss something similar in another comment thread here.

Thanks for this. Seems important to provide more funding to folks who want to do good work!

Cool initiative! 

I am not very involved in AI Safety but have heard multiple times from big funders in the ecosystem something like "everything in AI Safety that SHOULD be funded IS being funded. What we really need are good people working on it." I'd expect e.g. OP to be excited to fund great people working in the space, so I'm curious why you think the people who will apply to your network aren't getting funded otherwise.

Just for context: I am very FOR a more diverse funding ecosystem so I think getting more people to fund more projects who have different strategies for funding and risk tolerances is going in the right direction. 

Good question! Here are a few thoughts on that:

  • Evaluating charities is more like evaluating startups than evaluating bridge-builders

You can tell if somebody is a good bridge builder. We have good feedback loops on bridges and we know why bridges work. For bridges, you can have a small number of experts making the decisions and it will work out great.

However, with startups, nobody really knows what works or why. Even with Y Combinator, potentially the best startup evaluator in the world, the vast majority of their bets don’t work out. We don’t know why startups work and the feedback loops are slow and ambiguous. 

Charity startups and projects are more like startups, but they’re actually even harder to evaluate. At least with for-profits you can tell eventually if something is profitable or not. With impact, you can never know for sure. Like, we can still discuss whether Eliezer has been net positive or not because of his potential influence on the launch of OpenAI. And we can even question whether AMF is net positive, because of its flow-through effects on factory farmed animals. Heck, we can even question the whole framework of consequentialism, and maybe it’s better to be a deontologist, etc.

So, given that Y Combinator misses tons of opportunities in a field with better feedback loops and a better understanding of how things work, we should expect that to be even more the case for large EA funders. 

  • People have different values

With YC, at least everybody’s trying to maximize the same goal - money. With nonprofits, you might actually be pursuing different goals. Even if everybody’s a utilitarian, there’s a bunch of different sorts of utilitarians you can be. 

  • People can spot different talent

Different people can spot different types of talent or theories of change based on their background. For example, people who’ve spent their entire lives in academia might be better at spotting academic talent but less good at spotting entrepreneurial talent, and vice versa. 

  • It allows for more geographical diversity

Right now it’s much harder to get funding if you’re not based in the Bay Area or London. This will help fix that.

  • The big funders often only accept certain grant sizes

Big funders usually don’t have the time to process smaller grants, leading to a lot of people missing out.

  • Often it’s just one person evaluating a grant, leading to increased odds of missed opportunities

Due to time constraints, big EA funders often only have one person review an application before making a decision. This can lead to all sorts of noise in the assessments: the reviewer might make a worse decision because they’re hungry, tired, distracted, or feeling emotional, or because they don’t know much about the field, misunderstood the application, had a bias towards the applicant, etc.

I remember reading an article here about grant applications being noisy but can't find it. Kat-points to anybody who finds it and links it in a reply! 


Finally, I’ve definitely seen a lot of people rejected for funding who I think were doing good work or went on to do it anyways. It’s really easy for people to be refused funding for all sorts of reasons.

In general, I really want to push back against the meme in our community that if you don’t get funding from one of the big EA funders, that must mean your project isn’t good. 

For most things in this sort of category, even the absolute best have to try many times before they get accepted. Even the best scientists have to apply to a lot of different schools and grants. Even the best authors get rejected from publishing companies. Even the best founders have to ask dozens to hundreds of investors before they get funded. Many people who’ve been rejected by tons of EA orgs for jobs or grants have gone on to do great things. 

There’s room for disagreement on how to do the most good, and that’s what I love about EA. And now, hopefully, with more diverse funders, we can turn that productive disagreement into action, and then impact. 

I agree with you and also want to push back on the meme that “all the good stuff gets funded”.

There are many factors going into that issue, but I think the biggest are the bottlenecks within the pipeline that brings money from OP to individual donation opportunities. Most directly, OP has a limited staff and a lot of large, important grants to manage. They often don't have the spare attention, time, or energy to solicit, vet, and manage funding to the many individuals and small organizations that need funding.

LTFF and other grantmakers have similar issues. The general idea is just that there are many inefficiencies in the grantmaker -> ??? -> grantee market. The market is especially inefficient for funding opportunities that are small (because the fixed costs of granting remain high) and weird (because the downside risk is magnified for large grantmakers).

Worse, I hear that a big issue is that everyone asks this same question "Why aren't you already funded by [funders that are not me]?" of new ventures who lack existing personal connections to the big funders, which leads them to never get off the ground.

Two worries:

  1. Doesn't applying to many funds at once end up taking more grantmaker time, of which there is already too little?
  2. Doesn't this lead to some kind of funder bystander-ish effect? I.e. funders thinking "better not fund this, lots of other funders know about this, better fund the people who just applied to us and are less likely to get other funding counterfactually"

I’m an entrepreneur looking to pivot to a more impactful enterprise. I’d love to work on AI safety but do not have the technical expertise.

Essentially, I can be a cofounder who helps with strategy, operations, management, sales, recruitment, etc.

I’m thinking people might apply with ideas and need profiles like mine to get things off the ground more quickly if there is a match between us.

Should I “apply”? Is there any way to get in touch with people who might need a cofounder in this space? Thanks!

[anonymous]

Hi, the General Longtermism Team at Rethink Priorities is currently looking to facilitate faster and better creation of entrepreneurial longtermist projects – that is, new organizations, infrastructure, programs, and services that we believe will cost-effectively contribute to reducing existential risk. Some of these projects are likely to be oriented around AI safety.

I'll DM you our expression of interest form to be a founder/co-founder for one of these projects.

[anonymous]

Hi, any update on when funders can see applications?

We aim to send it out to funders within the next 48 hours.

Hello there,

Are you interested in funding this theory of mine that I submitted to the AI Alignment Awards? I was able to make this work in GPT2 and am now writing up the results. I was able to make GPT2 shut itself down (100% of the time) even if it's aware of the shutdown instruction called "the Gauntlet," embedded through fine-tuning an artificially generated archetype called "the Guardian," essentially solving corrigibility and outer and inner alignment. https://twitter.com/whitehatStoic/status/1645758144537034752?t=ps-Ccu42tcScTmWg1qYuqA&s=19

Let me know if you guys are interested. I want to test it in higher parameter models like Llama and Alpaca but don't have the means to finance the equipment.

I also found out that there is a weird temperature setting for GPT2: in the range of .498 to .50, my shutdown code works really well, though I still don't know why. But yeah, I believe there is an incentive to review what's happening inside the transformer architecture.

Here was my original proposal: https://www.whitehatstoic.com/p/research-proposal-leveraging-jungian

I'll post my paper for the corrigibility solution too once finished probably next week.

Looking forward to hearing from you.

Best regards,

Miguel

I have submitted an application - no need to reply!

Also, fine-tuning with traditional Jungian archetypes allowed GPT2 to tell stories that were either depressing or motivational in nature 100% of the time. Thanks for reading!
