
(This is a crosspost from LessWrong; here's the original)

I'm considering applying for some kind of a grant from the effective altruism community. A quick sketch of the specifics is here. Raemon replied there with a list of possibilities. In this post, I'll look into each of those possibilities, to make this process easier for whoever comes next. In the order Raemon gave them, those are:

OpenPhil

"Can I apply for a grant?
In general, we expect to identify most giving opportunities via proactive searching and networking. We expect to fund very few proposals that come to us via unsolicited contact. As such, we have no formal process for accepting such proposals and may not respond to inquiries. If you would like to suggest that we consider a grant — whether for your project or someone else’s — please contact us."

This looks like a case where it's at least partially about "who you know". I do in fact have some contacts I could approach in this regard, and I may do so as this search proceeds. But this does seem like a bias that it would be good to reduce. I understand that there are serious failure modes for being "too open" as well as "too closed", but based on the above, I think OpenPhil currently tilts towards the latter. Perhaps a publicly-announced process for community vetting would help? I suspect there are people who are qualified and willing to help sort the slush pile that a more open process would create.

CEA (Center for Effective Altruism)

Applications for the current round have now closed. If you’d like to be notified when applications next open, please submit your contact information through this form.
The goal of Effective Altruism Grants is to enable people to work on projects that will contribute to solving some of the world’s most important problems. We are excited to fund projects that directly contribute to helping others, as well as projects that will enable individuals to gain the skills needed to do so.
...
CEA only funds projects that further its charitable objects.[1] However, we welcome applications that may be of interest to our partners who are also looking to support promising projects. Where appropriate, we have sometimes passed applications along to those partners.

This would seem to be a dead end for my purposes in two regards. First, applications are not currently open, and it's not clear when they will be. And second, this appears to focus on projects with immediate benefits, and not meta-level basic research like what I propose.

BERI (Berkeley Existential Risks Initiative) individual grants

BERI’s Individual Grants program focuses on making grants to individuals or teams of individuals, rather than to organizations. There are several types of individual grants programs that BERI expects to run, such as:
Individual Project Grants are awarded to individuals to carry out projects directly in service of BERI’s mission.
Individual Level-Up Grants are awarded to individuals to carry out projects or investigations to improve the skills and knowledge of the grantee, with hopes that they will carry out valuable work for BERI’s mission in the future.
What is the process for obtaining an individual grant from BERI?
Typically, BERI will host “rounds” for its various individual grants programs. Details about how to apply will be in the announcement of the round.... If you would like to be notified when BERI is running one of the above grants rounds, please send an email to individual-grants@existence.org noting which type of grant round you are interested in.

Another dead end, at the moment, as applications are not open.

EA Funds

There are 4 funds (Global Development, Animal Welfare, Long-Term Future, and Effective Altruism Meta). Of these 4, only Long-Term Future appears to have a process for individual grant applications, linked from its main page. (Luckily for me, that's the best fit for my plan anyway.)

We are particularly interested in small teams and individuals that are trying to get projects off the ground, or that need less money than existing grant-making institutions are likely to give out (i.e. less than ~$100k, but more than $10k). Here are a few examples of project types that we're open to funding an individual or group for (note that this list is not exhaustive):
+ To spend a few months (perhaps during the summer) to research an open problem in AI alignment or AI strategy and produce a few blog posts or videos on their ideas
+ To spend a few months building a web app with the potential to solve an operations bottleneck at x-risk organisations
+ To spend a few months up-skilling in a field to prepare for future work (e.g. microeconomics, functional programming, etc.)
+ To spend a year testing an idea that has the potential to be built into an org.

This is definitely the most promising option for my purposes. I will be applying to them in the near future.

Conclusions

I'm looking for funds in the $10K-$100K range for a short-term project that would probably fall through the gaps of traditional funding mechanisms: an individual basic research project. The EA community seems to be trying to fund this kind of project in a way that leaves fewer arbitrary gaps while still maintaining rigorous standards. Nevertheless, I think the landscape I surveyed above remains fragmented in arbitrary ways, and worthy projects are probably still falling through the gaps.

Raemon suggested in a comment on my earlier post that "something I'm hoping can happen sometime soon is for those grantmaking bodies to build more common infrastructure so applying for multiple grants isn't so much duplicated effort and the process is easier to navigate, but I think that'll be awhile". I think such "common infrastructure" would also enable a more unified triage process, so that the best proposals don't fall through the cracks; this benefit might be even greater than the ones Raemon mentioned (less duplicated effort and easier navigation). I understand that this kind of refactoring takes time and work, and it probably won't be ready in time for my own proposal.

Comments



(x-posting my comment)

Hi Jameson,

I lead the EA Grants program at CEA and anyone should feel free to contact me (nicole.ross@centreforeffectivealtruism.org) if they have any questions or if a time sensitive opportunity comes up before the next grant round opens. Please feel very free to reach out!

Also, in case it's helpful: I looked at your other post briefly, and I don't think the topic automatically excludes it from EA Grants.

More generally, I'd be interested in hearing your thoughts about the types of projects that might be falling through the cracks. I only recently started at CEA and am still thinking through what EA Grants should look like in the future (e.g. what niche it should fill within the funding space, how it can be better and more efficient). If you (or others) have thoughts on this topic, please email me: nicole.ross@centreforeffectivealtruism.org.

Hi Jameson,

I'm a fund manager for the EA Meta Fund. Your assessment in your post is incorrect - we are also open to individual grant applications, though applications for the February distribution have now closed. I'd expect them to open again in a couple of months.

I'm curious how you got the impression that we aren't open to applications. It's important to us that we are able to reach all interested individuals so any insight into where we may have failed to communicate that is useful to us.

The Long Term Future Fund has a section for "Can you fund my project?" on its main page, while your fund has no section that parsed to me as asking/answering that question.

Let's Fund has recently been set up to raise funding for neglected and speculative projects in effective altruism. They seem to focus particularly on research. It could be worth reaching out to them about whether your project is the kind they'd be interested in fundraising for.

Check out Tyler Cowen's Emergent Ventures.

We want to jumpstart high-reward ideas—moonshots in many cases—that advance prosperity, opportunity, liberty, and well-being. We welcome the unusual and the unorthodox.
Projects will either be fellowships or grants: fellowships involve time in residence at the Mercatus Center in Northern Virginia; grants are one-time or slightly staggered payments to support a project.
Think of the goal of Emergent Ventures as supporting new ideas and projects that are too difficult, too hard to measure, too unusual, too foreign, too small, or…too something to make their way through the usual foundation and philanthropic process.

Here's the first cohort of grant recipients. I think your project would fit what they're looking for, and it's a pretty low cost to apply.

For sure. Also check with Tyler before applying because there's some stuff he definitely won't fund (and he replies to his email).

(Your crossposting link goes to the edit page of your post, not the post itself.)

Thanks, fixed.

YC 120 isn't quite a funding source, but getting in would connect you with a bunch of possible funders. Applications close on Feb 18th.

If anybody here thinks they could help me find a source of funding, or just help understand how funding works in this area and what I need to do to get it, I'd love to have a deeper conversation. I'd also be grateful if you could suggest other people I should be talking to. My email is jameson dot quinn on google's well-known email service.

Since your projects are in the area of voting theory, you might want to contact: https://instituteforcompgov.org/about/

Thanks.

They don't appear to have a regular grant-making process. Is there someone inside that organization whom you'd recommend I talk to? If not, I'm still grateful; I've had positive correspondence with Alex Tabarrok in the past, though I doubt he'd remember it, so I could start with him.

I myself don't have any contacts inside that organization, I just happened to know that they exist.
