
Background

In a previous post, Brendon Wong discussed how to improve the early-stage funding landscape in the EA community. His ideas strongly resonated with me, so I worked with Brendon to research and develop some of them into a more concrete strategy.

Historically, early-stage EA projects appear to have been funded by one or more “angels” – a term borrowed from the for-profit startup investment space referring to individuals who evaluate and invest in companies with their own money.

When angels cooperate in an “angel group,” they improve the investment process in three meaningful ways. First, each angel becomes more aware of investment opportunities because there is a central place for companies to apply for funding and each angel can refer potential investments to the group. Second, each angel can draw on the expertise of fellow angels, gaining different perspectives on an opportunity, making better investment decisions, and improving their skill set. Third, each angel takes on less investment overhead on average by sharing the work or hiring outside management to carry it out.

Deciding to test an angel group

Brendon and I arrived at three independent proposals for improving the early-stage funding environment:

  1. An online portal to enable the broader community to discover grant opportunities, add their thoughts on the relative merits and risks of grant proposals, and directly fund grants without an intermediary
  2. An angel group to improve how angel funding currently works in EA
  3. A distributed group of volunteer grant evaluators with expertise across many different areas of EA to improve upon the traditional model of a small centralized group evaluating a tremendous range of grants

Our R&D process to improve the early-stage EA project environment starts with collecting feedback from community members, then launching an initial solution, and finally iteratively improving the solution according to qualitative and quantitative feedback.

To collect feedback, Brendon and I reached out to EA Grants, BERI, and additional people/groups in the grantmaking space. Based on what we learned, we have decided to launch the angel group first because it has the clearest value add for the community and is straightforward to establish and operate. We will iteratively improve the angel group as needed in response to data and feedback.

How to get involved

Become an angel: We are seeking effective altruists who are prepared to spend significant time and money to evaluate and fund early-stage grants. If this sounds like you or someone you know, please direct yourself or your contacts to this angel application form: https://goo.gl/forms/WCt7hC9IuMJJT3qZ2

Apply for funding: We have created a preliminary application form for the primary purpose of learning which individuals and projects are applying for funding: https://goo.gl/forms/Gg8AgfUISJd21x9v2

Provide our initiative with funding: Our initiative to improve early-stage funding is, ironically enough, operating without any funding. We do not require funding to launch and administer this angel group, but funding would be really helpful for supporting our operations and the launch and testing of additional measures to improve the early-stage funding landscape.

Provide grant applicants with funding (without evaluating grants yourself): We intend on launching a more streamlined system for accepting public donations in the future, but if you are interested in funding promising early-stage projects in EA and having our angels/evaluators evaluate the projects, please get in touch with us about contributing.

Volunteer for us: If you are interested in assisting with our initiative, please get in touch! In the future, we will support evaluation volunteers who do not have capital but are talented at evaluating projects. We may not be able to accommodate evaluation volunteers at this time, but if you are passionate about advising or evaluating projects, please get in touch and we will see what the best arrangement is for your situation.

Comments (8)



Great idea -- this is very much the way I want to use my philanthropy!

To support this, I asked in the EA forum post about EA grants whether unsuccessful applications to EA grants could be made public (with the applicant's permission, of course) so that others could look into those funding opportunities. http://effective-altruism.com/ea/1t9/ea_grants_applications_are_now_open/fq9

It seems this has had no reply.

I imagine it's complicated to release details about projects that aren't selected to receive a grant. Presumably there are reasons the project wasn't selected. If EA Grants wanted to publicize their rejects, it seems like they have two main options:

1) Make the project public without explaining their reasons for rejecting it. In this case, EA Grants might be making it more likely that a bad project is funded, by bringing it to the attention of other funders without warning them of possible pitfalls.

2) Make the project public AND explain their reasons. This might work, but takes up more time and social capital. EA Grants would have to spend time figuring out how to phrase their reasons diplomatically, so that a) their rejects aren't too discouraged or angry, b) other excellent candidates aren't discouraged from applying, c) other funders are appropriately warned, etc. It's hard to balance all of this. CEA also wouldn't be able to disclose confidential information that their decision might rely on, making it even trickier.

In summary: I think this is a lot harder than it might initially appear. (That said, there might still be good ways to do it?)

Thanks for engaging with this, itty. I agree that option (2) would be onerous for EA Grants.

However, I don't see how option (1) makes things worse. They could simply publish the grant applications without endorsement or indeed any comment beyond the fact that those projects didn't make the cut.

If they don't do this, funders like me are simply left to find funding opportunities on their own.

Thanks Sanjay!

We are still working on the grant application form. I will add an option to the form that allows us to pass it on to EA Grants, BERI, etc. if our angels are unable to fund it.

Thanks for the supportive words Sanjay!

We also believe that early-stage grant opportunities should be made more transparent, and we even proposed a system in our post to create an "online portal to enable the broader community to discover grant opportunities, add their thoughts on the relative merits and risks of grant proposals, and directly fund grants without an intermediary." Making an online portal is more involved than making an angel group, but we may launch something like this in the coming months.

We have already reached out to CEA regarding getting access to EA Grants' grant opportunities. Once our angel group gets going, I intend to resume our contact with CEA to see what we can do regarding sharing grant opportunities in the early-stage funding space.

CEA doesn't seem to be as responsive on the EA Forum but we have been able to communicate with them via direct outreach.

I'd love to subscribe to a blog where you publish what grants you've recommended. Are you planning to run something like that?

It's interesting to see that we are receiving community responses that line up with three areas of demand: EAs who just want to fund grants, EAs who want to fund and evaluate grants, and EAs who just want to evaluate grants. There is also the category of EAs who want to perform auxiliary functions like helping people assess the impact of working on an EA project and providing advising/support to EA projects that are running.

In our post, we mentioned a system targeted towards the third group (EAs that solely want to evaluate projects) involving the creation of a "distributed group of volunteer grant evaluators with expertise across many different areas of EA to improve upon the traditional model of a small centralized group evaluating a tremendous range of grants." This system will operate meritocratically and I anticipate that it will operate very transparently (barring concerns about project confidentiality).

As Ben mentioned, for the angel group which aims to target the middle group of EAs that want to fund and evaluate grants, we will cater to what angels in the group want. It's hard to tell if there will be strong consensus either way, or a divided group. I anticipate that at least some angels, particularly those that are confident in their grantmaking ability and process, will publicize their grants, and we definitely don't have a problem with that.

We have acquired the domain altruism.vc as a preliminary brand name and website for our initiative. We may use https://altruism.vc/, Medium, or the new EA Forum to post grant recommendations and grants we have issued.

Probably.

Considering that individuals will be using their own money to fund projects in the angel group, if for some reason our angels overwhelmingly request to have their grants private, we may decide not to publish. However, I predict the opposite case will occur, wherein angels take pride in the grants they've made and the projects they've funded.
