
Background

I’ve been actively involved in EA since 2020, when I started EA Romania. In my experience, one problem that frustrates many grant applicants is the limited feedback offered by grantmakers. In 2022, at the EAG in London, while trying to get more detailed feedback regarding my own application at the EAIF office hours, I realized that many other people had similar complaints. EAIF’s response seemed polite but not very helpful. Shortly after this experience, I also read a forum post where Linch, a junior grantmaker at the time, argued that it’s “rarely worth your time to give detailed feedback.” The argument was:

[F]rom a grantmaking perspective, detailed feedback is rarely worthwhile, especially to rejected applicants. The basic argument goes like this: it’s very hard to accurately change someone’s plans based on quick feedback (and it’s also quite easy to do harm if people overupdate on your takes too fast just because you’re a source of funding). Often, to change someone’s plans enough, it requires careful attention and understanding, multiple followup calls, etc. And this time investment is rarely enough for you to change a rejected (or even marginal) grant to a future top grant. Meanwhile, the opportunity cost is again massive.

Similarly, giving useful feedback to accepted grants can often be valuable, but it just isn’t high impact enough compared to a) making more grants, b) making grants more quickly, and c) soliciting creative ways to get more highest-impact grants out.

Since then, I have heard many others complain about the lack of feedback when applying for grants in the EA space. My specific experience was with the EAIF, but based on what I've heard, I suspect this problem might be endemic to EA grantmaking culture in general.

The case for more feedback

Linch's argument that "the opportunity cost of giving detailed feedback is massive" is only valid if by "detailed feedback" he means something genuinely time-consuming. It cannot justify EAIF's current policy of giving no feedback at all by default, and literally one sentence of feedback upon request. Using the argument to justify something so extreme would be an example of what some might call "act utilitarianism", "naive utilitarianism", or "single-level utilitarianism": in certain cases, giving feedback may indeed seem like a waste of resources compared to other counterfactual actions. But if you only consider first-order consequences, killing a healthy patient who comes in for a checkup and using his organs to save five others is also "effective". In reality, we need to consider higher-order consequences too. Is it healthy for a movement to adopt a policy of not giving feedback to grant applicants?

Personally, I feel such a policy runs the risk of seeming disrespectful towards grant applicants who spend time and energy planning projects that end up never being implemented. This is not to say that the discomfort of disappointed applicants counts for more than the suffering of malaria-infected children. But we are human, and there is a limit to how much we can change via emotional resilience workshops. Besides, there is such a thing as too much resilience. I have talked to other EAs who applied for funds, for 1:1 advice from 80,000 Hours, and so on, and many of them felt frustrated and somewhat disrespected after being rejected multiple times with no feedback or explanation. I find this particularly worrisome in the case of founders of national groups, since our experience may influence the development of the local movement. A paragraph from an article in The Economist, I think, adds to my point:

As the community has expanded, it has also become more exclusive. Conferences, seminars and even picnics held by the Centre for Effective Altruism are application-only. Simon Jenkins was an early member of the community and founded an effective-altruism group in Birmingham in Britain. He has since drifted somewhat away from the movement, after years of failing to get a job at its related institutions. It has become both more “rigorously controlled”, he said, and more explicitly elitist. During an event at a Birmingham pub he once heard someone announce that “any Oxbridge grad can get involved”. “I was like, hold on a sec, is that the standard?”

Of course such events can be interpreted in many ways, but the point here is that EA has a reputation for harboring certain problematic attitudes, and that harms the movement. Giving feedback that is longer than one line can be a good step in the direction of correcting that.

An argument from virtue ethics

I’m a typical male software developer who scores highish on autistic traits (33/50). I can relate to the hyper-systematizing way of thinking that is dominant in EA circles. In fact, this is one of the things that attracted me to EA. However, even I have started to see how this way of thinking about ethics can be problematic or extreme in certain cases.

In an article titled "Effective altruism is logical, but too unnatural to catch on", psychology professor Alan Jern considers the case of an EA escaping from a burning building who can save either a child or a Picasso worth millions of dollars. The EA logic seems to demand saving the Picasso, selling it, and donating the proceeds to effective charities that will save many children. When I first read the article, I thought this scenario was a strawman, a naive interpretation of what EAs actually believe. In 2022, however, I attended a Giving What We Can meetup organized after EAG London and had this exact discussion with a couple of people. I was surprised to find that many EAs actually agreed that the right thing to do was to save the Picasso.

Personally, I’d save the child rather than the Picasso, and I don’t think this is necessarily a violation of EA principles. EA is right when it points out that much of the charity done in the world is based on emotion, but I don’t think EA should promote the complete elimination of emotion from moral decision making. EA should not be seen as a project that replaces emotions with a hyper-rational approach. Aristotle said that virtue is the sweet spot between two vices. I believe that, as much as being overly emotional is a vice, so is being overly robotic in our moral calculations. As Joshua Greene argues in Moral Tribes:

If what utilitarianism asks of you seems absurd, then it’s not what utilitarianism actually asks of you. Utilitarianism is, once again, an inherently practical philosophy, and there’s nothing more impractical than commanding free people to do things that strike them as absurd and that run counter to their most basic motivations. Thus, in the real world, utilitarianism is demanding, but not overly demanding. It can accommodate our basic human needs and motivations, but it nonetheless calls for substantial reform of our selfish habits.

In The Life You Can Save, Peter Singer similarly argues that:

Asking people to give more than almost anyone else gives risks turning them off. It might cause some to question the point of striving to live an ethical life at all. Daunted by what it takes to do the right thing, they may ask themselves why they are bothering to try. To avoid that danger, we should advocate a level of giving that will lead to the greatest possible positive response.

Of course, where to draw the line between overly emotional and overly robotic is ultimately an empirical question. As a consequentialist, I would argue that the sweet spot between the emotional and the rational is the spot that maximizes the total long-term well-being of sentient life. Unfortunately, it's impossible to know for sure where this spot actually is. We can be sure, however, that if we promote an attitude that is too robotic, too cold and calculated, too mathematical and unemotional, EA will become an excessively narrow movement that attracts only a specific kind of personality. If extreme enough, there is a risk that EA views will be so shocking to the outside world that the movement's reputation will be damaged even further than it already has been. These repercussions are the kinds of second-order consequences that multi-level utilitarianism asks us to consider when coming up with heuristic rules to guide a community.

In some ways, not giving satisfactory feedback to grant applicants is like saving a Picasso and letting the child die. It could be the best decision in a hypothetical scenario with no higher order consequences, but this decision is not the best in the real world. People need feedback. People need to know their time and effort are valued. People need to know how to improve before they apply for funds again. They need to know whether trying again is worth it or not. The culture of “when in doubt, apply” combined with the culture of “we can do better things with our time than give feedback,” combined with lack of transparency regarding the statistical odds of getting funded, is a dangerous mix that creates resentment and harms the community.

The case for more democracy

Of course, I may be wrong. Perhaps the sample of people I spoke to who expressed resentment is not representative. Maybe there are so many individuals and groups applying for funds that it doesn’t matter if some become frustrated and abandon the movement. Perhaps keeping the current feedback policy is actually better for the long-term well-being of sentient life. Or maybe I am right and it would be better to give more feedback. How can we know? That’s the problem with multi-level utilitarianism: it becomes speculative very fast. It’s impossible to know whether one set of rules and social norms really would be better than another. However, one solution to this epistemic conundrum is democracy. We can appeal to the wisdom of crowds and ask people to vote on which option they think would empirically turn out to be better.

In my experience, one of the aspects of EA that is generally viewed as problematic is the lack of democratic values and accountability. In the secular humanist movement, where I’ve been involved for longer than I’ve been involved in EA, democracy is an explicit value, enshrined in Humanists International’s statute. Although I appreciate the culture of asking for feedback in EA, sometimes I wonder what happens to that feedback. In the secular humanist movement, if people are frustrated with the administration, they can express their criticism at conferences or in other communication channels, and if those frustrations are not addressed, members can vote leaders out in the next elections. If EAs are frustrated with the movement’s organizational structures and decision-making processes, what can we do?

I understand that democracy has its dangers, and that sometimes we should defer to experts rather than crowds. Still, we must find a balance between oligarchy and mob rule. I think EA is erring on the side of elitism and overlooking the value democracy can have as a mechanism for error-correction, and thus progress.

Conclusion

To summarize my argument:

  1. There have been several cases of grantmakers giving limited feedback when rejecting proposals. This lack of feedback harms the community.
  2. If grantmakers commit to a policy of giving more feedback, this will improve community health and the effect of this change will be net positive for the movement and the world.
  3. If we define our policies more democratically, they’re more likely to have a net positive impact because the wisdom of crowds will make our empirical assumptions more accurate.

What do you think? Do you agree that grantmakers don’t give enough feedback? Do you agree that EAs should be more suspicious of speculative arguments about the potential impact of certain policies? Do you think more democracy could improve our decision making? In what ways do you think my reasoning might be wrong? Looking forward to hearing your thoughts :)


Comments

By now I think people are well aware of the basic arguments for and against grant application feedback. To move the conversation forward it might be helpful for people to try to quantify how valuable and/or costly it would be to them. For example, if you are a grant applicant, how much lower a probability of funding would you be willing to accept in return for brief or detailed feedback? If you are a grant evaluator, how much extra time would it take to provide feedback, and how often would the feedback be so critical the applicant would likely find it unpleasant to receive?
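As a rough illustration of the applicant side of that trade-off, here is a minimal back-of-the-envelope sketch in Python; all the numbers are made up:

```python
# Hypothetical trade-off for an applicant (all numbers made up).
# Prefer "feedback but lower odds" when the subjective value of the feedback
# exceeds the expected value lost from the reduced funding probability:
#   p_baseline * grant_value <= p_reduced * grant_value + feedback_value

grant_value = 50_000     # subjective value of winning the grant ($)
feedback_value = 2_000   # subjective value of detailed feedback ($)

# Largest drop in funding probability worth accepting in exchange for feedback:
max_acceptable_drop = feedback_value / grant_value
print(f"Accept up to a {max_acceptable_drop:.1%} drop in funding probability")
# -> Accept up to a 4.0% drop in funding probability
```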

When I have read grants, most have (unfortunately) fallen closer to: "This idea doesn't make any sense" than "This idea would be perfect if they just had one more thing". When a grant falls into the latter, I suspect recipients do often get advice.

I think the problem is that most feedback would be too harsh and fundamental; these are very difficult and emotionally costly conversations to have. It can also make applicants more frustrated and spread low-fidelity advice about what the grantmaker is looking for. A rejection (hopefully) encourages the applicant to read and network more to form better plans.

I would encourage rejected applicants to speak with accepted ones for better advice. 

"these are very difficult and emotionally costly conversations to have"

I don't think this has to be the case. These things can usually be circumvented with sufficiently impersonal procedures, such as rating the application and having a public guide somewhere with tips on how to interpret the rating and a suggested path for improvement (e.g. talking to a successful grant recipient in a similar domain). A "one star" or "F" or "0" rating would probably still be painful, but that's inevitable. Trying to protect people from that strikes me as paternalistic and "ruinously empathetic".

combined with lack of transparency regarding the statistical odds of getting funded

LTFF received about 911 applications and funded 196 grants in 2023. EAIF received about 492 applications and funded 121 grants. (I'm not committing to the exact numbers; this is just what I can quickly pull from the database.)

The actual success rate should be significantly rosier than the above numbers, as a) sometimes applicants withdraw applications, b) sometimes we refer applications to other funders, and c) some applications are clearly irrelevant (eg if a homelessness shelter applies to LTFF). 

The success rate is lower than in past years, in part due to us raising the bar (which is mostly due to our own funding constraints) and in part because I think the number of spam applications we've received has gone up over time.

Note that the numbers presented shouldn't be taken too seriously. Distributions of applications to different funds should vary widely, eg the ill-fated Future Fund had something like a 4% acceptance rate in 2022, and if anything had a substantially lower bar than 2023's LTFF.

Hope that helps! Let me know if there are other stats that could be helpful.
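To make those adjustments concrete, here is a minimal sketch of how the raw figures above translate into an adjusted success rate; the excluded fraction is a hypothetical placeholder, not actual fund data:

```python
# Raw 2023 figures quoted above (approximate).
funds = {
    "LTFF": {"applications": 911, "funded": 196},
    "EAIF": {"applications": 492, "funded": 121},
}

# Hypothetical fraction of applications that arguably shouldn't count in the
# denominator (withdrawn, referred to other funders, clearly out of scope).
excluded_fraction = 0.15  # illustrative placeholder, not actual fund data

for name, f in funds.items():
    raw_rate = f["funded"] / f["applications"]
    adjusted_rate = f["funded"] / (f["applications"] * (1 - excluded_fraction))
    print(f"{name}: raw {raw_rate:.1%}, adjusted ~{adjusted_rate:.1%}")
# LTFF: raw 21.5%, adjusted ~25.3%
# EAIF: raw 24.6%, adjusted ~28.9%
```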

As someone considering applying to LTFF, I found even rough numbers here very useful. I would have guessed success rates 10x lower.

If it is fairly low-cost for you (e.g., it can be done as an automated database query), publishing this semi-regularly might be very helpful for potential applicants.

Thanks for the feedback! Do you have thoughts on what platform would be most helpful for you and other (potential) applicants? Independent EAF shortform, a point attached somewhere as part of our payout reports, listed on our website, or somewhere else?

I don't have a strong opinion here. I would guess having the information out and findable is the most important. My initial instinct is directly on, or linked from, the fund page or applicant info.

The actual success rate should be significantly rosier than the above numbers, as a) sometimes applicants withdraw applications, b) sometimes we refer applications to other funders, and c) some applications are clearly irrelevant (eg if a homelessness shelter applies to LTFF).

Doesn't (a) point the other way?

To clarify, I think withdrawn applications counted in the denominator when I was pulling data, but not the numerator. Additionally, I expect common reasons for withdrawal include being funded elsewhere; I'd weakly guess that withdrawn applications are more likely than baseline to counterfactually be funded.

Thanks for clarifying! I'd been thinking they weren't in the denominator.

(I also hadn't been thinking about why someone might withdraw, and being funded elsewhere makes a lot of sense.)

I suspect that one of the issues is that grantmakers don't want to provide false hope. They can't even tell you the obvious improvements because, in most cases, there are a bunch of other issues that would have to be fixed as well to make the application competitive.

As someone who has been rejected multiple times and sometimes received a bit of feedback, I can understand the frustration. At the same time, why is the feedback of that grantmaker in particular so important? I would advise just asking for feedback from anyone in one's EA network who you think has some understanding of grantmaker perspectives. For example, if 80,000 Hours advisors, your local EA group leadership and someone you know working at an EA org all think you and your idea are good and you should apply for funding, then I would just do that. If you get rejected by all grantmakers, then you probably were not too far below the funding bar. In that case, perhaps wait until there is hopefully more funding, or look at donors' priorities and try to align with them. I also think Linch and other grantmakers do a good job of saying what types of projects currently fall just above and below their funding bar, giving a pretty good sense of donors' priorities.

I should say that I have also gotten some of my grant applications approved, so perhaps I am a bit dismissive of views from those who never got a grant application accepted.

I would advise just asking for feedback from anyone in one's EA network who you think has some understanding of grantmaker perspectives. For example, if 80,000 Hours advisors, your local EA group leadership and someone you know working at an EA org

Most people in EA don't have anyone in their network with a good understanding of grantmakers' perspectives.

I think that "your local EA group leadership" usually don't know. The author of this post is a national group founder, and they don't have a good understanding of what grantmakers want.

A typical lunch conversation among people working in AI safety (paid researchers, who got money from somewhere) is venting about how everyone is confused by Open Phil's funding policy.

Good point; perhaps I have been especially lucky, then, as a newcomer to direct EA work and grant applications. I guess that makes me feel even more gratitude for all the support I have received, including people helping me both discuss project ideas and review grant applications.

And even if you happen to have access to people with relevant knowledge, all the arguments against the actual grantmakers offering feedback apply more strongly to them:

  • it's time-consuming, more so because they're reading the grant application in addition to their job rather than as part of it
  • giving "it makes no sense" feedback is hard, more so when personal relationships are involved and the next question is going to be "how do I make it make sense?"
  • people might overoptimize for feedback, which is a bigger problem when the person offering the feedback has more limited knowledge of current grant selection priorities

I get that casually discussing at networking events might eliminate the bottom 10% of ideas (if everyone pushes back on your idea that ballet should be a cause area or that building friendly AI in the form of human brain emulation is easy, you probably shouldn't pursue it), but I'm not sure how "networking" can possibly be the most efficient way of improving actual proposals. Unless - like in industrial funding - there's a case for third party grant writer / project manager types that actually help people turn half decent ideas into well-defined fundable projects for a share of the fund? 

I don't think this post engages with the core argument Linch makes, much less refutes it. You have some reasons more feedback is nicer than less feedback, but don't quantify the benefits, much less the costs.

That said, I had a rejection from SFF that implies a system I'd love to see replicated. From memory, it was ~"you are not in the top N% of rejections, and therefore we will not be giving detailed feedback". This took no extra work to generate (because SFF already ranks applications) and gave me a fair amount of information about where I stood. I ended up giving up on that project in that form, and that was the right decision.

But I agree with your point that no-info rejections combine poorly with "when in doubt, apply", and would love to see people stop doing the latter. 

Yeah, there were some other useful recommendations in my original post on how to do scalable feedback. We recently worked with the Manifund team to implement a new dashboard/technical way to communicate with grantees. I'm optimistic that we can find a way to extend that dashboard to provide some high-level, non-granular feedback in a way that's low-cost to grantmakers but still useful. I don't expect us to prioritize that in the short term (as opposed to hiring, fundraising, work on improving grantee experience, and processes that speed up grant evaluations further), but I am optimistic we can get something reasonable this year (this is a prediction, not a commitment).

While I expect the process-driven feedback to be mildly useful on the object level, I'm skeptical about the cultural/"warmness" benefits, however. I think most (honest) forms of "warmness" signaling come from a credible signal that you are willing to devote your time to addressing concerns, and any clear signs of automation in the pipeline would undercut that.

But I agree with your point that no-info rejections combine poorly with "when in doubt, apply", and would love to see people stop doing the latter. 

I think I'm skeptical that people apply at above the optimal rate, especially for grants. I think the numbers mostly don't add up, unless people are extremely close to the indifference point between getting a grant and their next-best option. (I'm more sympathetic to the case for job applications, particularly ones with extensive early stages). 

I reached out to Linch about doing a dialogue about grant applications. Hopefully we'll get to do so after EAG.

The culture of “when in doubt, apply” combined with the culture of “we can do better things with our time than give feedback,” combined with lack of transparency regarding the statistical odds of getting funded, is a dangerous mix that creates resentment and harms the community.

Agree! I believe this is a big contributor to burnout and to people leaving EA.

See also: The Cost of Rejection — EA Forum (effectivealtruism.org)

However, I don't think the solution is more feedback from grantmakers. The vetting bottleneck is a big part of the problem, and requiring more feedback will just make it worse.

Giving feedback on applications is hard in a way that I'm not sure how to communicate to people who have not been on the other side of an application process. Sorry for not explaining better. If someone wants to help with this, I can have a call with you, where we talk it through, and then you write it down to share on the EA Forum. I think this could be high value, since it's a point I see coming up over and over.

The reason we're vetting-bottlenecked is that very few people are trusted to do this job by the people who control the money. If you want to help solve this, don't give to EA Funds. Either make your own donation decisions, or delegate to literally anyone else. Centralising funding this way was a mistake. (This is not a critique of the people running these funds!)

As the quote says, the situation is created by a combination of factors. I'd like to change the culture of "when in doubt, apply". Writing an application that actually has a chance of succeeding is a lot of work, and for most people, rejection hurts. Also, if there were fewer applications, maybe grantmakers could give feedback.

A lot of EAs who don't have experience with the grant system think that it's much easier to get funding than it actually is. This is very bad for several reasons:

  1. It's extra demoralising to get rejected when this is the culture around me, even when I know better.
  2. If these people ever apply and get rejected, they will be more hurt and demoralised than they would have been with a more accurate picture.
  3. It makes it harder to fundraise outside the established grant system, because everyone's immediate reaction is "why not just apply for a grant?". This makes everyone even more reliant on these grants, making the vetting bottleneck even worse.

If someone wants to help with this, I can have a call with you, where we talk it through, and then you write it down to share on the EA Forum. I think this could be high value, since it's a point I see coming up over and over.

DM'd!

I mostly agree with you on the second-order consequences. But I also think a bit of feedback is usually justified even considering only the first-order consequences, as I argued in my comment on Linch's post, and others made similar comments.

Another perspective: many grant applicants and potentially impactful entrepreneurial EAs may waste a lot of time exploring a very dark space. They may spend a lot of time writing and rewriting proposals. 

They do not know whether they are 'close to being fundable' or very far from it, so they don't know:

- When to give up
- How much to make backup/fallback plans
- How to change their plans/proposal
- How much to 'jump' in adjusting their proposal in this dark space ... whether to make small or large adjustments
- In what direction to adjust

An interesting point of comparison might be grant award processes which do offer feedback.

InnovateUK, for example, asks bidders to answer defined questions, has up to five anonymous assessors score each project, and recommends the highest-scoring projects. You get those scores and comments back whether you succeed or not, occasionally with a note that an outlier score has been removed as unrepresentative. I wouldn't call that "democratic" even though you can see what are effectively votes, but it does create the impression of a sensible process and accountable assessors.

This might be more convoluted than EA grantmaking (some projects are re-scored after interview, too), but the basic idea of scoring against a set of criteria gives a reasonable indication of whether you were very close and may wish to resubmit to other funding rounds, whether you need to find better impact evidence or drop the idea even though the basic plan is plausible, or whether you should forget about the whole thing. And that last bit absolutely is useful feedback, even if people don't like it.
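For what it's worth, here is a minimal sketch of what such an assessor-scoring step might look like; the criteria and the outlier rule are assumptions for illustration, not InnovateUK's actual method:

```python
from statistics import mean, median

# Hypothetical scores (1-5) from five anonymous assessors, per assumed criterion.
scores = {
    "team":   [4, 4, 5, 4, 1],
    "impact": [3, 4, 3, 3, 3],
    "budget": [5, 4, 4, 5, 4],
}

def drop_outliers(xs, max_gap=2):
    """Drop scores more than max_gap points from the median (assumed rule)."""
    m = median(xs)
    kept = [x for x in xs if abs(x - m) <= max_gap]
    return kept or xs  # never drop every score

for criterion, xs in scores.items():
    kept = drop_outliers(xs)
    note = " (outlier removed as unrepresentative)" if len(kept) < len(xs) else ""
    print(f"{criterion}: {mean(kept):.2f}{note}")
# team: 4.25 (outlier removed as unrepresentative)
# impact: 3.20
# budget: 4.40
```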

In some cases where EA orgs have very clear funding bars, it might be even more concrete (project not in scope as clearly outside the focus area, below the $/DALY threshold, etc.). I guess if you're too explicit about metrics there's a risk Goodhart's law applies, but they can save good-faith applicants a lot of time.

I get the idea of avoiding confrontation, and that the EA world is smaller than the world of government grants, so people might actually guess who gave them 1/5 for "team" and run into them on social occasions. But I think there are benefits to both parties from checkbox-level feedback on which general areas are fine, need detail, or need a total rethink.

Executive summary: The author argues that grantmakers in EA should provide more detailed feedback to rejected applicants in order to improve community health, make better funding decisions through crowdsourcing, and address concerns about EA seeming elitist or exclusionary.

Key points:

  1. Many grant applicants feel frustrated by the limited feedback given on rejected proposals, which can seem disrespectful.
  2. While more feedback has opportunity costs, completely avoiding it harms community cohesion and fuels resentment.
  3. EA risks developing an excessively rationalist culture detached from human motivations if it discounts applicants' need for explanatory feedback.
  4. More democratic input into funding decisions could improve them by correcting errors and balancing expert judgment with collective wisdom.
  5. Grantmakers should commit to providing actionable feedback to nurture talent, maintain applicant enthusiasm, and gather critical perspectives.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
