Recently someone made a post expressing their unease with EA's recent wealth. I feel uncomfortable too. The primary reason is that roughly a dozen people are responsible for granting out hundreds of millions of dollars, and as smart and hardworking as those people are, they will have many blind spots. I believe other grantmaking structures should supplement our current model of centralised grantmaking, as they would reduce those blind spots and get us closer to an optimal allocation of resources.
In this post I will argue:
- That we should expect centralised grantmaking to lead to suboptimal allocation of capital.
- That other grantmaking structures exist that would get us closer to the best possible allocation.
Issues with centralised funding
Just as the USSR's economic planners struggled to determine the correct price of every good, I believe EA grantmaking teams will struggle for analogous reasons: grantmakers have imperfect information. No matter how smart the grantmaker, they can't possibly know everything.
To overcome their lack of omniscience grantmakers must rely on heuristics such as:
- Is there someone in my network who can vouch for this person/team?
- Do they have impressive backgrounds?
- Does their theory of change align with my own?
These heuristics can be perfectly valid for grantmakers to use, and they may produce the best allocation achievable given limited information. But the heuristics are biased, and they result in a suboptimal allocation compared to what could theoretically be achieved with perfect information.
For example, people who have spent significant time in EA hubs are more likely to be vouched for by someone in the grantmaker's network. Having attended an Ivy League university is a strong signal that someone is talented, but there is a lot of talent that did not.
My issue is not that grantmakers use these proxies. My issue is that if all of our grantmaking uses the same proxies, a great many talented people with great projects that should have been funded will be overlooked. I'm not sure about this, but I imagine that some complaints about EA's perceived elitism stem from this. EA grantmakers are largely cut from the same cloth, live in the same places, and have similar networks. Two anti-virus systems that detect the same 90% of viruses are no more useful than one; two uncorrelated systems will instead detect 99% of all viruses. Similarly, we should strive for our grantmakers' biases to be uncorrelated if we want the best allocation of our capital.
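The arithmetic behind the anti-virus analogy is simple: when two detectors have identical blind spots, the second adds nothing, but when their misses are statistically independent, the miss rates multiply. A minimal sketch of that calculation:

```python
# Two screening systems that each catch 90% of cases.
p_miss = 0.10                       # each system misses 10% of cases

same_blind_spots = 1 - p_miss       # identical biases: still 90% caught
independent = 1 - p_miss ** 2       # uncorrelated biases: misses multiply,
                                    # so 1 - 0.01 = 99% caught
```

The same logic applies to funders: adding a second grantmaker with the same proxies barely improves coverage, while one with different biases nearly squares away the shared misses.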
In the long run, overreliance on these proxies can also lead to bad incentives and increased participation in zero-sum games such as pursuing expensive degrees to signal talent.
We shouldn't expect our current centralised grantmaking to be optimal in theory, and I don't think it is in practice either. Fortunately, I think there's plenty we can do to improve it.
What we can do to improve grantmaking
The issue with centralised grantmaking is that it operates off imperfect information. To improve grantmaking we need to take steps to introduce more information into the system. I don't want to propose anything particularly radical. The system we have in place is working well, even if it has its flaws. But I do think we should be looking into ways to supplement our current centralised funding with other forms of grantmaking that have other strengths and weaknesses.
Each new type of grantmaking and each new grantmaker will spot talent that other grantmaking programmes would have overlooked. Combined, they create a more accurate and robust funding ecosystem.
The FTX Future Fund's regranting programme is a great example of the kind of supplementary grantmaking structure I think we should be experimenting with. I feel slightly queasy that their system for choosing new grantmakers may perpetuate the biases of the current ones. But I don't want to let the perfect be the enemy of the good, and their regranting programme is yet another reason I'm so excited about the FTX Future Fund.
Below are a few off-the-cuff ideas that could supplement our current centralised structure:
- Quadratic funding
- Grantmaker rotation system
- Regranting programmes
- Incubator programs to discover projects and talent worth funding
- More grantmakers
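Of these, quadratic funding is the most mechanical, so a sketch may help. Under the standard mechanism (Buterin, Hitzig and Weyl), a project's matched total is the square of the sum of the square roots of its individual contributions, which deliberately rewards breadth of support over depth of any one pocket. A minimal illustration (the function name is mine, not from any particular implementation):

```python
import math

def quadratic_match(contributions):
    """Total funding a project receives under quadratic funding:
    the square of the sum of the square roots of the individual
    contributions. The matching pool tops up the difference
    between this total and the raw sum of contributions."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Equal raw totals, very different outcomes:
broad = quadratic_match([1] * 100)   # 100 donors giving $1 each
narrow = quadratic_match([100])      # 1 donor giving $100
# broad = (100 * sqrt(1))^2 = $10,000; narrow = sqrt(100)^2 = $100
```

The point for grantmaking is that the mechanism aggregates information from many small funders rather than relying on one central judgement, which is exactly the decorrelation argued for above.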
Hundreds of people spent considerable time writing applications to the FTX Future Fund's first round of funding. It seems inefficient to me that there aren't more sources of funding looking over these applications and funding the projects they find most promising.
Given that many people are currently receiving answers about their FTX grants, I think the timing of this post is unfortunate. I worry that our judgement will be clouded by emotions over whether we received a grant and, if we didn't, whether we approved of the reasoning and so forth. My goal is not to criticise our current grantmakers. I think they are doing an excellent job considering their constraints. My goal is instead to point out that it's absurd to expect them to be superhuman and somehow correctly identify every project worth funding!
No grantmaker is superhuman, but we should strive for a grantmaking ecosystem that is.
To be honest, the overall (including non-EA) grantmaking ecosystem is not so centralised that people can't get funding for possibly net-negative ideas elsewhere, especially if they have already put work in, have a handful of connections, or will be working in a "sexy" cause area like AI that even some rando UHNWI would take an interest in.
Given that, I don't think that keeping grantmaking very centralized yields enough of a reduction in risk that it is worth protecting centralized grantmaking on that metric. And frankly, sweeping such risky applications under the rug hoping they disappear because they aren't funded (by you, that one time) seems a terrible strategy. I'm not sure that is what is effectively happening, but if it is:
I propose a two-part protocol within the grantmaking ecosystem to reduce downside risk:
1. Overt feedback from grantmakers in the case that they think a project is potentially net-negative.
2. To take it a step further, EA could employ someone whose role is to actively dissuade a person from an idea, or to help mitigate the risks of their project if the applicants affirm they are going to keep trying.
Imagine, as an applicant, receiving an email saying:
"Hello [Your Name],
Thank you for your grant application. We are sorry to bear the bad news that we will not be funding your project. We commend you on the effort you have already put in, but we have concerns that there may be great risks to following through and we want to strongly encourage you to consider other options.
We have CC'ed [name of unilateralist's curse expert with domain expertise], who is a specialist in cases like these who contracts with various foundations. They would be willing to have a call with you about why your idea may be too risky to move forward with. If this email has not already convinced you, we hope you consider scheduling a call on their [calendly] for more details and ideas, including potential risk mitigation.
We also recommend you apply for 80k coaching [here]. They may be able to point you toward roles that are just as good a fit for you, or better, but without the big downside risk and with community support. You can list us as a recommendation on your coaching application.
We hope that you do not take this too personally as this is not an uncommon reason to withhold funding (hopefully evidenced by the resources in place for such cases), and we hope to see you continuing to put your skills toward altruistic efforts.
Best,
[Name of Grantmaker]"
Should I write a quick EA Forum post on this two-part idea? (Basically I'll copy-paste this comment and add a couple of paragraphs.) Is there a better idea?
I realise the email will look dramatic to some, but it wouldn't have to be sent in every "cursed case". I'm sure many applications are rather random ideas. I imagine a grantmaker could tell from the applicants' resumes and their social positioning how likely the founding team is to keep trying to start or perpetuate the project.
I think giving this type of feedback when warranted also reflects well on EA. It makes EA seem less of an ivory tower/billionaire hobby and more of a conversational and collaborative movement.
*************************************
The above is a departure from the point of the post. FWIW, I do think the EA grantmaking ecosystem is so centralised that people with potentially good ideas stemming from a somewhat different framework than that of typical EA grantmakers will struggle to get funding elsewhere. I agree that decentralising grantmaking to some extent is important, and I have given my reasoning here.