Having a savings target seems important. (Not financial advice.)
I sometimes hear people in/around EA rule out taking jobs due to low salaries (sometimes implicitly, sometimes a little embarrassedly). Of course, it's perfectly understandable not to want to take a significant drop in your consumption. But in theory, people with high salaries could be saving up so they can take high-impact, low-paying jobs in the future; it just seems like, by default, this doesn't happen. I think it's worth thinking about how to set yourself up to be able to do it if you do find yourself in such a situation; you might find it harder than you expect.
(Personal digression: I also notice my own brain paying a lot more attention to my personal finances than I think is justified. Maybe some of this traces back to some kind of trauma response to being unemployed for a very stressful ~6 months after graduating: it always felt like I could be a little more financially secure. A couple weeks ago, while meditating, it occurred to me that my brain is probably reacting to not knowing how I'm doing relative to my goal, because 1) I didn't actually know what my goal was, and 2) I didn't really have a sense of what I was spending each month. In IFS terms, I think the "social and physical security" part of my brain wasn't trusting that the rest of my brain was competently handling the situation.)
So, I think people in general would benefit from having an explicit target: once I have X in savings, I can feel financially secure. This probably means explicitly tracking your expenses, both now and in a "making some reasonable, not-that-painful cuts" budget, and gaming out the most likely scenarios where you'd need to use a large amount of your savings, beyond the classic 3 or 6 months of expenses in an emergency fund. For people motivated by EA principles, the most likely scenarios might be for impact reasons: maybe you take a public-sector job that pays half your current salary for three years, or maybe you'
As a community builder, I've started donating directly to my local EA group—and I encourage you to consider doing the same.
Managing budgets and navigating inflexible grant applications consume valuable time and energy that could otherwise be spent directly fostering impactful community engagement. As someone deeply involved, I possess unique insights into what our group specifically needs, how to effectively meet those needs, and what actions are most conducive to achieving genuine impact.
Of course, seeking funding from organizations like OpenPhil remains highly valuable—they've dedicated extensive thought to effective community building. Yet, don't underestimate the power and efficiency of utilizing your intimate knowledge of your group's immediate requirements.
Your direct donations can streamline processes, empower quick responses to pressing needs, and ultimately enhance the impact of your local EA community.
As an earn-to-giver, I found contributing to funding diversification challenging
Jeff Kaufmann posted a different version of the same argument earlier than me.
Some have argued that earning to give can contribute to funding diversification. Having a few dozen mid-sized donors, rather than one or two very large donors, would make the financial position of an organization more secure. It allows them to plan for the future and not worry about fundraising all the time.
As an earn-to-giver, I can be one of those mid-sized donors. I have tried. However, it is challenging.
First of all, I don't have the expertise, and I don't have much time to build it. I spend most of my time on my day job, which has nothing to do with any cause I care about. Any research must be done in my free time. This is fine, but it has some cost: it's time I could have spent on career development, talking to others about effective giving, or living more frugally.
Motivation is not the issue, at least for me. I've found the research extremely rewarding and intellectually stimulating to do. Yet, fun doesn't necessarily translate to effectiveness.
I've seen fellow earn-to-givers just defer to GiveWell or other charity evaluators without putting much thought into it. This is great, but isn't there more? Others said they talked to an individual organization, thought "sounds reasonable", and transferred the money. I fell for that trap too!
There is a lot at stake. It's about hard-earned money that has the potential to help large numbers of people and animals in dire need. Unfortunately, I don't trust my own non-expert judgment to do this.
So I find myself donating to funds, and then the funding is centralized again. If others do the same, charities will have to rely on one grantmaker again, rather than a diverse pool of donors.
Ideas
What would help to address this issue? Here are a few ideas; some of them are already happening.
* funding circles. Note that most funding circles I know r
David Rubinstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he’s been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies) and it sounded like he literally hadn’t put much thought into what to do with his fortune.
Are there concerted efforts in the EA community to get these people on board? Like, is there a google doc with a six degrees of separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn't be hard to get in touch. It seems like increasing the probability that he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this. Am I missing some obvious reason this isn't worth pursuing or likely to fail? Have people tried? I'm a bit of an outsider here so I'd love to hear people's thoughts on what I'm sure seems like a pretty naive take!
https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
Marcus Daniell appreciation note
@Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!
Effective giving quick take for giving season
This is quite half-baked because I think my social circle contains not very many E2G folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earn to give model, where it would be years before the plans came to fruition, and in the middle, they lost status and attention from their community. There might have been an additional dynamic where people who took the advice the most seriously ended up deeply embedded in other professional communities, so heard about the switch later or found it harder to reconnect with the community and the new priorities.
I really don’t have an overall view on how bad all of this was, or if anyone should have done anything differently, but I do have a sense that EA has a bit of a feature of jerking people around like this, where priorities and advice change faster than the advice can be fully acted on. The world and the right priorities really do change, though; I’m not sure what should be done except to be clearer about all this, but I suspect it’s hard to properly convey “this seems like the absolute best thing in the world to do, also next year my view could be that it’s basically useless” even if you use those exact words. And maybe people have done this, or maybe it’s worth trying harder. Another approach would be something like insurance.
A frame I’ve been more interested in lately (definitely not original to me) is that earning to give is a kind of resilience / robustness-add for EA, where more donors just means better ability to withstand crazy events, even if in most worlds the small donors aren’t adding much in the way of impact. Not clear that that nets out, but “good in case of tail risk” seems like an important aspect.
A more
I’d love to dig a bit more into some real data and implications for this (hence, just a quick take for now), but I suspect that (EA) donors may not take the current funding allocation within and across cause areas into account when making donation decisions - and that taking it sufficiently into account may mean that small donors shouldn’t diversify?
For example, the recent Animal Welfare vs. Global Health Debate Week posed the statement "It would be better to spend an extra $100m on animal welfare than on global health." Now, one way to think through this question is to ask "What would the ideal funding split between Animal Welfare and Global Health look like?" and test whether an additional $100m on Animal Welfare would bring us closer to that ideal split (in this case, it appears that spending the $100m on Animal Welfare increases the share of AW from 0.41% to 0.55% - meaning that if your ideal funding split would allocate more than 0.55% to AW, you should be in favor of directing the $100m there).
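To make that arithmetic concrete, here's a minimal sketch of the calculation. The funding totals below are assumptions I've back-solved from the 0.41% and 0.55% figures above, not authoritative numbers:

```python
# Minimal sketch of the "ideal funding split" test described above.
# The totals are assumptions chosen to roughly reproduce the quoted
# 0.41% -> 0.55% shift, not official figures.

def share_after_grant(area_funding: float, total_funding: float, grant: float) -> float:
    """Share of total funding the area holds after the extra grant goes to it."""
    return (area_funding + grant) / (total_funding + grant)

animal_welfare = 0.29e9  # assumed current AW funding (~$290m)
total = 71e9             # assumed total funding across the compared areas (~$71b)
grant = 100e6            # the hypothetical extra $100m

print(f"before: {animal_welfare / total:.2%}")                            # ~0.41%
print(f"after:  {share_after_grant(animal_welfare, total, grant):.2%}")   # ~0.55%
# If your ideal split gives animal welfare more than the "after" share,
# the extra $100m moves the overall allocation toward your ideal.
```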
I am not sure if this perspective is the right or even the best one to take, but I think it may often be missing. I think it's important to think through it, because it takes into account "how much money should be spent on X vs. Y" as opposed to "how much money I should spend on X vs. Y" (or maybe even "how much money should EA spend on X vs. Y"?) - which I think is closer to what we should care about. I think this is interesting, because:
* If you primarily, but not strictly and solely, favor a comparatively well-funded area (say, GHD or Climate Change), you may want to donate all your money towards a cause area that you don't even value particularly highly.
* Ironically, this type of thinking only applies if you value diversification in your donations in the first place. So, if you are wondering what percentage of your money should go to X vs. Y, I suspect that looking at the current global funding allocation will likely (for most people, necessarily?) lead to pouring all your money into just one of them.
I know that folks in EA often favor donating to more effective things rather than less effective things. With that in mind, I have mixed feelings knowing that many Harvard faculty are donating 10%, and that they are donating to the best funded and most prestigious university in the world.
On the one hand, it is really nice to know that they are willing to put their money where their mouth is when their institution is under attack. I get some warm fuzzy feelings from the idea of defending an education institution against political attacks. On the other hand, Harvard University's endowment is already very large, and Harvard earns a lot of money each year. It is like a very tailored version of a giving pledge: giving to Harvard, giving for one year. Will such a relatively small amount given toward such a relatively large institution do much good? I do wonder what the impact would be if these fairly well-known and well-respected academics announced they were donating 10% to clean water, or to deworming, or to reducing animal suffering. I wonder how much their donations will do for Harvard.
I'll include a few graphs to illustrate Harvard's financial strength.
(I realised after I wrote this that the metaphor between brains and epistemic communities is less fruitful than I make it seem, but it's still a helpful frame for understanding the differences, so I'm posting it here. ^^)
TL;DR: I think people should consider searching for giving opportunities in their networks, because a community that efficiently capitalises on insider information may end up doing more efficient and more varied research. There are, as you would expect, both problems and advantages to this, but it definitely seems good to encourage on the margin.
Some reasons to prefer decentralised funding and insider trading
I think people are too worried about making their donations appear justifiable to others. And what people expect will appear justifiable to others is based on the most visibly widespread evidence they can think of.[1] It just so happens that this is also the basket of information everyone else bases their opinions on. The net effect is that a lot less information gets considered in total.
Even so, there are very good reasons to defer to consensus among people who know more, not act unilaterally, and be epistemically humble. I'm not arguing that we shouldn't take these considerations into account. What I'm trying to say is that even after you've given them adequate consideration, there are separate social reasons that could make it tempting to defer, and we should keep this distinction in mind so we don't handicap ourselves just to fit in.
Consider the community from a bird's eye perspective for a moment. Imagine zooming out, and seeing EA as a single organism. Information goes in, and causal consequences go out. Now, what happens when you make most of the little humanoid neurons mimic their neighbours in proportion to how many neighbours they have doing the same thing?
What you end up with is a Matthew effect not only for ideas, but also for the bits of information that get promoted to public consciousness. Imagine ripples of information flowing in only to be suppressed at the periphery, way before they've had a chance to be adequately processed. Bits of information accumulate trust in proportion to how much trust they already have, and there are no well-coordinated checks that can reliably abort a cascade once it's past a certain point.
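If it helps to see the dynamic rather than just picture it, here's a toy simulation. The population size, number of signals, and number of rounds are all invented; it's an illustration of the copying dynamic, not a model of EA:

```python
import random

# Toy model of the mimicry dynamic above: each round one agent adopts the
# belief of a randomly chosen peer, so a belief spreads in proportion to
# how many agents already hold it. All parameters are invented.
random.seed(0)

n_agents, n_signals, n_rounds = 100, 20, 10_000
beliefs = [i % n_signals for i in range(n_agents)]  # 20 distinct signals, evenly spread

for _ in range(n_rounds):
    imitator, model = random.sample(range(n_agents), 2)
    beliefs[imitator] = beliefs[model]  # mimic a peer

print(f"signals still represented: {len(set(beliefs))} of {n_signals}")
# Typically only a handful survive: early popularity compounds (a Matthew
# effect), and peripheral signals die out before anyone examines them.
```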
To be clear, this isn't how the brain works. The brain is designed very meticulously to ensure that only the most surprising information gets promoted to universal recognition ("consciousness"). The signals that can already be predicted by established paradigms are suppressed, and novel information gets passed along with priority.[2] While it doesn't work perfectly for all things, consider just the fact that our entire perceptual field gets replaced instantly every time we turn our heads.
And because neurons have been harshly optimised for their collective performance, they show a remarkable level of competitive coordination aimed at making sure there are no informational short-circuits or redundancies.
Returning to the societal perspective again, what would it look like if the EA community were arranged in a similar fashion?
I think it would be a community optimised for the early detection and transmission of market-moving information--which in a finance context refers to information that would cause any reasonable investor to immediately make a decision upon hearing it. In the case where, for example, someone invests in a company because they're friends with the CEO and received private information, it's called "insider trading" and is illegal in some countries.
But it's not illegal for altruistic giving! Making funding decisions based on highly valuable information that only you have access to is precisely the thing we'd want to see happening.
If, say, you have a friend who's trying to get time off from work in order to start a project, but no one's willing to fund them because they're a weird-but-brilliant dropout with no credentials, you may have insider information about their trustworthiness. That kind of information doesn't transmit very readily, so if we insist on centralised funding mechanisms, we're unknowingly losing out on all those insider trading opportunities.
Where the architecture of the brain efficiently promotes the most novel information to consciousness for processing, EA has the problem where unusual information doesn't even pass the first layer.
(I should probably mention that there are obviously biases that come into play when evaluating people you're close to, and those could easily interfere with good judgment. It's a crucial consideration. I'm mainly presenting the case for decentralisation here, since centralisation is the default, so I urge you to keep some skepticism in mind.)
There is no way around having to make trade-offs here. One reason to prefer having a central team of highly experienced grant-makers do most of the funding is that they're likely to be better at evaluating impact opportunities. But this needn't matter much if they're bottlenecked by bandwidth--both in terms of having less information reach them and in terms of having less time available to analyse what does come through.[3]
On the other hand, if you believe that most of the relevant market-moving information in EA is already being captured by relevant funding bodies, then their ability to separate the wheat from the chaff may be the dominating consideration.
While I think the above considerations make a strong case for encouraging people to look for giving opportunities in their own networks, I think they apply with greater force to adopting a model like impact markets.
They're a sort of compromise between central and decentralised funding. The idea is that everyone has an incentive to fund individuals or projects where they believe they have insider information indicating that the project will show itself to be impactful later on. If the projects they opportunistically funded at an early stage do end up producing a lot of impact, a central funding body rewards the maverick funder by "purchasing the impact" second-hand.
Once a system like that is up and running, people can reliably expect the retroactive funders to make it worth their while to search for promising projects. And when people are incentivised to locate and fund projects at their earliest bottlenecks, the community could end up capitalising on a lot more (insider) information than would be possible if everything had to be evaluated centrally.
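As a rough sketch of the incentive (all numbers invented, and this is the general idea rather than any particular impact-market design):

```python
# Toy expected-value calculation for a seed funder under retroactive funding.
# Every number here is made up for illustration.

seed_cost = 10_000       # what the early funder pays to unblock the project
p_impact = 0.25          # their insider-informed estimate that it proves impactful
retro_price = 60_000     # what a retroactive funder would later pay for the impact

expected_payout = p_impact * retro_price
print(f"expected payout: {expected_payout:,.0f} vs. seed cost: {seed_cost:,}")
# 15,000 > 10,000: someone with genuinely better-than-consensus information
# is rewarded for acting on it early. With only the outside view
# (say p_impact = 0.10) the same bet wouldn't be worth taking.
```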
(There are, of course, more complexities to this, and you can check out the previous discussions on the forum.)
[1] This doesn't necessarily mean that people defer to the most popular beliefs, but rather that even if they do their own thinking, they're still reluctant to use information that other people don't have access to, so it amounts to nearly the same thing.
[2] This is sometimes called predictive processing. Sensory information comes in and gets passed along through increasingly conceptual layers. Higher-level layers are successively trying to anticipate the information coming in from below, and if they succeed, they just aren't interested in passing it along.
(Imagine if it were the other way around, and neurons were increasingly shy to pass along information in proportion to how confused or surprised they were. What a brain that would be!)
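Here's a compressed sketch of that "only pass along the surprise" idea (a caricature of predictive processing, not a claim about how real neurons implement it; the function and threshold are invented):

```python
# Each layer forwards only the part of its input it failed to predict.

def forward_surprise(observation: float, prediction: float, threshold: float = 0.1):
    """Return the prediction error, or None if the input was anticipated."""
    error = observation - prediction
    return error if abs(error) > threshold else None

print(forward_surprise(observation=5.0, prediction=4.95))  # anticipated -> None
print(forward_surprise(observation=5.0, prediction=2.00))  # surprising -> 3.0 passed up
```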
[3] As an extreme example of how bad this can get, an Australian study of medical research funding noted that grant proposals are "between 80 and 120 pages long and panel members are expected to read and rank between 50 and 100 proposals. It is optimistic to expect accurate judgements in this sea of excessive information." (Herbert et al., 2013)
Luckily it's nowhere near as bad for EA research, but consider the Australian case as a clear example of how a funding process can be undeniably and extremely misaligned with the goal of producing good research.