
Update: EA Funds has launched!


 This post introduces a new project that CEA is working on, which we’re calling the Effective Altruism Funds.

Some details about this idea are below. We’d really appreciate feedback on whether this is the kind of thing the community would like to see CEA working on. We’ve also been getting input from our mentors at Y Combinator, who are excited about this idea.

The Idea

EAs care a lot about donating effectively, but donating effectively is hard, even for engaged EAs. The easiest options are GiveWell-recommended charities, but many people believe that other charities offer an even better opportunity to have an impact. The alternative, for them, is to figure out: 1) which cause is most important; 2) which interventions within that cause are most effective; and 3) which charities executing those interventions are most effective yet still have a funding gap.

Recently, we’ve seen demand for options that allow individuals to donate effectively while reducing their total workload, whether by deferring their decision to a trusted expert (Nick Beckstead’s EA Giving Group) or randomising who allocates a group’s total donations (Carl Shulman and Paul Christiano’s donation lottery). We want to meet this demand and help EAs give more effectively at lower time cost. We hope this will allow the community to take advantage of the gains of labor specialization, rewarding a few EAs for conducting in-depth donation research while allowing others to specialize in other important domains.

The Structure

Via the EA Funds, donors will be able to allocate their donations to one or more funds, each with a particular focus area. Donations will be disbursed based on the recommendations of fund managers. If people don’t know what cause or causes they want to focus on, we’ll have a tool that asks them a few questions about key judgement calls and then makes a recommendation, as well as more in-depth materials for those who want to dive deeper. Once people have made their cause choices, fund managers will use their up-to-date knowledge of charities’ work to select charities.

We want to keep this idea as simple as possible to begin with, so we’ll have just four funds, with the following managers:

  • Global Health and Development – Elie Hassenfeld
  • Animal Welfare – Lewis Bollard
  • Long-run future – Nick Beckstead
  • Movement-building – Nick Beckstead

(Note that the movement-building (meta-charity) fund will be able to fund CEA, and that Nick Beckstead is a Trustee of CEA. The long-run future and movement-building funds continue the work that Nick has been doing running the EA Giving Group.)

It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy. First, these are the organisations whose charity evaluation we respect the most. The worst-case scenario, where your donation just adds to the Open Philanthropy funding within a particular area, is therefore still a great outcome. Second, they have the best information available about what grants Open Philanthropy are planning to make, so they have a good understanding of where the remaining funding gaps are, and can use the money in an EA Fund to fill gaps they consider important but that aren’t currently addressed by Open Philanthropy.

The Vision

One vision I have for the effective altruism community is that its members can function like a people’s foundation: any individual donor on their own might not have that much power, but if the community acts together they can have the sort of influence that major foundations like the Gates Foundation have. The EA Funds help move us toward that vision.

In the first instance, we’re just going to have four funds, to see how much demand there is. But we can imagine various ways in which this idea could grow.

If the initial experiment goes well, then in the longer run, we'd probably host a wider variety of funds. For example, we’re in discussion with Carl and Paul about running the Donor Lottery fund, which we think was a great innovation from the community. Ultimately, it could even be that anyone in the EA community can run a fund, and there's competition between fund managers where whoever makes the best grants gets more funding. This would overcome a downside of using GiveWell and Open Philanthropy staff members as fund managers, which is that we potentially lose out on benefits from a larger variety of perspectives.

Having a much wider variety of possible charities could also allow us to make donating hassle-free for effective altruism community members. Rather than each member of the community making individual contributions to multiple charities and figuring out for themselves how to do so as tax-efficiently as possible, they could set up a direct debit to contribute through this platform, specify how much they want to contribute to which charities, and we would take care of the rest. With respect to tax efficiency, we’ve already found that even professional accountants often misadvise donors about the size of the tax relief they can get. At least at the outset, only US and UK donors will be eligible for tax benefits when donating through the funds.

Finally, we could potentially use this platform to administer moral trades between donors. At the moment, people just give to wherever they think is best. But this loses out on the potential for a community to have more impact, by everyone’s lights, than they could have otherwise.

For example, imagine that Alice and Bob both want to give $100 to charity, and value donations to the following charities as shown below, relative to one another (e.g. Alice believes that a $100 donation to AMF produces 1 QALY):

 

        AMF    GiveDirectly    SCI
Alice   1      0.8             0.5
Bob     0.5    0.8             1

This means that if Alice and Bob were each to give to the charity they think is the most effective (AMF and SCI, respectively), they would each evaluate the total value as:

1 QALY (from their own donation) + 0.5 QALYs (from the other person’s donation) = 1.5 QALYs


But if they both agreed to give to GiveDirectly instead, they would each evaluate the total value as:

0.8 QALYs (from their own donation) + 0.8 QALYs (from the other person’s donation) = 1.6 QALYs
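As a rough illustration, here is a minimal Python sketch of this arithmetic (the charity names and valuation numbers are simply the illustrative figures from the table above, not real data):

```python
# Minimal sketch of the moral-trade arithmetic above (illustrative values only).
# Each donor rates a $100 donation to each charity relative to their own top pick.

valuations = {
    "Alice": {"AMF": 1.0, "GiveDirectly": 0.8, "SCI": 0.5},
    "Bob":   {"AMF": 0.5, "GiveDirectly": 0.8, "SCI": 1.0},
}

def evaluated_total(evaluator, allocations):
    """Total value of everyone's $100 donations, by this evaluator's lights (in QALYs)."""
    return sum(valuations[evaluator][charity] for charity in allocations.values())

solo = {"Alice": "AMF", "Bob": "SCI"}                     # each gives to their own favourite
trade = {"Alice": "GiveDirectly", "Bob": "GiveDirectly"}  # both agree to give to GiveDirectly

for person in valuations:
    print(person, evaluated_total(person, solo), evaluated_total(person, trade))
# Alice 1.5 1.6
# Bob 1.5 1.6
```

By each donor's own valuation, the coordinated allocation beats the uncoordinated one, which is the point of the moral trade.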

The same idea could apply to the timing of donations, too, if one party prefers to donate earlier and another prefers to invest and donate later.

We’re still exploring the EA Funds idea, so we welcome suggestions and feedback in the comments below.

 

Comments (62)

Even though I have supported many EA organisations over the years (meta, x-risks, and global poverty, some at quite early stages) and devote a great deal of time to trying to do it well, I feel the EA Funds could still be really useful.

There is a limit to how much high quality due diligence one could do. It takes time to build relationships, analyse opportunities and monitor them. This is also the reason I have not supported some of the EA Ventures projects: not necessarily because of the projects, but because I did not have the bandwidth.

I am really impressed with some of the really high-leverage, high-impact work that Nick Beckstead supported through his donor group; I remember his catalysing the formation of CSER and early support for Founders Pledge. The possibility of participating in Elie's work beyond the top charities also sounds exciting. I have not had the time to analyse animal charities, and this will help too.

I think donating to the EA Funds alongside my existing donations will provide diversification and allow me to support projects that I do not have direct access to, or do not have the time and/or resources to support on a standalone basis.

The EA Funds could also have benchmarking and signalling value (especially on less well-known projects) if they publish their donation decisions.

Thanks so much for this, Luke! If someone who spends half their working time on philanthropy, as you do, says "There is a limit to how much high quality due diligence one could do. It takes time to build relationships, analyse opportunities and monitor them" - that's pretty useful information!

I strongly agree with this: "One vision I have for the effective altruism community is that its members can function like a people’s foundation: any individual donor on their own might not have that much power, but if the community acts together they can have the sort of influence that major foundations like the Gates Foundation have." Creating a structure which enables leaders of the EA community to heavily influence the manner in which non-EA charities and other organisations operate could be immensely beneficial. I believe some form of centralisation would help achieve this. I currently donate to AMF directly, but I think it would empower CEA more if all that money were funnelled through CEA itself. For example, all EA members could be encouraged to pay into the EA trust (specifying their chosen charity or fund) and the trust would then send a monthly payment to each chosen charity. This way the community appears to act as one while still retaining the freedom for individuals to choose charities should they desire. EA Funds would in turn be a useful part of this structure.

[anonymous]:

Creating a structure which enables leaders of the EA community to heavily influence the manner in which non EA-charities and other organisations operate could be immensely beneficial.

I would be mildly concerned about centralizing funding power in the hands of a small number of individuals, but strongly in favor of centralizing funding under the EA brand.

If EA Funds provides a stable, identifiable source of funding for nonprofits that focus on effectiveness, I think we'll see more excellent nonprofits and more talented people working on improving the world.

Seems like a great idea!

Re Nick, I trust his analysis of charities, including meta-charities, a lot. But the conflict does seem worth thinking a bit about: he is responsible for all 2-3 of the top EA-org grant-makers. From the point of view of redundancy, diverse criticism, and incentives, this is not so good.

If I were CEA, I'm not sure I would have much incentive to identify new good strategies, since a lot of my expected funding in the next decade comes from Nick, and most of the other funders are less thoughtful; he is really the one I need to work to convince. And if I am Nick, I'm only one person, so there are limits to how much strategic thinking I can transmit, and to the degree to which I will force it to engage with other strategic thinkers. It's also hard to see how, if some of its projects failed, I would allow CEA to go unfunded.

How can we build these incentives and selection pressures, and, on the object level, get better ideas into EA orgs? Diversifying funding would help, but mostly it seems like it would require CEA to care about this problem a lot and take a lot of effort.

[anonymous]:

My guess is that the optimal solution has people like Nick controlling quite a bit of money since he has a strong track record and strong connections in the space. Yet, the optimal solution probably has an upper limit on how much money he controls for purposes of viewpoint diversification and to prevent power from consolidating in too few hands. I'm not sure whether we've reached the upper limit yet, but I think we will if EA Funds moves a substantial amount of money.

How can we build these incentives and selection pressures, and, on the object level, get better ideas into EA orgs? Diversifying funding would help, but mostly it seems like it would require CEA to care about this problem a lot and take a lot of effort.

I agree that this is worth being concerned about and I would also be interested in ways to avert this problem.

My hope is that as we diversify the selection of fund managers, EA Funds creates an intellectual marketplace of fund managers writing about why their funding strategies are best and convincing people to donate to them. Then our defense against entrenching the power of established groups (e.g. CEA) is that people can vote with their wallets if they think established groups are getting more money than makes sense.

Tell me about Nick's track record? I like Nick and I approve of his granting so far but "strong track record" isn't at all how I'd describe the case for giving him unrestricted funds to grant; it seems entirely speculative based on shared values and judgment. If Nick has a verified track record of grants turning out well, I'd love to see it, and it should probably be in the promotional material for EA Funds.

Cool. Yeah, I wouldn't want to be pigeonholed into being someone concerned about concentration of power, though.

We can have powerful organizations; I just think that they should be under incentives such that they will only stay big (i.e. get good staff and ongoing funding) if they perform. Otherwise, we become a bad kind of bureaucracy.

I am on the whole positive about this idea. Obviously, specialization is good, and creating dedicated fund managers to make donation decisions can be very beneficial. And it makes sense that the boundaries between these funds arise from normative differences between donors, while putting fund managers in charge of sorting out empirical questions about efficiency. This is just the natural extension of the original GiveWell concept to account for normative differences, and to utilize some of the extra trust that some EAs have for other people in the community that isn't shared by a lot of GiveWell's audience.

That said, I'm worried about principal-agent problems and transparency, and about CEA becoming an organization receiving monthly direct debits from the bank accounts of ten thousand people. Even if we assume that current CEA employees are incorruptible superhuman angels, giving CEA direct control of a firehose of cash makes it an attractive target for usurpers (in a way that it is not when it's merely making recommendations and doing outreach). These sorts of worries apply much less to GiveWell when it's donating to developing-world health charities than to CEA when it's donating to EA start-ups who are good friends with the staff.

Will EA Fund managers be committed to producing the sorts of detailed explanations and justifications we see from GiveWell and Open Phil, at least after adjusting for donation size? How will the conflicts of interest be managed and documented with such a tightly interlinked community?

What sorts of additional precautions will be taken to manage these risks, especially for the long term?

[anonymous]:

These sorts of worries apply much less to GiveWell when it's donating to developing-world health charities than to CEA when it's donating to EA start-ups who are good friends with the staff.

Part of the reason that CEA staff themselves are not fund managers is to help with this kind of conflict. I think that regardless of who we choose as fund managers, there is potential for recipients to develop personal connections with the fund managers and use that to their advantage. This seems true in almost any funding scheme where evaluating the people in charge is part of the selection process. Do you think EA Funds will make this worse somehow?

Will EA Fund managers be committed to producing the sorts of detailed explanations and justifications we see from GiveWell and Open Phil, at least after adjusting for donation size?

We will definitely require some level of reporting from fund managers although we haven't yet determined how much and in what level of detail. As I mentioned in a different comment, I'd be interested in learning more about what people would like to see.

How will the conflicts of interest be managed and documented with such a tightly interlinked community?

Having Nick as a fund manager is a good test case since there's a conflict given that he's a CEA trustee. Our plan so far has been to make sure that we make the presence of this conflict well known. Do you think this is a good long term plan or would you prefer something else?

Small donors have played a valuable role by providing seed funding to new projects in the past. They can often fund promising projects that larger donors like OpenPhil can't, because they have special knowledge of them through their personal networks and the small projects aren't established enough to get through a large donor's selection process. These donors therefore act like angel investors. My concerns with the EA Funds are that:

  • By pooling donations into a large fund, you increase the minimum grant that it's worth their time to make, thus making it unable to fund small opportunities
  • By centralising decision-making in a handful of experts, you reduce the variety of projects that get funded because they have more limited networks, knowledge, and value variety than the population of small donors.

Also, what happened to EA Ventures? Wasn't that an attempt to pool funds to make investments in new projects?

[anonymous]:

Hi Richard,

Thanks a lot for the feedback. I work at CEA on the EA Funds project. My thoughts are below although they may not represent the views of everyone at CEA.

Funding new projects

I think EA Funds will improve funding for new projects.

As far as I know small donors (in the ~$10K or below range) have traditionally not played a large role in funding new projects. This is because the time it takes to evaluate a new project is substantial and because finding good new projects requires developing good referral networks. It generally doesn't make sense for a small donor to undertake this work.

Some of the best donors I know of at finding and supporting new projects are private individuals with budgets in the hundreds of thousands or low millions range. For these donors, it makes more sense to do the work required to find new projects and it makes sense for the projects to find these donors since they can cover a large percentage of the funding need. I think the funds will roughly mimic this structure. Also, I think Nick Beckstead has one of the better track records at helping to get early-stage projects funded and he's a fund manager.

Donor centralization

I agree with this concern. I think we should aim to not have OpenPhil program officers be the only fund managers in the future and we should aim for a wider variety of funds. What we have now represents the MVP, not the long-term goal.

EA Ventures

I was in charge of EA Ventures and it is no longer in operation. The model was that we sourced new projects and then presented them to our donors for potential funding.

We shut down EA Ventures because 1) the number of exciting new projects was smaller than we expected; 2) funder interest in new projects was smaller than expected; and 3) opportunity cost increased significantly as other projects at CEA started to show stronger results.

My experience at EA Ventures updated me away from the view that there are lots of promising new projects in need of funding. I now think the pipeline of new projects is smaller than would be ideal, although I'm not sure what to do to solve this problem.

Just to give a perspective from the 'other' (donor) side:

I was excited about EA Ventures, partly because of the experimental value (both as an experiment in itself, and its effect on encouraging other people to experiment). I also agreed with the decision to cease operation when it did, and I think Kerry's read of the situation basically concurs with my own experience.

Also, as Kerry said, I think a large part of what happened here was that "the best projects are often able to raise money on their own without an intermediary to help them". At the time EA Ventures was running, EA was (and may still be) a close-enough community that I was finding out about several of the opportunities EAV was presenting via my own network, without EAV's help. That's not at all to say EAV was providing zero value in those cases since they were also filtering/evaluating, but it meant that the most promising charity (in my opinion) that I heard about via EAV was something I was already funding and keen to continue to fund up to RFMF/percentage-of-budget constraints.

Thanks, that clarifies.

I think I was confused by 'small donor' - I was including in that category friends who donate £50k-£100k and who fund small organisations in their network after a lot of careful analysis. If the fund is targeted more at <$10k donors that makes sense.

OpenPhil officers make sense for the MVP.

On EA Ventures, points 1 and 2 seem particularly surprising when put together. You found too few exciting projects but even they had trouble generating funder interest? So are you saying that even for high-quality new projects, funder interest was low, suggesting risk-aversion? If so, that seems to be an important problem to solve if we want a pipeline of new potentially high-impact projects.

On creating promising new projects, Michael Peyton Jones and I have been thinking a lot about this recently. This thinking is for the Good Technology Project: how can we create an institution that helps technology talent to search for and exploit new high-social-impact startup opportunities? But a lot of our thinking will generalise to working out how to help EA get better at exploration and experimentation.

"On EA Ventures, points 1 and 2 seem particularly surprising when put together. You found too few exciting projects but even they had trouble generating funder interest?"

This isn't surprising if the model is just that new projects were uniformly less exciting than one might have expected: there were few projects above the bar for 'really cool project', and even they were only just above the bar, hence hard to get funding for.

[anonymous]:

This is my read on what happened.

Part of the problem is that the best projects are often able to raise money on their own without an intermediary to help them. So, even if there are exciting projects in EA, they might not need our help.

As far as I know small donors (in the ~$10K or below range) have traditionally not played a large role in funding new projects.

While this is not enough to contradict your point and while I agree the general process of small donor fundraising is inefficient, I think it is worth noting a single counterexample I know of -- the Local Effective Altruism Network raised $65K for its 2016 chapter building initiatives entirely from donors donating under $15K each with many donors donating under $1K.

Very interesting idea, and potentially really useful for the community (and me personally!).

What's the timeline for this?

I'm presuming that the Funds would be transparent about how much money is in them, how much has been given and why - is that the case? Also as a starter, has Nick written about how much is/was in his Fund and how it's been spent?

[anonymous]:

We hope to start rolling out a beta version of this in the next few weeks.

I'm presuming that the Funds would be transparent about how much money is in them, how much has been given and why - is that the case? Also as a starter, has Nick written about how much is/was in his Fund and how it's been spent?

Nick has provided a list of where donations he's advised have gone in the past. I think it was included in his write-up for GiveWell's staff donation decisions post. That list will also be provided on the fund's website.

We're not yet certain how much communication the funds should provide to donors. On the one hand, we obviously want to let people know where their money is going. On the other hand, we don't want participation in the funds to place an undue burden on fund managers.

Please let me know if you have thoughts on what kind of communication you'd like to see given the tradeoffs involved.

The list of donation recipients from Nick's DAF is here: https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit#gid=0

I don't believe there have been any write-ups or dollar amounts, except that the above list is ordered by donation size.

Awesome!

Is there a difference between donating to the Global Health and Development fund and donating to GiveWell top charities (as Elie has done with his personal donation for each of the last four years)?

[anonymous]:

In practice, I expect a high probability that the Global Health and Development fund goes to GiveWell-recommended charities. However, the fund leaves open the possibility that Elie will donate it elsewhere if he thinks better opportunities in that space exist.

The goal is that the fund is at least as good as donating to GiveWell's recommendations with some possibility of being better.

[anonymous]:

In the first instance, we’re just going to have four funds...If the initial experiment goes well...running the Donor Lottery fund...we could potentially use this platform to administer moral trades between donors

FWIW, I am lukewarm on the funds idea, but excited about the Donor Lottery and most excited about the moral trade platform. I hope that if the funds idea fails, the Donor Lottery and moral trade platform are not scrapped as a result. I've never donated to CEA, but I would donate to support these two projects.

Thanks! That's really helpful to know. The Funds are potentially solving a number of problems at once, and we know there's some demand for each of these problems to be solved, but not how much demand, so comments like this are very useful.

It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.

Second, they have the best information available about what grants Open Philanthropy are planning to make, so they have a good understanding of where the remaining funding gaps are, and can use the money in an EA Fund to fill gaps they consider important but that aren't currently addressed by Open Philanthropy.

It makes some sense that there could be gaps which Open Phil isn't able to fill, even if Open Phil thinks they're no less effective than the opportunities they're funding instead. Was that what was meant here, or am I missing something? If not, I wonder what such a funding gap for a cost-effective opportunity might look like (an example would help)?

There's a part of me that keeps insisting that it's counter-intuitive that Open Phil is having trouble making as many grants as it would like, while also employing people who will manage an EA fund. I'd naively think that there would be at least some sort of tradeoff between producing new suggestions for things the EA fund might fund, and new things that Open Phil might fund. I suspect you're already thinking closely about this, and I would be happy to hear everyone's thoughts.

Edit: I'd meant to express general confidence in those who had been selected as fund managers. Also, I have strong positive feelings about epistemic humility in general, which also seems highly relevant to this project.

IIRC, Open Phil often wants to not be a charity's only funder, which means they leave the charity with a funding gap that could maybe be filled by the EA Fund.

Seems a little odd to solve that problem by setting up an "independent" funding source also controlled by Open Phil staff, though of course as mentioned elsewhere that may change later.

[anonymous]:

Thanks for the feedback!

Two thoughts: 1) I don't think the long-term goal is that OpenPhil program officers are the only fund managers. Working with them was the best way to get an MVP version in place. In the long run, we want to use the funds to offer worldview diversification and to expand the funding horizons of the EA community.

2)

There's a part of me that keeps insisting that it's counter-intuitive that Open Phil is having trouble making as many grants as it would like, while also employing people who will manage an EA fund.

I think I agree with you. However, since the OpenPhil program officers know what OpenPhil is funding it means that the funds should provide options that are at least as good as OpenPhil's funding. (See Carl Shulman's post on the subject.) The hope is that the "at least as good as OpenPhil" bar is higher than most donors can reach now, so the fund is among the most effective options for individual donors.

Let me know if that didn't answer the question.

However, since the OpenPhil program officers know what OpenPhil is funding it means that the funds should provide options that are at least as good as OpenPhil's funding. (See Carl Shulman's post on the subject.) The hope is that the "at least as good as OpenPhil" bar is higher than most donors can reach now, so the fund is among the most effective options for individual donors.

The article you link (quote below) suggests the opposite should be true - individual donors should be able to do at least better than OpenPhil.

Risk-neutral small donors should aim to make better charitable bets at the margin than giga-donors like the Open Philanthropy Project (Open Phil) and Good Ventures using donor lotteries, and can do at least as well as giga-donors by letting themselves be funged

[anonymous]:

The article you link (quote below) suggests the opposite should be true - individual donors should be able to do at least better than OpenPhil

We're making it easier for individual donors to at least be funged since our fund managers will have better information than most individual donors.

What will be these funds' policy on rolling funds over from year to year, if the donations a fund gets exceed the funding gaps the managers are aware of?

(This seems particularly important for funds whose managers are also involved with OpenPhil, given that OpenPhil did not spend its entire budget last year.)

[anonymous]:

Right now we're trying to give fund managers lots of latitude on what to do with the money. If they think there's an argument for saving the money and donating later we'll allow that (but ask for some communication about why saving the money makes sense).

I'd be interested in whether people would prefer a different policy.

My concern is that the marginal effect of donating to one of these funds on the amount of money actually reaching charities might be zero. Given that OpenPhil spent below its budget, and these funds are managed by OpenPhil staff, it appears as though these funds put money on the wrong side of a bottleneck. One of the major constraints on OpenPhil's giving has been wanting charities to have diverse sources of funding; this appears to reduce funding diversity, by converting donations from individual small donors into donations from OpenPhil. What reason do donors have to think they aren't just crowding out donations from OpenPhil's main fund?

The Funds do have some donor-diversifying effect, if only because donors can change whether they give to a Fund based on whether they like its recent beneficiaries, though this doesn't capture all the benefits of diversification.

I could imagine this being more useful if the EA Funds are administered by OPP staff in their non-official capacity, and they have more leeway to run risky experiments or fund things that are hard to publicly explain/justify or might not reflect well on OPP. (This would work best if small EA Funds donors were less concerned about 'wasting' their donation than Good Ventures, though, which is maybe unrealistic.)

I haven't thought much about the tradeoffs, but it does seem to me like you could costlessly get more of those advantages if the four funds were each co-run by a pair of people (one from OPP, one from elsewhere) so it's less likely that any one individual or organization will suffer the fallout for controversial choices.

[anonymous]:

I could imagine this being more useful if the EA Funds are administered by OPP staff in their non-official capacity

I think Nick's administration of the EA Giving DAF is done in his non-official capacity. However, this can only go so far: if one of the fund managers donates to something very controversial, that probably still harms OpenPhil, even if they were funding it as a private individual.

We'll need to have non-OpenPhil staff manage funds in the future to get the full benefits of diversification.

it does seem to me like you could costlessly get more of those advantages if the four funds were each co-run by a pair of people (one from OPP, one from elsewhere) so it's less likely that any one individual or organization will suffer the fallout for controversial choices.

I like this idea. The only potential problem is that the more people you add to the fund, the more they need to reach consensus on where to donate. The need to reach consensus often results in safe options instead of interesting ones.

I'd be very interested in other ideas for how we can make it easier to donate to unusual or controversial options via the fund. Diversification is really critical to the long-term impact of this project.

I like this idea. The only potential problem is that the more people you add to the fund, the more they need to reach consensus on where to donate.

That's true, though you could also give one individual more authority than the others -- e.g., have the others function as advisers, but give the current manager the last word. This is already presumably going to happen informally, since no one makes their decisions in a vacuum; but formalizing it might diffuse perceived responsibility a bit. It might also encourage diversification a bit.

[anonymous]:

What reason do donors have to think they aren't just crowding out donations from OpenPhil's main fund?

The idea is that crowding out OpenPhil is likely better than the alternative for a lot of individual EAs. For example, if you were going to donate to AMF, crowding out OpenPhil is better if you buy OpenPhil's belief that their grantmaking is better in expectation than AMF. If you think 1) that your donations are better in expectation than crowding out OpenPhil, and 2) that OpenPhil staff serving as fund managers will use the money to crowd out OpenPhil, then it makes sense not to donate via the fund in its MVP configuration.

In the future, we plan to use the funds as a platform that will let a wider variety of people do in-depth charity research and have money to support the charities they find. I think EA Funds will then contribute to a much wider variety of ideas.

In principle, if there's unmet demand for these things, then it's a great idea to set up such funds. Overall this infrastructure seems plausibly helpful.

But I'm confused about why, if this is a good idea, Open Phil hasn't already funded it. I wouldn't make such a claim about any possible fund set up in this way - that way leads to playing the Defectbot strategy in the iterated prisoner's dilemma. But in this particular case, I'd expect Open Phil to have much more reason than outside donors do to trust Elie's, Lewis's, and Nick's judgment and value-alignment. Though per Kerry's "minimum viable product" comment below, perhaps this info asymmetry argument will be less true in the future.

I suspect that Open Phil is actually making a mistake by not empowering individuals more to make unaccountable discretionary decisions, so this seems good to try in its current form anyhow. I weakly expect it to outperform just giving the money to Open Phil or the GiveWell top charities. I'm looking forward to seeing what happens.

One thing to note, re diversification (which I do think is an important point in general) is that it's easy to think of Open Phil as a single agent, rather than a collection of agents; and because Open Phil is a collective entity, there are gains from diversification even with the funds.

For example, there might be a grant that a program officer wants to make, but there's internal disagreement about it, and the program officer doesn't have time (given opportunity cost) to convince others at Open Phil why it's a good idea. (This has been historically true for, say, the EA Giving Fund). Having a separate pool of money would allow them to fund things like that.

I think this is an important point. But it's worth acknowledging there's a potential downside to this too -- perhaps the bar of getting others on board is a useful check against errors of individual judgement.

This is great!! Pretty sure I'd be giving more if it felt more like a coordinated effort and less like I have to guess who needs the money this time.

I guess my only concern is: how to keep donors engaged with what's going on? It's not that I wouldn't trust the fund managers, it's more that I wouldn't trust myself to bother researching and contributing to discussions if donating became as convenient as choosing one box out of 4.

[anonymous]:

We plan to have some reporting requirements for fund managers, although we don't yet know how much. What would you be interested in seeing?

I'm assuming people who donated to the fund would get periodic notifications about where the money's being used.

That's great, but the less actively I'm involved in the process the more likely I am to just ignore it. That might just be me though.

I suggest the name of the "Animal Welfare" fund is too restrictive, and that a broader name be adopted instead, such as "Non-Human Animal Issues", to encompass efforts such as personhood, abolition, and cultured meat, in addition to welfare concerns.

It seems strange to have the funds run by people who also direct money on behalf of big grant-making organizations. Under what circumstances will the money end up going somewhere different? I can see the motivation for having EA funds if they were managed by someone independent - say Carl or Paul - but the current incarnation seems to be basically equivalent to just giving GiveWell or OPP money with a cause-based restriction.

A lot of people have been asking for a way to donate to (/ be funged by) OPP, so if this only enables people to do that, I'd still expect it to be quite popular. Some relevant reasons OPP staff gave in their donor suggestions for wanting more money to go to charities than OPP was willing and able to provide:

  • [re Cosecha and ASJ] "Given the amount we’re aiming to allocate to criminal justice reform as a whole, my portfolio has too many competing demands for us to offer more." [I don't know how much this is a factor for the four areas above.]

  • "I see value to ACE having a broad support base to (a) signal to groups that donors care about its recommendations, (b) raise its profile, and attract more donors, and (c) allow it to invest in longer-term development, e.g. higher salaries (i.e. without fear of expanding with a fragile support base)".

  • [re CIWF] "we’re limited in our ability to provide all of them by the public support test and a desire to avoid being the overwhelming funder of any group".

  • [re MIRI] "the ultimate size of our grant was fairly arbitrary and put high weight on accurate signaling about our views". [Note I work at MIRI, though I'm just citing them as an example here.]

  • "we would not want to be more than 50% of 80,000 Hours’ funding in any case (for coordination/dependence reasons)."

  • [re 80K] "my enthusiasm for supporting specific grants to support the effective altruism community has been higher than other decision-makers’ at Open Phil, and we’ve given less than I’ve been inclined to recommend in some other cases."

  • [in general] "There’s an internal debate about how conservative vs. aggressive to be on grants supporting organizations like these with, I think, legitimate arguments on both sides. I tend to favor larger grants to organizations in these categories than other decision-makers at Open Phil."

The slowness of OPP's grant process might also be an advantage for non-OPP funders. (E.g., ACE, FHI, CEA, 80K, and the Ploughshares Fund were informally promoted by OPP staff to third parties before OPP had reached the end of its own organizational decision process.)

The EA Funds strike me as unlikely to capture all the advantages of donor diversification, but they capture some of them.

[anonymous]:

Agree with all of this (including the argument that we're not yet capturing the full diversification value).

I don't think EA Funds will outperform the current situation. In prediction, what generally has the most predictive value is an ensemble of models. What EA Funds does is take the ensemble of models of reality that people have, throw away all that information, and use only one model of reality (Elie's, Lewis's, or Nick's) to make a prediction about where the best use of money is.

I suppose the general argument is that Elie, Lewis, or Nick have an information advantage. That is plausibly true; however, why not just make this information public and let people decide, with their own models, what the best use of their money is? We are in a cooperative environment here where we all want to maximize the total amount of impact, in contrast with general asset management, where one only wants to maximize the returns of the fund's investors.

In conclusion, we lose out on the benefits of model averaging for seemingly no reason.
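As a rough illustration of the model-averaging point (a toy sketch with purely hypothetical numbers, nothing from the post): averaging many noisy estimates of a quantity usually lands closer to the truth than relying on any single estimate.

```python
import random

# Toy sketch of the ensemble-averaging intuition (illustrative numbers only).
# Each "donor model" is a noisy estimate of some true impact value; the average
# of many such estimates usually lands closer to the truth than a single model.

random.seed(0)
true_value = 10.0
donor_models = [true_value + random.gauss(0, 3) for _ in range(50)]

single_error = abs(donor_models[0] - true_value)
ensemble_error = abs(sum(donor_models) / len(donor_models) - true_value)

print(f"error of one model:        {single_error:.2f}")
print(f"error of ensemble average: {ensemble_error:.2f}")
```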

However, what I believe would be useful is a setup where EA Funds makes all the information that they have public. Then people can decide whether they want to donate to EA Funds, and save themselves valuable time, or make their own judgement on where the best place to donate is.

Have you thought about integrating the problem quiz? It could be a button like "help me set my allocation" rather than a pop-up at the beginning, which could derail participation by being too complicated.

I think this will probably be useful to many EAs.

That said, I'm aware something like this has been tried elsewhere and was considered unsuccessful (sorry for not mentioning where; I'm not sure whether I was told this in confidence, but if you message me privately I can tell you more - it was not an EA context).

The reason appears to be that donors want to have a sense of ownership of the success that they have made happen, whereas putting money into a fund makes the donor's impact even more indirect.

(This is also the reason why I personally would be unlikely to use this facility, despite the fact that I also find it difficult work to find optimal giving opportunities.)

This may work if EAs are less glory-seeking donors than non-EAs (and me, for that matter!). I suspect that this is probably the case.

[anonymous]:

It is true that many in the traditional charity market have tried ideas similar to this without much success. I think the EA community is very different from traditional charity and I think the idea might make sense for the EA community. That said, I retain some probability that the idea will fail for the same reasons that it has failed outside of EA.

I'd be interested in the long-run future fund, and in things focused more directly on human wellbeing than generic health and income. I'd also be more interested if these groups not only updated on orgs we all know about but also did or collated exploratory work on speculative opportunities.

I like the idea!

Kind of unrelated, but I've wondered about these first two considerations that people use to pick a charity, as listed above: 

1) which cause is most important
2) which interventions in the cause are most effective

Couldn't there be a cause that is extremely important but just doesn't have any good interventions? Maybe there is a "most effective" intervention for this cause, but it's still not that good, and donating to that intervention doesn't really result in much.

If there aren't any good interventions (including researching the problem further to identify good direct interventions), then presumably the cause isn't so important; it would rate low on the tractability scale.

Maybe a weird corner case is saving/investing to donate to the cause later?

I think 1) and 2) are basically backwards. You should support whichever interventions are most effective, regardless of cause, and if these happen to fall into one cause, then that's the most important cause.

Ah, okay. So tractability is built into the term "most important"?

I thought they were two separate concepts: https://concepts.effectivealtruism.org/concepts/importance-neglectedness-tractability/

I agree that all that really matters is how effective a particular intervention will be in reducing suffering for the amount of money you plan to donate. Other metrics (especially neglectedness) are just heuristics.

I think it's unfortunate we used the word "importance" for one of the factors, since it could also be understood to mean overall how valuable it is to work on something. I think many use the word "scale" now instead for the factor.

If you prioritized by scale only, then you can make a problem arbitrarily large in scale, to the point of uselessness, e.g. "prevent all future suffering".

Presumably wild animal suffering is also much greater in scale than factory farming (or at least the suffering of the farmed animals, setting other effects aside), but it receives much less support since, in part, so far, it seems much less tractable. (Wild animal welfare is still a legitimate cause, though, and it does get support. Wild Animal Initiative was just recommended as a top charity by Animal Charity Evaluators.)

This paper by Butera and Houser (2016) seems relevant and interesting (emphasis mine):

Philanthropy, and particularly ensuring that one’s giving is effective, can require substantial time and effort. One way to reduce these costs, and thus encourage greater giving, could be to encourage delegation of giving decisions to better-informed others. At the same time, because it involves a loss of agency, delegating these decisions may produce less warm-glow and thus reduce one’s charitable impulse. Unfortunately, the importance of agency in charitable decisions remains largely unexplored. In this paper, using a laboratory experiment with real donations, we shed light on this issue. Our main finding is that agency, while it does correlate with self-reported warm-glow, nevertheless seems to play a small role in encouraging giving. In particular, people do not reduce donations when giving decisions are made by algorithms that guarantee efficient recipients but limit donors’ control over giving allocations. Moreover, we find participating in giving groups − a weaker form of delegation − is also effective in that they are appealing to donors who would not otherwise make informed donations, and thus improves overall effective giving. Our results suggest that one path to promoting effective giving may be to create institutions that facilitate delegated generosity.

What will it cost?

[anonymous]:

No cost. In fact, we think we can get lower donation processing fees than might be available to people elsewhere. However, CEA is a plausible recipient for the movement-building fund.

Presumably there's an operational cost to CEA in setting up / running the funds? I'd thought this was what Tom was asking about.
