
The results of the January 2019 CEA donor lottery meant that I was responsible for allocating the donor lottery's $500k funding pool. I entered the donor lottery anonymously, though I now intend to explain my grants and decision-making process publicly; I believe that transparency and the open sharing of ideas are good for effective altruism, and I'm glad to be able to contribute to that here.

I expect that my grant recommendations from this funding pool will ultimately be made in three or four phases; this writeup is a preliminary report on phase 1, and is released simultaneously with my writeup on phase 2.

The decision-making process for phase 1 was largely completed prior to February 2020, and phase-1 grants were not substantially affected by consideration of the Covid-19 pandemic (see "Adjusting for unexpected developments", below). Phase-2 decision-making began after February 2020, and phase-2 grants focused on neglected responses to the Covid-19 pandemic (see phase-2 writeup post). As of December 2020, phase-3 decision-making has not yet begun in earnest.

Overall summary

In phase 1, CEA accepted my recommendations for two earmarked grants to the Good Food Institute:

  • $120k to GFI's European affiliate to support policy advocacy enabling the development and mainstream deployment of food products that replace farmed-animal products.
  • $45k to GFI's Asia–Pacific affiliate to support market research and policy advocacy, likewise to support deployment of alternative protein products in Asia.

This post has top-level sections on:

  • Personal background and preliminaries
  • Considerations on cause areas
  • On medium-sized donors
  • Good Food Institute
  • Earmarked grants
  • Adjusting for unexpected developments

Personal background and preliminaries

I am a trader at a quantitative trading firm and an independent research economist, currently living and working in Hong Kong. I have identified as an effective altruist since 2014. This writeup represents independent work and is not coauthored or endorsed by CEA, the organizations mentioned, or by my employer. Grantee organizations were given a week to review a draft, though final editorial decisions were mine.

My initial approach (and some of the structure of this writeup) was inspired by Adam Gleave's writeup of his allocation of the January 2018 donor lottery pool. I'm grateful to Adam for his strong leading example, and I'd be glad to hold myself out as a resource for donor lottery participants and allocators in the future. Adam certainly set a high bar that I am sad not to have met myself (by my own evaluation).

Philosophically, I assign comparable value to future and present lives and place significant weight on animal welfare (with medium-high uncertainty). With high uncertainty, I largely endorse the standard arguments regarding the overwhelming importance of the far future. I'm undecided but currently skeptical of the marginal approaches to influencing the far future. My personal donation writeups for 2017, 2018, 2019, and 2020 [pending] have more detail.

Since I find myself in substantial agreement with the Open Philanthropy Project and similar large donors about many important ideas, I initially focused on (1) my opinions that disagree on the margin with "common consensus" and (2) organizations whose funding gaps OpenPhil did not fill out of concern about becoming too high a fraction of the organization's funding base.

I began this process expecting my grant recommendations to primarily focus on far-future and EA-meta opportunities, though I did not expect to be comparatively advantaged in finding new organizations and opportunities. To generate an initial list of organizations to consider, I (1) solicited ideas from individuals I trust and (2) reviewed recent grants made by the Long-Term Future Fund and EA Meta Fund.

Considerations on cause areas

(I discuss my post-February-2020 thinking in a section of my phase-2 writeup.)

I wrote in 2018 that I expected to direct donor lottery funds to far-future or EA-meta causes. I considered opportunities in these cause areas, but—somewhat surprisingly—decided to make a phase-1 grant to a different cause area (animal welfare). This section discusses some of my thinking.

My preliminary review of the far-future funding environment revealed a sizable existing pool of large donors who were relatively familiar with the major organizations in the far-future space. Furthermore, I gathered that many of the effective altruists who have made recent fortunes in crypto-related enterprises are familiar with, and more positive than I am on, organizations addressing far-future causes. With medium confidence, I expect this latter segment of donors to grow as the crypto business environment continues to develop. This suggests that far-future causes will see increasing coverage in the coming years, from donors more familiar with the space than I am likely to become without dedicated work.

Finally, I found my initial conversations about EA-meta organizations difficult for me to draw actionable conclusions from. A large part of this difficulty was due to my physical and social distance from the Bay-area- and London-based communities of EA organizations. Furthermore, most people I talked with suggested that they found it difficult to evaluate the work of EA-meta organizations in an outside-view and evidence/results-based framework.

Given that the best options for understanding and evaluating EA-meta organizations involved inside-view qualitative assessments of their staff, principals, and qualitative effects in the community (and I had no private information about any of the individuals or impacts), I felt at a comparative disadvantage in determining funding allocations to them.

These findings led me to update to be more pessimistic about my comparative advantage at thoughtful far-future-related and EA-meta-related funding. I considered a grant to the LTF Fund or EA Meta Fund to delegate the decision to their committee members, and reviewed their recent grants to form an opinion on their marginal grant opportunities. I found these likewise difficult to thoughtfully evaluate, for largely similar reasons, and decided to broaden my consideration of cause areas slightly. I thought that animal welfare was the next-most-promising cause area to investigate. (More on that below.)

On medium-sized donors

Roles for medium-sized donors

One of my secondary objectives in allocating the donor lottery funds was to better understand the role of "medium-sized" EA donors (where I'm using the term to mean roughly $100k–$1m per year). At this scale, how should donors be spending their attention and time? What opportunities are most interesting for them to dig into? These questions have clear implications for donors with annual donations in this range, as well as donors selected in future donor lotteries.

Firstly, it seems clear that there are opportunities for medium-sized donors to take a focused look at very young organizations and materially support their early operations—see Adam Gleave's grant to ALLFED, for example. But my early forays in this direction were somewhat disappointing, as above, and I did not expect I would become sufficiently familiar with the animal welfare space to evaluate early-stage organizations there, either.

However, I now believe there's also a role for medium-sized donors in working with more-established organizations and improving the community's understanding of their work via sustained engagement. The professional evaluators and the EA funds committees do good work, but the community would be stronger with many dozens more narrowly-informed, third-party amateurs. It takes a particular approach to do this effectively in a way that's not mostly redundant, though.

Relative opinions

Say you're a medium-sized donor evaluating an elastic (i.e., exhibiting diminishing marginal returns) organization or opportunity that has already been widely considered. I claim (with medium-high confidence) that you should be primarily focused on forming relative information and opinions, rather than an absolute evaluation of the organization.

This is closely related to the idea of zeroing out, by my former colleague Zvi Mowshowitz—if other actors in the "consensus" (or, if you prefer, the market) are making reasonable decisions, but you can find one thing that they're missing, then you can do well by pushing the world a little bit in the direction implied by the thing that's "underdone".

Perhaps unintuitively, this is still true if you don't have a complete model of the things that the consensus opinion is getting right—so long as you have a better model about the relevant things that they're not.

Relative opinions (example)

Suppose that an organization has ten independent features, each of which can multiply its overall effectiveness by a smaller or larger number. A large group of people evaluated features 1-5, made estimates of features 6-10, and figured out how they wanted to fund the organization compared to other opportunities. Furthermore, assume that they have decided—based on those evaluations and estimates—that it deserves some positive amount of funding (capped at the point where diminishing returns makes the marginal project slightly worse than other marginal opportunities).

Now, say that you're not at all sure about features 1-9, but you took a good look at feature 10 and you are pretty sure that everyone else's estimates missed something that suggests a higher score on it.

Even if you don't know anything about features 1-5, and no one knows much about features 6-9, you can predict that the overall consensus estimate was too low, and the marginal projects of the organization are better than the marginal projects to be found elsewhere. (After all, finding a specific good thing in feature 10 isn't bad news about features 6-9, nor does it mean much about any other organizations...)

So, if you're pretty sure that everyone else did miss this good thing, then funding the organization's next project will be a good move for efficiency.
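A minimal numerical sketch of this argument (the feature multipliers below are invented for illustration, not estimates of any real organization):

```python
import math

# Hypothetical multiplicative model: overall effectiveness is the
# product of ten independent feature multipliers.
consensus = [1.2, 0.9, 1.1, 1.0, 1.3,   # features 1-5: carefully evaluated
             1.0, 1.0, 1.0, 1.0, 1.0]   # features 6-10: rough estimates

consensus_effectiveness = math.prod(consensus)

# You learn nothing new about features 1-9, but your close look at
# feature 10 convinces you its multiplier should be 1.5, not 1.0.
revised = consensus.copy()
revised[9] = 1.5
revised_effectiveness = math.prod(revised)

# The ratio between the two estimates depends only on the one feature
# you re-examined; everyone else's work on features 1-9 carries over.
ratio = revised_effectiveness / consensus_effectiveness  # ~= 1.5
```

The point is that an upward revision on one feature raises the whole estimate by the same factor, no matter how uncertain you remain about the other nine.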

Relative opinions (in application)

The same principle applies in less black-and-white cases: if others have tried to evaluate all the parts of something with many complicated interacting parts, and you kinda-believe-but-aren't-completely-sure that some component is more positive than your best guess of the consensus view, it's still a win for efficiency to update upwards from the consensus belief and get behind the marginal project.

This only really works well if the funding allocated by "relative" analysis is sufficiently small compared to the amount that's allocated by "absolute" analysis. That said, there's still a massive potential efficiency gain from not duplicating work on analyzing the same basics yet another time, and I suspect that "sufficiently small" actually means something like 20% here. What's more, if the EAs doing such research then share their thoughts openly, their findings can still get rolled up into the community's knowledge.

All taken together, I think that specialized relative analysis is relatively underdone (recursion!) and there's still plenty of scope for it to see more play in the EA community.

[ry: I'll probably write these ideas up further in a standalone post, and link it here when I do.]

Good Food Institute


The Good Food Institute is a US-headquartered nonprofit supporting the development and mainstream deployment of food products that replace industrially-farmed animal products. My favorite description of their work came from executive director Bruce Friedrich, who described it at a meetup event as "consigning industrial animal agriculture to the dustbin of history—as quickly as possible". (The more-polished official tagline puts this as "creating alternative proteins that cost the same or less—and taste the same or better—than the products of conventional animal agriculture.")

GFI's typical initiatives look like:

  • directly providing strategic support to the alt-protein private sector,
  • working to move conventional food companies into the sector,
  • mobilizing research scientists to fill identified gaps in the field of alternative protein technology,
  • working to direct governmental, academic, and institutional funding towards open-access research in relevant fields, or
  • targeted lobbying against government policies that would restrain commercial adoption of alternative proteins.

Globally, GFI cooperates with a network of regional affiliates (currently in Asia–Pacific, Brazil, Europe, India, and Israel): organizational teams who operate on the ground in their respective regions but work together closely on global initiatives. Approximately 58% of GFI staff are US-based, with the remaining 42% working for a non-US regional affiliate. (US-based staff also support the international affiliates on projects of international scope.)

GFI has been an Animal Charity Evaluators "top charity" five consecutive times since year-end 2016 (having officially launched in February 2016). Despite their high profile, I believe that they still have substantial room to deploy additional funding—particularly in expanding operations outside the US to address opportunities that seem to me to be particularly neglected. In particular, I believe that this room for additional funding exceeds ACE's recommendations.

My perspective

In the framework of "Relative opinions" above, I consider GFI to be a highly-rated organization that's nevertheless underrated by ACE and the animal-welfare EA consensus in important ways. (Because I am evaluating GFI as an underdone opportunity that's elsewhere highly regarded, this section will look different than, say, Adam Gleave's from-zero-baseline review of ALLFED in his 2017 donor lottery report.)

The primary mistake that I think the consensus evaluation makes regarding GFI: I expect the effective altruists most interested in animal welfare overestimate how compelling non-EAs will find moral suasion that assumes that animal suffering matters. To be clear, I do think that animal suffering matters and that such an approach can be effective—but I believe the consensus of donors funding animal-welfare organizations will (on average) overestimate its effectiveness in, say, the harder-to-reach half of the general population. Most people are, in general, bad at modeling people not like them, and furthermore bad at understanding how bad they are at modeling people not like them.

By contrast, GFI's theory of change does not require that moral suasion will be effective in changing opinions regarding farmed animal products. I strongly suspect that the GFI-led effort could be successful in changing the world via supply-side economic levers even if consumer opinions about farmed meat don't materially change.

The organization's principals understand this and I believe they are explicitly playing for an endgame where replacements to farmed meat win on essentially economic terms. I consider the ability of the GFI leadership to take this (uncommon) perspective as a strongly positive indicator about their strategic sense, which I also expect to be underappreciated by the donor/evaluator consensus.

Finally, I don't have material opinions about GFI that are more negative than (my impression of) consensus. My own amateur evaluation of their overall operations largely agrees with that of ACE.


GFI-Europe

GFI-Europe is a relatively new GFI regional affiliate (it began operations in 2019) whose typical initiatives look like (1) lobbying on EU/UK regulatory issues affecting the regional alternative-protein industry, and (2) lobbying to direct Europe-based public funding towards supporting alternative-protein research.

At the end of 2019, the organization consisted only of managing director Richard Parr and European policy manager Alex Holst, though by September 2020, their staff count had grown to six. Given the lobbying opportunities available across the continent, it seems reasonable to me that GFI-Europe could scale their initiatives through a staff size of at least 20, suggesting significant room for more funding.

At the time, GFI-Europe was seeking commitments of funding to allow them to plan their initial staff expansion; while funding for key organizational roles was provisioned from funds from GFI-US, Richard indicated to me significant room to use additional funding to expand their policy staff through 2020 and 2021.

Additional policy staff would in turn let them engage with a wider variety of policymakers more frequently—and bring higher-quality information and better-targeted suggestions to those meetings.

GFI's per-staff funding costs for policy staff are, as best I can tell, typical for the nonprofit sector.


GFI-Asia-Pacific

GFI-Asia-Pacific operates out of Hong Kong, and is involved in a variety of initiatives supporting private-sector alternative-protein development and deployment in the APAC region, as well as additional projects to influence country-level policy. While the organization is slightly more established than GFI-Europe (9 staff as of December 2020, operating since early 2019 under managing director Elaine Siu), its relative newness and the sheer size of the APAC region suggest significant room for organizational growth and additional funding.

At the time, GFI-APAC was maintaining active operations on a number of projects in private-sector engagement, industry research, and policy advocacy, and was considering additional initiatives in market research and establishing a team of external science/technology consultants. (GFI's "SciTech" specialists are deployed as field experts representing GFI's opinions to media sources, government consultations, or industry groups.)

At the time, additional funding was required to allow GFI-APAC to retain their first Asia-based SciTech specialist, a senior professor of food science at a high-profile East Asian university. Previously, GFI had relied on specialists from the US or other regions for this work, which was less effective for a number of reasons.

Earmarked grants

GFI's recommendation for donors is to donate unrestricted funds to the parent organization for general operations. I believe that this is the correct course for small donors generally excited about GFI's approach to changing the world.

However, I had the opportunity here to take a deeper dive into specific initiatives at GFI—and while I didn't feel I could go over the whole organization at a sufficient level of detail, I did feel like I could take a deeper dive into the projects and plans of Elaine and Richard's teams. (A point of comparative advantage here is that I'm based out of Hong Kong, not the US.) My hope is that my targeted grants here can give GFI an external—though amateur—vote of confidence in the specific plans and leadership of GFI-APAC and GFI-Europe.

A major part of my excitement about GFI is my belief that the work they do is dramatically neglected. And while GFI leadership has recognized that the opportunities in consumer markets outside the US are even more dramatically neglected, I personally believe (with medium-weak confidence) that even they are under-estimating the appropriate scope for investment in GFI's operations abroad. (ACE's 2020 evaluation of GFI suggests a similar belief.) I hope to understand this picture better in my future discussions with the GFI team.

Finally, I expect that my earmarking of grant funds will be partially funged within the GFI organization, and I think this is inevitable, basically fine, and in fact weakly good.

Adjusting for unexpected developments

In late February 2020, the world changed, and I had to ask to what degree my funding decisions should change with it.

At the time I was finalizing these phase-1 grants to GFI-Europe and GFI-APAC, early news about the Covid-19 epidemic began to break, and it became substantially likely that the economic environment in 2020 would turn out very differently from that of 2019 (though it was still unclear how that difference would affect the EA funding environment). In the face of financial uncertainty, GFI placed a hold on its staff expansion and additionally eliminated three human-resources roles that had been primarily focused on hiring.

(Even in hindsight, it's difficult to interpret the year-over-year change in GFI's fundraising revenue from ACE's November 2020 estimates, as GFI's revenue saw a sharp increase from 2018 to 2019. Compared to 2018, however, ACE projects roughly a +55% increase in fundraising revenue against roughly a +130% increase in operating expenditures.)

While I thought at the time it was likely that there would be substantial scope to fund neglected pandemic-related interventions (and now believe I was right), I also suspected that overall stability of funding would be key to avoid a multi-year setback to GFI's international expansion plans. (The grants under consideration would represent roughly 1% of GFI's overall projected 2020 fundraising, so concerns about destabilizing funding were nontrivial though not overwhelming.)

As we were already deep into discussions about specific funding gaps, I felt it would be substantially disruptive to pull back from the grants under discussion. In the end, I decided to proceed with grant amounts slightly reduced from the penciled-in numbers, and began searching for promising Covid-19-related interventions in the next phase of grantmaking (see following post).

What next?

Read my writeup of phase-2 grants here.

Comments

Thanks a lot for publishing this report, it's great to see that so much careful thought has gone into your decision.

I want to highlight that giving to the donor lottery is a highly effective way to donate even if you don't publish such a report. I've heard people say that they were hesitant to give to the donor lottery because they didn't want to be obliged to publish articles like these. Just like some people choose to report publicly about their ordinary donations and others don't, it's fine if some report about their donor lottery decision and others don't. Your decision whether to participate in the donor lottery doesn't affect the probability that someone else will win and publish a report.
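The independence claim follows from the lottery's backstopped design: with a guarantor covering the pot, each entrant wins with probability proportional to their own contribution over a fixed block size, and any unclaimed share of the draw goes to the guarantor. A minimal sketch (the block size and contribution amounts are illustrative assumptions, not the actual lottery's parameters):

```python
BLOCK_SIZE = 100_000  # hypothetical guaranteed block size, in dollars

def win_probability(my_contribution: float, block_size: float = BLOCK_SIZE) -> float:
    """In a backstopped lottery, a donor's win probability depends only on
    their own contribution and the fixed block size; other entrants'
    decisions don't change it, because the guarantor absorbs whatever
    share of the draw is left unclaimed."""
    return my_contribution / block_size

# Alice's odds are identical whether or not Bob enters:
alice_alone = win_probability(5_000)
alice_with_bob_entered = win_probability(5_000)  # Bob's entry is irrelevant
```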

You can read more about the lottery here.

Finally, I expect that my earmarking of grant funds will be partially funged within the GFI organization, and I think this is inevitable, basically fine, and in fact weakly good.

I received a private request (from an early reviewer of this post) to expand on my thoughts here, so a few more words:

When making decisions under collective uncertainty, aggregating information is a hard problem (citation not required). I think that my relative opinions here push the world towards a more efficient allocation, but I recognize that my opinions about GFI are inevitably incomplete. So if I overstated my certainty too much when translating my opinions into effects-on-the-world, I expect I would be making the allocation of resources less efficient overall. If I insisted on absolutely no counterfactual funging, I would be overstating my confidence.

On the other hand, if I trust GFI to take my grants in the spirit that they're intended, then I expect they'll take them as information given in good faith, trust that I was trying to communicate something that I thought was not already known to them, consider what things they know that (they think) were not known to me, and decide what the net effect of my additional opinion should be. (This should remind you of Aumann's agreement theorem, if you're familiar with that concept from the rationality literature.)
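As a toy illustration of that kind of good-faith aggregation (a standard precision-weighted average of two Gaussian estimates; the numbers are invented and not a model of any actual GFI decision):

```python
def combine(mean_a: float, var_a: float, mean_b: float, var_b: float):
    """Precision-weighted average of two independent Gaussian estimates:
    the combined mean leans toward whichever party is more certain,
    and the combined variance is smaller than either input's."""
    w_a, w_b = 1 / var_a, 1 / var_b
    mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    var = 1 / (w_a + w_b)
    return mean, var

# A donor's noisy read (mean 1.5, variance 0.4) combined with the
# organization's better-informed read (mean 1.1, variance 0.1):
mean, var = combine(1.5, 0.4, 1.1, 0.1)  # mean ~= 1.18, var ~= 0.08
```

With those inputs, the combined estimate lands much closer to the better-informed party's view, which is the sense in which accepting some funging defers appropriately to the organization's superior information.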

(I think it's also plausible in general that earmarking $X in a vote of confidence in a particular program prompts the receiving organization to update their beliefs and direct more non-earmarked funding than they would have otherwise, causing the opposite of funging.)

Do I actually believe that GFI's principals are as good at playing this Aumann-esque information-aggregation game as the professional colleagues I'm used to working with? Probably not, no. But this is the way I think cooperative allocation of resources should play out, and I think that the EA community only gets better at it if we start discussing ideas like this and playing "cooperate" in the epistemic prisoners' dilemma. And my instinct is actually that if some of my funding ends up being funged towards initiatives that GFI principals think are highest-value, it's probably net good for the overall work.
