Joel Tan🔸

Founder @ CEARCH
exploratory-altruism.org/

Bio

I run the Centre for Exploratory Altruism Research (CEARCH), a cause prioritization research and grantmaking organization.

Sequences

CEARCH: Research Methodology & Results

Comments


Thanks Sjir! I'm grateful for the transparency and data sharing throughout - I don't see how we could have done the evaluation otherwise!

I don't have estimates for how the multiplier changes over time, though one would expect a decline, driven by the future pledging pool being less EA-aligned and less zealous than earlier batches.

For the value of a *pledge* - based on analysis of the available data, donations don't appear to increase over time (for any given pledge batch), so after relevant temporal discounts (inflation, etc.), the value of a pledge is relatively front-loaded.
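To make the point concrete, here's a minimal sketch (not our actual model - the discount rate, horizon, and flat-donation assumption are all hypothetical):

```python
# Hypothetical illustration, not CEARCH's actual model: assume a pledge
# batch donates a flat nominal amount each year, then apply a combined
# annual discount (inflation, attrition risk, time preference).
DISCOUNT_RATE = 0.07   # assumed combined annual discount rate
YEARS = 40             # assumed pledge horizon
ANNUAL_DONATION = 1.0  # flat nominal donations, normalized

pv_by_year = [ANNUAL_DONATION / (1 + DISCOUNT_RATE) ** t for t in range(YEARS)]
total_pv = sum(pv_by_year)

# Share of total present value captured in the first decade
first_decade_share = sum(pv_by_year[:10]) / total_pv
print(f"First 10 years: {first_decade_share:.0%} of total present value")
```

With these assumptions, roughly half the pledge's present value accrues in the first quarter of the pledge period - hence "front-loaded".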

Hi Nuno,

We report a crude version of uncertainty intervals at the end of the report (pg 28) - taking the lower bound estimates of all the important variables, the multiplier would be 0x, while taking the upper bound estimates, it would be 100x. 
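For concreteness, a minimal sketch of this kind of crude interval propagation - the variable names and bounds below are illustrative stand-ins, not the report's actual inputs:

```python
# Illustrative only: crude interval propagation of the kind described
# above. Variable names and bounds are hypothetical, not the report's.
import math

# (low, high) bounds for each multiplicative input to the multiplier
bounds = {
    "counterfactual_adjustment": (0.0, 0.9),    # share of giving GWWC caused
    "donations_per_dollar_spent": (5.0, 125.0),
    "quality_adjustment": (0.5, 0.9),
}

low = math.prod(lo for lo, _ in bounds.values())
high = math.prod(hi for _, hi in bounds.values())
print(f"Crude multiplier interval: {low:.0f}x to {high:.0f}x")
```

A zero lower bound on any single input drives the whole product to 0x, which is why crude intervals like this end up extremely wide.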

In terms of miscellaneous adjustments, we made an attempt to be comprehensive; for example, we adjust for:

(a) expected prioritization of pledges over donations by GWWC in the future;
(b) company pledgers;
(c) post-retirement donations;
(d) spillover effects on non-pledge donations;
(e) indirect impact on the EG ecosystem (EG incubation, EG Summit);
(f) impact on the talent pipeline;
(g) decline in the counterfactual due to the growth of EA (i.e. more people are likely to hear of effective giving regardless of GWWC); and
(h) reduced political donations.

The challenge is that a lot of these variables lack the necessary data for quantification, and of course, there may be additional important considerations we've not factored in.

That said, I'm not sure we would get a meaningful negative effect from people being less able to do ambitious things because of fewer savings - partly for effect size reasons (10% isn't much), and also because you would theoretically have people motivated by E2G to do very ambitious for-profit work when they otherwise would have done something less impactful but more subjectively fulfilling (e.g. traditional nonprofit roles). It does feel like a just-so story either way, so I'm not certain whether the best model would include such an adjustment in the absence of good data.

https://docs.google.com/spreadsheets/d/1MF9bAdISMOMV_aOok9LMyKbxDEpOsvZ9VO8AfwsS6_o/

Probably majority AI, given the organizations being donated to and the distribution of funding. This contrasts with the non-GWWC EG organizations in Europe, where I believe there is a much greater focus on climate, mainly to meet donors where they are.

They're working on creating an option to make it easy for posters to add the diamond, but in the meantime you can DM the forum team (I did!).

Hi Nicolaj,

Thanks for sharing! That's really interesting. Couple of thoughts:

(1) CEARCH uses n=1 when modelling the value of income doublings, because we've tended to prioritize health interventions, where the health benefits tend to swamp the economic benefits anyway (and we've tended to prioritize health interventions because of the heuristic that NCDs are a big and growing problem which policy can cheaply combat at scale, vs poverty, which by the nature of economic growth is declining over time).

(2) The exception is when modelling the counterfactual value of government spending: a successful policy advocacy intervention redirects such spending, so it has to be factored in, albeit at a discount to EA spending and taking into account country wealth (https://docs.google.com/spreadsheets/d/1io-4XboFR4BkrKXgfmZHQrlg8MA4Yo_WLZ7Hp6I9Av4/edit?gid=0#gid=0).

There, the modelling is more precise, and we use n=1.26 as a baseline estimate, per Layard, Mayraz and Nickell's review of a couple of SWB surveys (https://www.sciencedirect.com/science/article/abs/pii/S0047272708000248). I'd be interested in hearing how your team arrived at n=1.87 - I presume this is a transformation of an initial n=1 based on your temporal discounts?
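(As a footnote for other readers: a minimal sketch of how this parameter works, assuming the standard isoelastic/CRRA utility form; the income levels are illustrative, not from any actual model.)

```python
# Minimal sketch, assuming the standard isoelastic (CRRA) utility form;
# income levels here are illustrative, not from any CEARCH model.
import math

def utility(c: float, eta: float) -> float:
    """Isoelastic utility: ln(c) at eta=1, else (c^(1-eta) - 1)/(1-eta)."""
    if math.isclose(eta, 1.0):
        return math.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)

def value_of_doubling(c: float, eta: float) -> float:
    """Utility gain from doubling consumption, starting from level c."""
    return utility(2 * c, eta) - utility(c, eta)

for eta in (1.0, 1.26, 1.87):
    # At eta=1 a doubling is worth ln(2) ~ 0.693 at any income level;
    # at higher eta, doublings at lower incomes count for relatively more.
    print(f"eta={eta}: at c=1 -> {value_of_doubling(1.0, eta):.3f}, "
          f"at c=10 -> {value_of_doubling(10.0, eta):.3f}")
```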

Cheers,
Joel

It's true that people with abhorrent views in one area might have interesting or valuable things to say in other areas - Richard Hanania, for example, has made insightful criticisms of the modern American right.

However, if you platform/include people with abhorrent views (e.g. "human biodiversity", the polite euphemism for the fundamentally racist view that some racial groups have lower IQ than others - a view held by a number of Manifest speakers), you run into the following problem: the bad chases out the good.

The net effect of inviting in people with abhorrent views is that it turns off most decent people, either because they morally object to associating with such abhorrent views, or because they just don't want the controversy. You end up with a community with an even smaller percentage of decent people and a higher proportion of bigots and cranks, which in turn turns off even more decent people, and so on and so forth. Scott Alexander himself says it best in his article on witches:

The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.

At the end of the day, platforming anyone whatsoever will leave you only with people rejected by polite society, and being open to all ideas will leave you with only the crank ones.

Mathias can share more (assuming no confidentiality concerns), but from talking to both him and others in the aid space - it's just brutally difficult, and politicians aren't interested.

Generally, they have a combination of the following characteristics: (a) a direct understanding of what their own grantmaking organization is doing and why; (b) deep knowledge of the object-level issue (e.g. what GHD/animal welfare/longtermist projects to fund); and (c) extensive knowledge of the overall meta landscape (e.g. what other important people/organizations there are, the background history of EA funding up to a decade in the past, etc.).

Hi Linch,

Thanks for engaging. I appreciate that we can have a fairly object-level disagreement over this issue; it's not personal, one way or another.

Meta point to start: We do not make any of these criticisms of EA Funds lightly, and when we do, it's against our own interests, because we ourselves are potentially dependent on EAIF for future funding.

To address the points brought up, generally in the order that you raised them:

(1) On the fundamental matter of publication. I would like to flag that, from checking the email chain plus our own conversation notes (both verbatim and cleaned-up), there was no request that this not be publicized.

For all our interviews, whenever someone flagged that X data or Y document or indeed the conversation in general shouldn't be publicized, we respected this and did not do so. In the public version of the report, this is most evident in our spreadsheet, where a whole bunch of grant details have been redacted; but more generally, anyone with the "true" version of the report shared with the MCF leadership will also be able to spot differences. We also redacted all qualitative feedback from the community survey, and by default anonymized all expert interviewees who gave criticisms of large grantmakers, to protect them from backlash.

I would also note that we generally attributed views to, and discussed, "EA Leadership" in the abstract, both because we didn't want to make this a personal criticism, and also because it afforded a degree of anonymity.

At the end of the day, I apologize if the publication was not in line with what EA Funds would have wanted - I agree it's probably a difference in norms. In a professional context, I'm generally comfortable with people relaying that I said X in private, unless there was an explicit request not to share (e.g. I was talking to a UK-based donor yesterday, and I shared a bunch of my grantmaking views. If he wrote a post on the forum summarizing the conversations he had with a bunch of research organizations and donor advisory orgs, including our own, I wouldn't object). More generally, I think if we have some degree of public influence (including by the money we control) it would be difficult from the perspective of public accountability if "insiders" such as ourselves were unwilling to share with the public what we think or know.

(2) For the issue of CEA stepping in: In our previous conversation, you relayed that you asked a senior person at CEA and they in turn said that "they’re aware of some things that might make the statement technically true but misleading, and they are not aware of anything that would make the statement non-misleading, although this isn’t authoritative since many thing happened at CEA". For the record, I'm happy to remove this since the help/assistance, if any, doesn't seem too material one way or another.

(3) For whether it's fair to characterize EAIF's grant timelines as unreasonably long. As previously discussed, I think the relevant metric is EAIF's own declared timetable ("The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks."). This is because organizations and individuals make plans based on when they expect to get an answer - when to begin applying; whether to start or stop projects; whether to go find another job; whether to hire or fire; whether to reach out to another grantmaker who isn't going to support you until and unless you have already exhausted the primary avenues of potential funding.

(4) The issue of the major donor who we relayed was frustrated/turned off. You flag that you're keeping tabs on all the major donors, and so don't think the person in question is major. While I agree that it's somewhat subjective, it's also true that this is a HNWI who, beyond their own giving, is also sitting on the legal or advisory boards of many other significant grantmakers and philanthropic outfits. Also, knowledgeable EAs in the space have generally characterized this person to me as an important meta funder (in the context of my own organization then thinking of fundraising, and being advised as to whom to approach). So even if they aren't major in the sense that OP or EA Funds are, they could reasonably be considered fairly significant. In any case, the discussion is backwards, I think - I agree that they don't play as significant a role in the community right now (and so your assessment of them as non-major is reasonable), but that would be because of the frustration they have had with EA Funds (and, to be fair, the EA community in general, I understand). So perhaps it's best to understand this as potentially vs currently major.

(5) On whether it's fair to characterize EA Funds leadership as being strongly dismissive of cause prioritization. We agree that grants have been made to RP; so the question is cause prioritization outside OP and OP-funded RP. Our assessment of EA Funds' general scepticism of prioritization was based, among other things, on what we reported in the previous section: "They believe cause prioritization is an area that is talent constrained, and there aren't a lot of people they feel great giving to, and it's not clear what their natural pay would be. They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization. **In general, they don't think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work.**" In your comment, you dispute that the bolded part in particular is true, saying "AFAIK nobody at EA Funds believes this."

We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.

TLDR: Fundamentally, I stand by the accuracy of our conversation notes.

(a) Epistemically, it's more likely that one doesn't remember what one said previously than that the interviewer (if in good faith) catastrophically misunderstood and recorded something that wholesale wasn't said at all (as opposed to a more minor error - we agree that that can totally happen; see below).

(b) From my own personal perspective - I used to work in government and in consulting (for governments). It was standard practice to have notes of meetings, made by junior staffers and then submitted to more senior staff for edits and approval. Nothing resembling this (i.e. total misunderstanding tantamount to fabrication, saying that XYZ was said when nothing of the sort took place) ever happened to either me or anyone else.

(c) My word does not need to be taken for this. We interviewed other people, and I'm beginning to reach out to them again to check that our notes match what they said. One has already responded (the person we labelled Expert 5 on page 34 of the report); they said "This is all broadly correct" but requested we make some minor edits to the following paragraphs (changes indicated by bold and strikethrough):

  • Expert 5: Reports both substantive and communications-related concerns about EA Funds leadership.

    For the latter, the expert reports both himself and others finding communications with EA Funds leadership difficult and the conversations confusing.

    For the substantive concerns – beyond the long wait times EAIF imposes on grantees, the expert was primarily worried that EA Funds leadership has been unreceptive to new ideas and that they are unjustifiably confident that EA Funds is fundamentally correct in its grantmaking decisions. In particular, it appears to the expert that EA Funds leadership does not believe that additional sources of meta funding would be useful **for non-EIAF grants** [phrase added] – they believe that projects unfunded by EAIF do not deserve funding at all (rather than some projects perhaps not being the right fit for the EAIF, but potentially worth funding by other funders with different ethical worldviews, risk aversion or epistemics). Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, ~~and so ended up disengaging from further meta grantmaking coordination~~ **and this likely is one reason they ended up disengaging from further meta grantmaking coordination** [replaced].

My even-handed interpretation of this overall situation (trying to be generous to everyone) is that what was reported here ("In general, they don't think that other funders outside of OP need to do work on prioritization") was something the EA Funds interviewee said relatively casually (not necessarily a deep and abiding view, and so not something worth remembering) - perhaps indicative of scepticism of a lot of cause prioritization work, but not of literally thinking nothing outside OP/RP is worth funding. (We actually do agree with this scepticism, to an extent.)

(6) On whether our statement that "EA Funds leadership doesn't believe that there is more uncertainty now with EA Funds' funding compared to other points in time" is accurate. You say that this is clearly false. Again, I stand by the accuracy of our conversation notes. In fact, I personally and distinctly remember this particular exchange, because it stood out, as did the exchange that immediately followed, on whether OP's use of the fund-matching mechanism creates more uncertainty.

My generous interpretation of this situation is, again, some things may be said relatively casually, but may not be indicative of deep, abiding views.

(8) For the various semantic disagreements. Some of this we discussed above (e.g. the OP cause prioritization stuff); for the rest -

On whether this part is accurate: "Leadership is of the view that the current funding landscape isn't more difficult for community builders". Again, we do hold that this was said, based on the transcripts. And again, to be even-handed, I think your interpretation (b) is right - probably your team was thinking of the baseline as 2019, while we were thinking mainly of 2021 to now.

On whether this part is accurate: "The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they're reliably around; only if the [Meta-Charity Funders] was around for some time would EA Funds find it worth factoring into their plans." I don't think we disagree too much, if we agree that EA Funds' position is that coordination is only worthwhile if the counterpart is around for a while. Otherwise, it's just some subjective disagreement on what coordination is or what significant degrees of it amount to.

On this statement: [EA Funds believes] "so if EA groups struggle to raise money, it's simply because there are more compelling opportunities available instead."

In our discussion, I asked about the community building funding landscape being worse; the interviewee disagreed with this characterization, and started discussing how it's more that standards have risen (which we agree is a factor). The issue is that the other factor - objectively less funding being available - was not brought up, even though it is, in our view, the dominant factor (and if you asked community builders, this would be all they talk about). I think our disagreement here is partly subjective - over what a bad funding landscape is, and also the right degree of emphasis to put on rising standards vs less funding.

(9) EA Funds not posting reports or having public metrics of success. Per our internal back-and-forth, we've clarified that we mean reports of success or having public metrics of success. We didn't view reports on payouts as evidence of success, since payouts are a cost, not the desired end goal in themselves. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or, much more preferably, reports on impact (e.g. those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved). Speaking for my own organization, I don't think the people funding our regranting budgets would be happy if I reported the mere spending as evidence of success.

(OVERALL) For what it's worth, I'm happy to agree to disagree, and call it a day. Both your team and mine are busy with our actual work of research/grantmaking/etc, and I'm not sure if further back and forth will be particularly productive, or a good use of my time or yours.
