
Joel Tan

Founder @ CEARCH
1565 karma · Joined Aug 2022
exploratory-altruism.org/

Bio

I run the Centre for Exploratory Altruism Research (CEARCH), a cause prioritization research and grantmaking organization.

Sequences (1)

CEARCH: Research Methodology & Results

Comments (149)

Topic contributions (1)

Generally, they have a combination of the following characteristics: (a) a direct understanding of what their own grantmaking organization is doing and why, (b) deep knowledge of the object-level issue (e.g. what GHD/animal welfare/longtermist projects to fund), and (c) extensive knowledge of the overall meta landscape (e.g. in terms of what other important people/organizations there are, the background history of EA funding up to a decade in the past, etc).

Hi Linch,

Thanks for engaging. I appreciate that we can have a fairly object-level disagreement over this issue; it's not personal, one way or another.

Meta point to start: We do not make any of these criticisms of EA Funds lightly, and when we do, it's against our own interests, because we ourselves are potentially dependent on EAIF for future funding.

To address the points brought up, generally in the order that you raised them:

(1) On the fundamental matter of publication. I would like to flag that, from checking the email chain plus our own conversation notes (both verbatim and cleaned-up), there was no request that this not be publicized.

For all our interviews, whenever someone flagged that X data or Y document or indeed the conversation in general shouldn't be publicized, we respected this and did not do so. In the public version of the report, this is most evident in our spreadsheet, where a whole bunch of grant details have been redacted; but more generally, anyone with the "true" version of the report shared with the MCF leadership will also be able to spot differences. We also redacted all qualitative feedback from the community survey, and by default anonymized all expert interviewees who gave criticisms of large grantmakers, to protect them from backlash.

I would also note that we generally attributed views to, and discussed, "EA Leadership" in the abstract, both because we didn't want to make this a personal criticism, and also because it afforded a degree of anonymity.

At the end of the day, I apologize if the publication was not in line with what EA Funds would have wanted - I agree it's probably a difference in norms. In a professional context, I'm generally comfortable with people relaying that I said X in private, unless there was an explicit request not to share (e.g. I was talking to a UK-based donor yesterday, and I shared a bunch of my grantmaking views; if he wrote a post on the forum summarizing the conversations he had with a bunch of research organizations and donor advisory orgs, including our own, I wouldn't object). More generally, I think that if we have some degree of public influence (including via the money we control), it would be difficult from the perspective of public accountability if "insiders" such as ourselves were unwilling to share with the public what we think or know.

(2) For the issue of CEA stepping in: In our previous conversation, you relayed that you asked a senior person at CEA and they in turn said that "they’re aware of some things that might make the statement technically true but misleading, and they are not aware of anything that would make the statement non-misleading, although this isn’t authoritative since many thing happened at CEA". For the record, I'm happy to remove this since the help/assistance, if any, doesn't seem too material one way or another.

(3) For whether it's fair to characterize EAIF's grant timelines as unreasonably long. As previously discussed, I think the relevant metric is EAIF's own declared timetable ("The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks."). This is because organizations and individuals make plans based on when they expect to get an answer - when to begin applying; whether to start or stop projects; whether to go find another job; whether to hire or fire; whether to reach out to another grantmaker who isn't going to support you until and unless you have already exhausted the primary avenues of potential funding.

(4) The issue of the major donor we relayed was frustrated/turned off. You note that you're keeping tabs on all the major donors, and so don't think the person in question is major. While I agree that it's somewhat subjective - it's also true that this is a HNWI who, beyond their own giving, is also sitting on the legal or advisory boards of many other significant grantmakers and philanthropic outfits. Also, knowledgeable EAs in the space have generally characterized this person to me as an important meta funder (in the context of my own organization then thinking of fundraising, and being advised as to whom to approach). So even if they aren't major in the sense that OP (or EA Funds) are, they could reasonably be considered fairly significant. In any case, the discussion is backwards, I think - I agree that they don't play as significant a role in the community right now (and so your assessment of them as non-major is reasonable), but that would be because of the frustration they have had with EA Funds (and, to be fair, the EA community in general, I understand). So perhaps it's best to understand this as potentially vs currently major.

(5) On whether it's fair to characterize EA Funds leadership as being strongly dismissive of cause prioritization. We agree that grants have been made to RP; so the question is cause prioritization outside OP and OP-funded RP. Our assessment of EA Funds' general scepticism of prioritization was based, among other things, on what we reported in the previous section: "They believe cause prioritization is an area that is talent constrained, and there aren't a lot of people they feel great giving to, and it's not clear what their natural pay would be. They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization. In general, they don't think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work." In your comment, you dispute that the bolded part in particular ("In general, they don't think that other funders outside of OP need to do work on prioritization") is true, saying "AFAIK nobody at EA Funds believes this."

We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.

TLDR: Fundamentally, I stand by the accuracy of our conversation notes.

(a) Epistemically, it's more likely that one doesn't remember what one said previously vs the interviewer (if in good faith) catastrophically misunderstanding and recording something that wholesale wasn't said at all (as opposed to a more minor error - we agree that that can totally happen; see below).

(b) From my own personal perspective - I used to work in government and in consulting (for governments). It was standard practice to have notes of meetings, made by junior staffers and then submitted to more senior staff for edits and approval. Nothing resembling this (i.e. a total misunderstanding tantamount to fabrication, saying that XYZ was said when nothing of the sort took place) ever happened, to me or anyone else.

(c) My word does not need to be taken for this. We interviewed other people, and I'm beginning to reach out to them again to check that our notes match what they said. One has already responded (the person we labelled Expert 5 on Page 34 of the report); they said "This is all broadly correct" but requested we make some minor edits to the following paragraphs (changes indicated by the bracketed markers below):

  • Expert 5: Reports both substantive and communications-related concerns about EA Funds leadership.

    For the latter, the expert reports both himself and others finding communications with EA Funds leadership difficult and the conversations confusing.

    For the substantive concerns – beyond the long wait times EAIF imposes on grantees, the expert was primarily worried that EA Funds leadership has been unreceptive to new ideas and that they are unjustifiably confident that EA Funds is fundamentally correct in its grantmaking decisions. In particular, it appears to the expert that EA Funds leadership does not believe that additional sources of meta funding would be useful for non-EAIF grants [phrase added] – they believe that projects unfunded by EAIF do not deserve funding at all (rather than some projects perhaps not being the right fit for the EAIF, but potentially worth funding by other funders with different ethical worldviews, risk aversion or epistemics). Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, [struck through: and so ended up disengaging from further meta grantmaking coordination] and this likely is one reason they ended up disengaging from further meta grantmaking coordination [replacement].

My even-handed interpretation of this overall situation (trying to be generous to everyone) is that what was reported here ("In general, they don't think that other funders outside of OP need to do work on prioritization") was something the EA Funds interviewee said relatively casually (not necessarily a deep and abiding view, and so not something worth remembering) - perhaps indicative of scepticism of a lot of cause prioritization work, but not literally thinking nothing outside OP/RP is worth funding. (We actually do agree with this scepticism, to an extent.)

(6) On whether our statement that "EA Funds leadership doesn't believe that there is more uncertainty now with EA Funds' funding compared to other points in time" is accurate. You say that this is clearly false. Again, I stand by the accuracy of our conversation notes. And in fact, I personally and distinctly remember this particular exchange, because it stood out, as did the exchange that immediately followed, on whether OP's use of the fund-matching mechanism creates more uncertainty.

My generous interpretation of this situation is, again, some things may be said relatively casually, but may not be indicative of deep, abiding views.

(8) For the various semantic disagreements. Some of it we discussed above (e.g. the OP cause prioritization stuff); for the rest -

On whether this part is accurate: "Leadership is of the view that the current funding landscape isn't more difficult for community builders". Again, we do hold that this was said, based on the transcripts. And again, to be even-handed, I think your interpretation (b) is right - probably your team is thinking of the baseline as 2019, while we were thinking mainly of 2021 to now.

On whether this part is accurate: "The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they're reliably around; only if the [Meta-Charity Funders] was around for some time, would EA Funds find it worth factoring into their plans." I don't think we disagree too much, if we agree that EA Funds' position is that coordination is only worthwhile if the counterpart is around for a bit. Otherwise, it's just some subjective disagreement on what coordination is or what significant degrees of it amount to.

On this statement: "[EA Funds believes] so if EA groups struggle to raise money, it's simply because there are more compelling opportunities available instead."

In our discussion, I asked about the community building funding landscape being worse; the interviewee disagreed with this characterization, and started discussing how it's more that standards have risen (which we agree is a factor). The issue is that the other factor, of objectively less funding being available, was not brought up, even though it is, in our view, the dominant factor (and if you ask community builders, this will be all they talk about). I think our disagreement here is partly subjective - over what a bad funding landscape is, and also the right degree of emphasis to put on rising standards vs less funding.

(9) EA Funds not posting reports or having public metrics of success. Per our internal back-and-forth, we've clarified that we mean reports of success or having public metrics of success. We didn't view reports on payouts to be evidence of success, since payouts are a cost, and not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or, much more preferably, reports on impact (e.g. those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved). Like, speaking for my own organization, I don't think the people funding our regranting budgets would be happy if I reported the mere spending as evidence of success.

(OVERALL) For what it's worth, I'm happy to agree to disagree, and call it a day. Both your team and mine are busy with our actual work of research/grantmaking/etc, and I'm not sure if further back and forth will be particularly productive, or a good use of my time or yours.

On (2). If you go to 80k's front page (https://80000hours.org/), there is no mention that the organization's focus is AGI or that they believe it to be the most important cause. For the other high-level pages accessible from the navigation bar, things are similarly non-obvious. For example, in "Start Here", you have to read 22 paragraphs down to understand 80k's explicit prioritization of x-risk over other causes. In the "Career Guide", it's about halfway down the page. On the 1-1 advising tab, you have to go down to the FAQs at the bottom of the page, and even then it only refers to "pressing problems" and links back to the research page. And on the research page itself, the issue is that it doesn't give a sense that the organization strongly recommends AI over the rest, or that x-risk gets the lion's share of organizational resources.

I'm not trying to be nitpicky, but trying to convey that a lot of less engaged EAs (or people who are just considering impactful careers) are coming in, reading the website, and maybe browsing the job board or thinking of applying for advising - without realizing just how convinced on AGI 80k is (and correspondingly, not realizing how strongly they will be sold on AGI in advisory calls). This may not just be less engaged EAs either, depending on how you define engaged - I've been reading Singer for two decades; have been a GWWC pledger since 2014; and whenever giving to GiveWell, have actually taken the time to examine their CEAs and research reports. And yet until I actually moved into direct EA work via the CE incubation program, I didn't realize how AGI-focused 80k was.

People will never get the same mistaken impression when looking at Non-Linear or Lightcone or BERI or SFF. I think part of the problem is (a) putting up a lot of causes on the problems page, which gives the reader the impression of a big tent/broad focus, and (b) having normie aesthetics (compare: longtermist websites). While I do think it's correct and valuable to do both, the downside is that without more explicit clarification (e.g. what Non-Linear does, just bluntly saying on the front page in 40-point font: "We incubate AI x-risk nonprofits by connecting founders with ideas, funding, and mentorship"), the casual reader of the website doesn't understand that 80k basically works on AGI.

Hi Gisele,

At CEARCH (https://exploratory-altruism.org/), we generally agree that combating non-communicable chronic diseases is highly cost-effective (e.g. salt reduction policies to combat high blood pressure, sugary drink taxes to combat obesity, as well as things like trans fat bans or alcohol taxes).

As part of our grantmaking work, we're on the lookout for charities/NGOs working on these issues (or more generally on advocating for health policy, and helping governments implement such policies). If you are aware of any organizations in this space, do let us know!

Hi Jamie,

For (1), I agree with 80k's approach in theory - it's just that cost-effectiveness is likely heavily driven by the cause-level impact adjustment, so you'll want to model that in a lot of detail.

For (2), I think just declaring up front what you think the most impactful cause(s) are, and what you're focusing on, is pretty valuable? And I suppose when people do apply/email, it's worth making that sort of caveat as well. For our own GHD grantmaking, we do try to declare on our front page that our current focus is NCD policy, and if someone approaches us about grants, we make clear what our current grant cycle is focused on.

Hope my two cents is somewhat useful!

I think you're right in pointing out the limitations of the toy model, and I strongly agree that the trade-off is not as stark as it seems - it's more realistic that we model it as a delay from applying to EA jobs before settling for a non-EA job (and that this won't be like a year or anything).

However, I do worry that the focus on direct work means people generally neglect donations as a path to impact, and so the practical impact of deciding to go for an EA career is that people decide not to give. An unpleasant surprise I got from talking to HIP and others in the space is that the majority of EAs probably don't actually give. Maybe it's the EA boomer in me speaking, but it's a fairly different culture compared to 10+ years ago, when being EA meant you bought into the drowning child arguments and gave 10% or more to whatever cause you thought most important.

I apologize if we're talking at cross purposes, but the original idea I was trying to get across is that when valuing additional talent from community building, there is the opportunity cost of a non-EA career where you just give. So basically you're comparing (a) the value of money from that earning to give vs (b) the value of the same individual trying for various EA jobs.

The complication is that (i) the uncertainty of the individual really following through on the intention to earn to give (or going into an impactful career) applies to both branches; however, (ii) the uncertainty of success only applies to (b). If they really try to earn to give, they can trivially succeed (e.g. give 10% of the average American salary - so maybe $5k, ignoring adjustments for lower salaries for younger individuals and higher salaries for typically elite-educated EAs). However, if they apply to a bunch of EA jobs, they aren't necessarily going to succeed (i.e. they aren't necessarily going to be better than the counterfactual hire). So ultimately we're comparing the value of an additional $5k annual donation vs ~10 additional applications of average quality to various organizations (this depends on how many organizations an applicant will apply to per annum - very uncertain).
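To make this comparison concrete, here is a minimal expected-value sketch (the symbols are illustrative placeholders I'm introducing, not figures from the discussion): let p be the probability that the individual follows through on their altruistic intentions at all, q the probability that a round of EA job applications lands them a role in which they beat the counterfactual hire, and V_job the annual value of filling that role. We are then roughly comparing:

$$\underbrace{p \times \$5\text{k/year}}_{\text{(a) earning to give}} \quad \text{vs} \quad \underbrace{p \times q \times V_{\text{job}}}_{\text{(b) applying to EA jobs}}$$

Since p appears in both branches, it drops out of the comparison, while q discounts only branch (b) - which is exactly the asymmetry between (i) and (ii) above.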

I also can't speak with certainty as to how organizations will choose, but my sense is that (a) smaller EA organizations are funding constrained and would prefer getting the money; while (b) larger EA organizations are more agnostic, because they have both more money and the privilege of getting the pick of the crop for talent (cf. high demand for GiveWell/OP jobs).

1) I agree that policy talent is important but comparatively scarce, even in GHD. It's the biggest bottleneck that Charity Entrepreneurship is facing on incubating GHD policy organizations right now, unfortunately.

5) I don't think it's safe to assume that the new candidate is better than your current candidate? While I agree that's fine for dedicated talent pipeline programmes, I'm not confident of making this assumption for general community building, which is by its nature less targeted and typically more university/early-career oriented.

I think this is a legitimate concern, but it's not clear to me that it outweighs the benefits, especially for roles where experience is essential.

Hi Chris,

Just to respond to the points you raised:

(1) With respect to prioritizing India/developing country talent, it probably depends on the type of work (e.g. direct work in GHD/AW suffers less from this), but in any case, the pool of talent is big, and the cost savings are substantial, so it might be reasonable to go this route regardless.

(2) Agreed that it's challenging, but I guess it's a chicken vs egg problem - we probably have to start somewhere (e.g. HIP and others do good work in the space, we understand).

(3) For 80k, see my discussion with Arden above - AGB's views are also reasonably close to my own.

(4) On Rethink - to be fair, our next statement after that sentence is "This objection holds less water if one is disinclined to accept OP's judgement as final." I think RP's moral weights work, especially, is very valuable.

(5) There's a huge challenge over valuing talent, especially early-career talent (especially if you consider the counterfactual of earning to give at a normal job). One useful heuristic is: Would the typical EA organization prefer an additional $5k in donations (from an early-career EA giving 10% of their income annually) or 10 additional job applications to a role? My sense from talking to organizations in the space is that (a) the smaller orgs are far more funding constrained, so prefer the former, and (b) the bigger orgs are more agnostic, because funding is less of a challenge and there is a lot of demand for their jobs anyway.

(6) I can't speak for OP specifically, but I (and others in the GHD policy space I've spoken to) think that Eirik is great. And generally, in GHD, the highest impact work is convincing governments to change the way they do things, and you can't really do that without positions of influence.
