Hi!
I'm currently (Aug 2023) a Software Developer at Giving What We Can, helping make giving significantly and effectively a social norm.
I'm also a forum mod, which, shamelessly stealing from Edo, "mostly means that I care about this forum and about you! So let me know if there's anything I can do to help."
Please have a very low bar for reaching out!
I won the 2022 donor lottery; happy to chat about that as well.
I'm probably less informed than you are, but depending on what you mean by "sources of funders" I disagree.
I think if you can demonstrate getting valuable results and want funding to scale, people will be happy to fund you. My impression is that several people influencing >=6 digit allocations are genuinely looking for projects to fund that can be even more effective than what they're currently funding.
I'm fairly confident that if anyone hosted a conference or online program, got good results, had a clear theory of change with measurable metrics, and gradually asked for funding to scale, people would be happy to fund that.
I agree with 3 of your points but I disagree with the first one:
EA atm doesn't seem very practical to take action on except for donating and/or applying to a small set of jobs funded mostly by one source.
My guess is this is reducing the number of "operator" types that get involved and selects for cerebral philosophising types.
It's really hard for me to tell if this is a good or bad thing, especially because I think it's possible that things like animal welfare or GCR reduction can plausibly be significantly more effective than more obviously good "practical, testable" work (which is the reason to favour "R&D" mentioned previously).
I heard about 80k and CEA first, but it was the practical, testable AIM charities that sparked my interest; only then did I develop more of an interest in the implications of AI and GCRs.
Not really a disagreement, but I think it's great that there's cross-pollination, with people getting into AIM from 80k and CEA, and into 80k and CEA from AIM
I do think the comment would have been much better received if it was more concise and simple to read (regardless of how it was written), see The value of content density
I feel I'm not informed enough to reply to this, and it feels weird to speculate about orgs I know very little about, but I worry that the people most informed won't reply here for various reasons, so I'm sharing some thoughts based on what very little information I have (almost entirely from reading posts on this forum, and of course speaking only for myself). This is all very low confidence.
"Effective" Altruism implies a value judgement that requires strong evidence to back up
I think if you frame it as a question, something like "We are trying to do altruism effectively, this is the best that we're able to do so far", it doesn't require that much evidence (for better or worse)
"Ambitious Impact" implies more speculative, less easy to measure activities in pursuit of even higher impact returns.
That is not clear to me: one can be very ambitious while working on things that are very easy to measure. For example, people going through AIM's "founding to give" program seem to have a goal that's easy to measure: money donated in 10 years[1]. But I still think of them as clearly "ambitious" if they try to donate millions. Google defines ambitious as "having or showing a strong desire and determination to succeed".
My understanding is that Open Philanthropy split from GiveWell because of the realisation that there was more marginal funding required for "Do-gooding R&D" with a lower existing evidence base.
That is not my understanding; reading their public comms, I thought OP split from GiveWell to better serve 7-figure donors: "Our current product is a poor fit with the people who may represent our most potentially impactful audience." (I assumed this implicitly meant that Moskovitz and Tuna could use more bespoke recommendations.)
Why do we need "Do-Gooding R&D"?
I agree with this! I liked Finding before funding: Why EA should probably invest more in research, but I expect that the "R&D" work itself might be tricky to do in practice. Still, I'm very excited about GiveWell's RCT grants
IMO AIM has outcompeted CEA on a number of fronts (their training is better, their content (if not their marketing) is better, they are agile and improve over time). Probably 80% of the useful and practical things I've learned about how to do effective altruism, I've learned from them.
I agree that AIM is more impressive than CEA on many fronts, but I think they mostly have different scopes.[2] My impression is that CEA doesn't focus much on specific ways to implement specific approaches to "how to do effective altruism", but on things like "why to do effective altruism" and "here are some things people are doing/writing about how to do good, go read/talk to them for details".
If not for CEA, I think I probably wouldn't have heard of AIM (or GWWC, or 80k, or effective animal advocacy as a whole field). And if I had only interacted with AIM, I'm not sure if I would have been exposed to as many different perspectives on things like animal welfare, longtermism, and spreading positive values[3]
The AIM folks I've spoken to are frustrated that their results - based on exploiting cost-effective high-evidence base interventions - are used to launder the reputation of OP funded low evidence base "Do-gooding R&D."
I understand the frustration, especially given the brand concerns below and because I think many AIM folks think that a lot of the assumptions behind longtermism don't hold.[4] But I don't know if this "reputation laundering" is actually happening that much:
If we think about EA brand as a product, I'd guess we're in "The Chasm" below as the EA brand is too associated with the "weird" stuff that innovators are doing to be effectively sold to lower risk tolerance markets.
I personally believe that the EA brand is in a pretty bad place, at the moment often associated with things like FTX, TESCREAL, and OpenAI, and that this is a bigger issue. I think EA is seen as a group of non-altruistic people, not as a group of altruistic people who are too "weird". (But I have even lower confidence in this than in the rest of this comment.)
AIM should be the face of EA and should be feeding in A LOT more to general outreach efforts.
Related to the point above, it's not clear to me why AIM should be the face of "EA" instead of any other "doing the most good" movement (e.g. Roots of Progress, School for Moral Ambition, Center for High Impact Philanthropy, ...). I think none of these would make a lot of sense, and don't see why "AIM being the face of AIM" would be worse than AIM being the face of something else.
You can see in their 2023 annual review that they did deeply consider building a new community "but ultimately feel that a more targeted approach focusing on certain careers with the most impact would be better for us".[5]
In general, I agree with your conclusions on wishing for more professionalization, and increasing the size of the pie (but it might be harder than one would think, and it might make sense to instead increase the number of separate pies)
I imagine positive externalities from the new organizations will also be a big part of their impact, but I expect the main measure will be amount donated.
And that this does not say that much about CEA, as imho AIM is more impressive than the vast majority of other projects.
AIM obviously does a lot for animal welfare, but I don't think they focus on helping people reason about how to prioritize human vs non-human welfare/rights/preferences.
I can't link to the quote, so I'll copy-paste it here.
JOEY: Yeah, I basically think I don't find a really highly uncertain, but high-value expected value calculation as compelling. And they tend to be a lot more concretely focused on what's the specific outcome of this? Like, okay, how much are we banking on a very narrow sort of set of outcomes and how confident are we that we're going to affect that, and what's the historical track record of people who've tried to affect the future and this sort of thing. There's a million and a half weeds and assumptions that go in. And I think, most people on both sides of this issue in terms of near-term causes versus long-term causes just have not actually engaged that deeply with all the different arguments. There's like a lot of assumptions made on either side of the spectrum. But I actually have gotten fairly deeply into this. I had this conversation a lot of times and thought about it quite thoroughly. And yeah, just a lot of the assumptions don't hold.
Linked from this post.
Full quote:
Building a new community (e.g. Impact Now):
An option many people have been asking us about in the wake of the struggles of the EA movement is if CE would consider building out a movement that brings back some of the strengths of EA 1.0. We considered this idea pretty deeply but ultimately feel that a more targeted approach focusing on certain careers with the most impact would be better for us. The logistical and time costs of running a movement are quite large and it seems as though often a huge % of the movement's impact comes from a small number of actors and orgs. Although we like some things the EA movement has brought to the table when comparing it to more focused uses (e.g. GiveWell focuses more on GiveWell's growth), we have ended up more pessimistic about the impact of new movements.
I don't think they linked to their 2024 annual report on the forum, so this might be different now.
Yes I think we agree, but I also think that it's not a crux of the argument.
As Neel Nanda noted, whatever vaguely reasonable method you use to calculate impact will result in attributing a lot of impact to life-saving interventions.
I think there is a valuable concern about Triple counting impact in EA and I agree that there is a case for Shapley values being better than counterfactuals[1].
What I really don't agree with is that we should let someone choke and die just because otherwise Henry Heimlich would get the credit anyway. The goal is not to get the most credit or the highest Shapley value, but to help others; I don't see what Prof. Wenar proposes as a better alternative to GiveWell.
I disagree that Shapley values are better than counterfactuals in most cases, but I think it's a reasonable stance to have.
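To make the contrast concrete, here is an illustrative sketch (my own toy example, not from the linked posts, with a hypothetical `lives_saved` value function): when a donor and a charity are each necessary to save a life, naive counterfactual attribution credits each party with the full life saved (the "triple counting" worry), while Shapley values split the credit so that the totals add up.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p in this join order
            shapley[p] += value(frozenset(coalition)) - before
    n_orders = factorial(len(players))
    return {p: total / n_orders for p, total in shapley.items()}

# Hypothetical scenario: donor and charity are both necessary to save 1 life.
def lives_saved(coalition):
    return 1.0 if {"donor", "charity"} <= coalition else 0.0

credit = shapley_values(["donor", "charity"], lives_saved)
# Counterfactual attribution would credit each with 1.0 (2.0 lives "caused"
# in total); Shapley attribution gives each 0.5, summing to the 1.0 life saved.
print(credit)
```

The efficiency property (credit sums to the total value created) is exactly what makes Shapley values appealing for the multiple-counting concern, though the exponential number of join orders is one reason they are harder to apply in practice than simple counterfactuals.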
I don't see how asking for higher standards for criticism makes EA defenseless against "bullshit."
I actually would argue the opposite: if we keep encouraging and incentivizing any kind of criticism, and tolerate needlessly acrimonious personal attacks, we end up in an environment where nobody proposes anything besides the status quo, and the status quo becomes increasingly less transparent.
Three recent examples that come to mind:
I think Holly_Elmore herself is another example: she used to write posts like "We are in triage every second of every day", which I think are very useful for making EA less "bullshit", but she now mostly doesn't post on this forum, partly because of the low-quality but costly criticism she receives.
I largely agree with the last section of this comment from Aaron Gertler written one year ago:
The Forum has a hard balance to strike:
- I think the average comment is just a bit less argumentative / critical than would be ideal.
- I think the average critical comment is less kind than would be ideal.
- I want criticism to be kind, but I also want it to exist, and pushing people to be kinder might also reduce the overall quantity of criticism. I'm not sure what the best realistic outcome is.
I personally fear that the current discussion environment on this forum errs too much in the "unkind criticism" direction, and I see at least two large downsides:
I used to think that accepting callousness was required for technical excellence, e.g. after reading how people like the famous software engineer Linus Torvalds used to communicate. After seeing many extremely competent people communicate criticism in a professional and constructive manner, I have completely changed my mind. (Torvalds himself also apologised and changed his communication style years ago.)
I believe that a culture of more constructive and higher-quality criticism would encourage more discussion overall, not less, especially from experienced professionals who have different perspectives from mainline EA thinking.
See also this paragraph from the Charity Entrepreneurship handbook:
Writing as myself, not as a moderator
I'm not sure if he meant Good Ventures, Open Philanthropy, or some other group
Thanks for writing this! I would be curious to know what you think about this 4 years later, and now that interest rates are much higher.