@Elizabeth and I recently recorded a conversation of ours that we're hoping becomes a whole podcast series. The original premise is that we were trying to convince each other about whether we should both be EAs or both not be EAs. (She quit the movement earlier this year when she felt that her cries of alarm kept falling on deaf ears; I never left.)
Audio recording (35 min)
Some highlights:
- @Elizabeth's story of falling in love with, trying to change, and then falling out of love with Effective Altruism. That middle part draws heavily on past posts of hers, including "EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem" and "Truthseeking is the ground in which other principles grow".
- I told Elizabeth that I would also have left when she did (if I had had her experience).
- I claimed that EA is ready for a Renaissance.
- We both agreed that I should 'check the integrity of Hogwarts' by challenging EA to live up to my standards of integrity, and that I should also leave the movement if I give up on EA meeting that challenge (as Elizabeth did).
If you like the podcast or want to continue the conversation, tell us about it in the comments (or on LW if you want to make sure Elizabeth sees it), and consider donating toward future episodes.
Thanks for the interesting conversation! Some scattered questions/observations:
I doubt that Elizabeth -- or a meaningful number of her potential readers -- are considering whether to be associated with anti-vegan advocates on Facebook or any movement related to them. I read the discussion as mainly about epistemics and integrity (these words collectively appear ~30 times in the transcript) rather than object-level harms.
I recognize there may be object-level disagreement here as to whether a given presentation is false, misleading, or poses a risk of meaningful harm.
Yes, I would even say that the original comment (which I intend to reply to next) seems to suffer from ends-justify-the-means logic as well (e.g. prioritizing "shutting up and multiplying" such as "shipping resources to the best interventions" over "being honest about health effects").
I might say kidney donation is a moral imperative (or good) if we consider only the effects on your welfare and the effects on the welfare of the beneficiaries. But when you consider indirect effects, things are less clear. There are effects on other people, nonhuman animals (farmed and wild), your productivity and time (which affect your EA work or income and donations), your motivation, and your values. For an EA, the effects on productivity, time, motivation, and values seem most important.
EDIT: And the same goes for veganism.
What do you mean by moral imperative?
I notice that I "believe in" minimum moral standards (like a code of conduct or laws) but not what I call moral imperatives (in X situation, I have no choice if I want to remain in good moral standing).
I also don't believe in requiring organ donation as part of a minimum moral standard, which is probably related to my objection to the concept of "moral imperative".
I like the distinction of cause-first vs member-first; thanks for that concept. Thinking about that in this context, I'm inspired to suggest a different cleavage that works better for my worldview on EA: Alignment/Integrity-first vs. Power/Impact-first.
I believe that for basically all institutions in the 21st century, alignment should be the highest priority, and power should only become the top priority to the extent that the institution believes that alignment at that power level has been solved.
By this splitting, it seems clear that Elizabeth's reported actions are prioritizing alignment over impact.
Would you sometimes advocate for prioritizing impact (e.g. "shutting up and multiplying" by shipping resources toward the best interventions) over alignment within the EA community?
I believe that until we learn how to prioritize Alignment over Impact, we aren't ready for as much power as we had at SBF's height.
Thanks for this; I agree that "integrity vs impact" is a more precise cleavage point for this conversation than "cause-first vs member-first".
Unhelpfully, I'd say it depends on the tradeoff's details. I certainly wouldn't advocate going all-in on one to the exclusion of the other. But to give one example of the way I think, I'd currently prefer that the marginal $1M be given to EA Funds' Animal Welfare Fund rather than used to establish a foundation to investigate and recommend improvements to EA's epistemics.
It seems that I think the EA community has a lot more "alignment/integrity" than you do. This could arise from empirical disagreements, different definitions of "alignment/integrity", and/or different expectations we place on the community.
For example, the evidence Elizabeth presented of a lack of alignment/integrity in EA is that some vegan advocates on Facebook incorrectly claimed that veganism doesn't have tradeoffs, and weren't corrected by other community members. While I'd prefer people say true things to false things, especially when they affect people's health, this just doesn't feel important enough to update upon. (I've also just personally never heard any vegan advocate say anything like this, so it feels like an isolated case.)
One thing that could change my mind is learning about many more cases to the point that it's clear that there are deep systemic issues with the community's epistemics. If there's a lot more evidence on this which I haven't seen, I'd love to hear about it!
Comment cross-posted on LessWrong
I've begun listening to this podcast episode. Only a few minutes in, I feel a need to clarify a point of contention over some of what Elizabeth said:
She also mentioned that she considers herself to have caused harm by propagating EA. It seems like she might be too hard on herself. Even if she considers that degree of self-criticism appropriate, the problem could be what her conviction implies. There are clearly still some individual, long-time effective altruists she respects, like Tim, even if she's done engaging with the EA community as a whole. If that weren't true, I doubt this podcast would've been launched in the first place. Having been so heavily involved in the EA community for so long, and still being so involved in the rationality community, she may know hundreds of people, friends, who either are still effective altruists or used to be but no longer are. She offers this as a main example of the sort of harm caused by EA propagating itself as a movement.
Hearing that made me think about a criticism of the organization of EA groups for university students made last year by Dave Banerjee, former president of the student EA club at Columbia University. His was one of the most upvoted criticisms of such groups, and how they're managed, ever posted to the EA Forum. While Dave apparently reached some of the same conclusions as Elizabeth about the problems with evangelical university EA groups, he did so with a much quicker turnaround than she did. He made that major update while still a university student, while it took her several years. I don't mention that to imply that she was necessarily more naive and/or idiotic than he was. From another angle, given that he was propagating a much bigger EA club than Elizabeth ever did, at a time when EA was being driven to grow much faster than when Elizabeth was more involved with EA movement/community building, Dave could easily have been responsible for causing more harm. By that logic, perhaps he has been an even more naive idiot than she ever was.
I've known other university students, formerly effective altruists helping build student EA clubs, who quit because they also felt betrayed by EA as a community. Given that EA won't be changing overnight, in spite of whoever considers it imperative that some of its movement-building activities stop, there will be teenagers in the coming months who come through EA with a similar experience. They're teenagers who may be chewed up and spat out, feeling ashamed of their complicity in causing harm by propagating EA as well. Some may not have even graduated high school yet, and within a year or two they may also become those effective altruists, then former effective altruists, whom Elizabeth anticipates she would call naive idiots. Yet those are the very young people Elizabeth would seek to protect from harm by warning them against joining EA in the first place. It's not evident that there's any discrete point at which they cease being those who should heed her warning and instead become naive idiots to chastise.
Elizabeth also mentions how she became introduced to EA in the first place.
About a year ago, Scott Alexander wrote a post entitled In Continued Defense of Effective Altruism. While I'm aware he wrote some later posts responding to criticisms of that one, I'm guessing he hasn't abandoned its thesis in its entirety. Meanwhile, as the author of one of the most popular blogs, if not the most popular, associated with either the rationality or EA communities, Scott Alexander may still be drawing more people into the EA community than almost any other writer. If that means he may be causing more harm by propagating EA than almost any other rationalist still supportive of EA, then, at least in the particular way Elizabeth has in mind, Scott may right now be one of the most naive idiots in the rationality community. The same may be true of many of the effective altruists Elizabeth got to know in Seattle.
A popular refrain among rationalists is: speak truth, even if your voice trembles. Never mind the internet; Elizabeth could literally go meet the hundreds of effective altruists or rationalists she has known in the Bay Area and Seattle, and tell them that for years they, too, were naive idiots, or that they're still being naive idiots. Doing so could be how she prevents them from causing harm. By not being willing to say so, she may counterfactually be causing much more harm, by saying and doing so much less to stop EA from propagating than she knows she can.
Whether it be Scott Alexander, or the many friends of hers who have been or still are in EA, or those like Dave Banerjee who've helped propagate university student groups, or the young adults who will come and go through EA university groups by the year 2026, there are hundreds of people Elizabeth should be willing to call, to their faces, naive idiots. It's not a matter of whether she, or anyone, expects that to work as a convincing argument. That's the sort of perhaps cynical and dishonest calculation she, and others, rightly criticize in EA. She should tell all of them that, if she believes it, even if her voice trembles. If she doesn't believe it, that merits an explanation of how she considers herself to have been a naive idiot but so many of them not to have been. If she can't convincingly justify, not just to herself but to others, why she was exceptional in her naive idiocy, then perhaps she should reconsider her belief that even she was a naive idiot.
In my opinion, neither she nor so many other former effective altruists were just naive idiots. Whatever mistakes they made, epistemically or practically, I doubt the explanation is that simple. The operationalization of "naive idiocy" here doesn't seem like a decently measurable function of, say, how long it took someone to recognize just how much harm they were causing by propagating EA, and how much harm they did cause in that period. "Naive idiocy" doesn't seem to be a coherent enough explanation for why so many effective altruists got so much, so wrong, for so long.
I suspect there's a deeper crux of disagreement here, one that hasn't been pinpointed yet by Elizabeth or Tim. It's one I might be able to discern if I put in the effort, though I don't yet have a sense of what it might be. I'm in a position to try, given that I still consider myself an effective altruist, though I too ceased being an EA group organizer last year, on account of not being confident in helping grow the EA movement further, even as I've continued participating in it for what I consider its redeeming qualities.
If someone doesn't want to keep trying to change EA for the better, and instead opts to criticize it to steer others away from it, it may not be true that they were just naive idiots before. If they can't substantiate their formerly naive idiocy, then to refer to themselves as having only been naive idiots, and by extension imply so many others they've known still are or were naive idiots too, is neither true nor useful. In that case, if Elizabeth would still consider herself to have been a naive idiot, that isn't helpful, and maybe it is also a matter of her, truly, being too hard on herself. If you're someone who has felt similarly, but you couldn't bring yourself to call so many friends you made in EA a bunch of naive idiots to their faces because you'd consider that false or too hard on them, maybe you're being too hard on yourself too. Whatever you want to see happen with EA, us being too hard on ourselves like that isn't helpful to anyone.
This comment that I've cross-posted to LessWrong has quickly accrued negative karma. This comment is easy to misunderstand as I originally wrote it, so I understand the confusion. I'll explain here what I explained in an edit to my comment on LW, so as to avoid the confusion here on the EA Forum that I incurred there.
I wrote this comment off the cuff, so I didn't put as much effort into writing it as clearly or succinctly as I could, or maybe should, have. So, I understand how it might read as a long, meandering nitpick of a few statements near the beginning of the podcast episode, without me having listened to the whole episode yet. Then, I call a bunch of ex-EAs naive idiots, just as Elizabeth referred to herself as at least formerly being a naive idiot, then say even future effective altruists will be proven to be idiots, and that those still propagating EA after so long, like Scott Alexander, might be the most naive and idiotic of all. To be clear, I also included myself, so this reading would also imply that I'm calling myself a naive idiot.
That's not what I meant to say. I would downvote that comment too. What I'm actually raising is the question of whether someone like Tim, and by extension someone like me in the same position, who has also mulled over quitting EA, is still being a naive idiot, on account of not yet having updated to the conclusion Elizabeth has already reached.
Thank you for sharing this, Timothy. I left a long comment on the LW version of the post. I'm happy to talk about this more with you or Elizabeth — if you're interested, you're welcome to reach out to me directly.