This is a post written by David Thorstad, a philosophy professor who maintains Reflective Altruism, a blog criticizing various tenets of effective altruism, as part of a series on human biodiversity (HBD), a modern iteration of so-called race science. HBD, of course, isn't typical fare for EA or any of its championed causes. Yet it has, to much controversy over the years, been recognized as a subject of interest among prominent thinkers associated with the effective altruism or rationality communities, or other writers affiliated with them. This latest post in Thorstad's series provides a critical overview of @Scott Alexander's history of engagement with that body of ideas, both on his current blog, Astral Codex Ten (ACX), and before that on his previous blog, Slate Star Codex (SSC).
One thing I will say here that I think shouldn't be controversial:
At the very least, the Cade Metz NYT article on Scott fairly clearly did not give readers a misleading impression (whether or not it gave them that impression in a fair way): the article does not state "Scott Alexander is a hardcore white nationalist", or even, in my view, really give people that impression. What it does give the reader as an impression is that he is highly skeptical of feminism and social justice, that his community of followers includes white nationalists, and that he is sympathetic to views on race on which Black people are genetically intellectually inferior. All these things are true, as anyone who reads Thorstad's blogpost can verify. But more importantly, while I understand not everyone reads Scott and his blog commenters religiously, all these things are fairly obviously true if you've followed Scott's writing closely. (As I have; I used to like it a great deal, before disagreement on exactly this stuff very gradually soured me on it.*) I think it is a failure of community epistemics that a lot of people jumped to "this is a smear" before really checking, or suspending judgment.
*I actually find this whole topic very emotionally upsetting and confusing, because I think I have a very similar personality to Scott and other rationalists, and seeing them endorse what to me is fairly obvious evil (I'm talking about reactionary political projects here, not any particular empirical beliefs) makes me worried that I am bad too. Read everything I say on this thread with this bias in mind.
I identify with your asterisk quite a bit. I used to be much more strongly involved in rationalist circles in 2018-2020, including the infamous Culture War Thread. I distanced myself from it around 2020, at the time of the NYT controversy, mostly just remaining on Rationalist Tumblr. (I kind of got out at the right time, because after I left everyone moved to Substack, which positioned itself against the NYT by personally inviting Scott, and which was seemingly designed to encourage every reactionary tendency of the community.)
One of my most salient memories of the alt-right infestation in the SSC fandom was this comment by a regular SSC commenter with an overtly antisemitic username, bluntly stating the alt-right strategy for recruiting ~rationalists:
There isn't really much more to say; he essentially spilled the beans, but in front of an audience that prides itself so much on "high-decoupling" that it can't wrap its mind around the idea that overt neo-Nazis might in fact be bad people who abuse the social norms of discussion to their advantage, even when said neo-Nazis are openly bragging about it to their faces.
If one is a rationalist who seeks to raise the sanity waterline and widely spread the tools of sound epistemology, and even more so if one is an effective altruist who seeks to expand the moral circle of humanity, then there is zero benefit to encouraging discussion of the currently unknowable etiology of a correlation between two scientifically dubious categories, when the overwhelming majority of people writing about it don't actually care about it, and only seek to use it as a gateway to rehabilitating, on explicitly epistemologically subjectivist and irrationalist grounds, a pseudoscientific concept universally rejected by biologists and geneticists, in order to advance a discriminatory-to-genocidal political project.
The author spends no time discussing the object level: he just points at examples where Scott says things that are outside the Overton window, but he doesn't give factual counterarguments showing where what Scott says is supposed to be false.
Small note (while not endorsing the NGO): I struggle to see how "Project Prevention" could be considered a "slide into open Eugenics" just because they wanted to move into Haiti. Are EA family planning organisations similar because they want to work in Africa? Of course not.
Looking at their clientele on Wikipedia: "As of May 2022, out of 7,833 clients it had paid: 4,791 (61.3%) were white; 1,626 (20.8%) black; 830 (10.6%) Hispanic; 572 (7.3%) other." That seems like a fairly representative mix among the groups of people they work with.
The founder has been interviewed on Radiolab, and the Guardian wrote a fairly reasonable, even-handed article on it (ages ago) while mentioning that her work has been "compared with Nazi Eugenics".
I'd say an obvious difference is that EA family planning orgs aren't doing permanent sterilization.
I'd also say that the reason Thorstad is upset is probably mostly because he sees Scott's support for the org as "let's get rid of drug addicts' children from the next generation because they have bad genes", and worries (rightly in my view) that this is the sort of logic the Nazis used to justify genocide of the "wrong sort" of people, and that if HBD becomes widely believed, people might turn this logic against Black people. Scott could (and would) reasonably protest that there is a big difference between being prepared to use violence for eugenic goals and merely incentivizing people towards them in non-coercive ways. But if you apply this to race rather than drug addicts, "we should try and make there be fewer Black people, non-coercively" is still Nazi and awful.
This sort of eugenic reasoning doesn't actually seem to be what's going on with Project Prevention itself, incidentally. From the Guardian article, it seems like the founder genuinely values the children of drug addicts as human beings, given that she adopted some of them, and is just trying to stop them from being hurt. From that point of view, I'd say she is probably a bit confused though: it's not clear most children of addicts have lives that are worse than nothing, even though they will be worse than average. So it's not clear it actually helps them to prevent them from being born.
I agree with your comment about Scott's support for the org, but I think Thorstad unnecessarily sullies and misrepresents the org along the way. Why not just explain what the org does and then describe Alexander's response to it, since the focus is on Alexander?
Like you say, regardless of what you think about the org's methods, it isn't an org with eugenic intentions, and it shouldn't be tarred with that brush in the article.
Again, to be clear, I probably don't agree with what the org does, but I have a lot of compassion for its founder, because she has genuinely given much of her life to looking after children others don't want, and this org came out of trying to solve that problem.
Puzzled by your last paragraph. The Guardian article explicitly says that in the US their work has been compared to Nazi eugenics.
You're correct, I missed that! Have edited. The point I was trying to make is that it was a fairly even-handed article, coming from a fairly left-wing source, so it's hardly a consensus.
If some of the quotes from Scott Alexander seem particularly poorly reasoned, I would encourage readers to click through the original source. Some examples:
From Thorstad:
Original quote:
From Thorstad:
Original quote:
Don't see a significant difference.
I do, reading Thorstad I thought Alexander
Reading the original I see that neither is true: the Murray pick was absurdist humor, and the Zuckerberg thing was that good things are good even if Zuckerberg does them.
"the Murray pick was absurdist humor" What makes you think that? I would feel better if I thought that was true.
Honest question, have you read the linked post?
Maybe absurdist humor is not the right description, but it's very clearly not meant to be a serious post.
Having now read the whole thing, not just the bit you quoted originally, I think it is sort of a joke but not really: a funny rendering of what his real ideological views actually are, exaggerated a bit for comic effect. I don't think Thorstad was majorly in the wrong here, but maybe he could have flagged this a bit.
I'll let readers decide, just adding some reactions at the time for more context:
Fair enough, this does make me move a bit further in the "overall a joke" direction. But I still think the names basically match his ideological leanings.
Do you mean Bernie Sanders, Peter Thiel, or "Anonymous Mugwump"? I can't think of an ideological leaning these three have in common, but I don't know much about Mugwump.
Thiel and Sanders don't have much in common, but Scott has stuff in common with each of them. (With Thiel he shares broadly pro-market views, skepticism of social justice and feminism, an interest in futurism and progress, and possibly pro-HBD views, although I don't know what Thiel thinks about HBD; with Sanders he shares redistributive economic views and an aversion to blaming the poor for being poor.)
Then I'm sure he has stuff in common with Mugwump as well (and with you, me, and Thorstad).
My reading of the post (which is contestable) is that he chose the people as a sort of joke along the lines of "here is a controversial or absurdly in-group person I like on this issue". I can't prove that reading is correct, but I don't really see another that makes sense of the post. Some of the choices, Yglesias for example, are just too boring for the joke to simply be that the list is absurd.
I think it, like much of Scott's work, is written with a "micro-humorous" tone but reflects to a significant extent his genuine views. In the case you quoted, I see no reason to think it's not his genuine view that building Trump's wall would be a meaningless symbol that would change nothing, with all that implies of scorn toward both #BuildTheWall Republicans and #Resistance Democrats.
As another example, consider these policy proposals:
Months later, he gave this reply to an anonymous ask on the subject:
Does Scott actually believe the Achaemenid Empire should be restored with Zoroastrianism as its state religion? No, "that was *kind of* joking, and [he doesn't] know anything about foreign policy, and this is probably the worst idea ever". Does this still reflect a coherent set of (politically controversial) beliefs about foreign policy which he clearly actually holds (e.g. that "Bashar al-Assad [...] kept the country at peace" and that Syrian oppositionists were all "Al-Qaeda in disguise"), beliefs that are also consistent with him picking Tulsi Gabbard as Secretary of State in his "absurdist humor"? Yeah, it kinda does. The same applies, I think, to the remainder of his post.
EDIT: If you’re inclined to downvote this comment, I’d also like to know where your crux is 😘
If you’re inclined to defend Scott Alexander, I’d like to figure out where the crux is. So I’ll try and lay out some standards of evidence that I would need to update my own beliefs after reading this article.
If you believe Scott doesn’t necessarily believe in HBD, but does believe it’s worth debating/discussing, why has he declined to explicitly disown or disavow the Topher Brennan email?
If you believe Scott doesn’t believe HBD is even worth discussing, what does he mean by essentially agreeing with the truth of Beroe’s final paragraph in his dialogue on ACX?
For both, why would he review Richard Hanania’s book on his blog without once mentioning Hanania’s past and recent racism? (To pre-empt ‘he’s reviewing the book, not the author’, the review’s conclusion is entirely about determining Hanania’s motivation for writing it)
If you believe Scott has changed his positions, why hasn’t he shouted from the rooftops that he no longer believes in HBD / debating HBD? This should come with no social penalty.
I would set Julia Wise’s comments to Thorstad in this article as the kind of statement I would expect from Scott if he did not believe in HBD and/or the discussion of HBD.
I imagine people inclined to defend Scott are often a) people who themselves agree with HBD or b) people who don't really have an opinion on it (or maybe even disagree with it)* but think that Scott arrived at his "belief" (i.e. >50% credence) in HBD by honest inquiry into the evidence to the best of his ability, and think that it is never wrong to form empirical beliefs in this way. I don't think people could believe Scott rejects HBD if they actually read him at all closely. (Though he tends to think and talk in probabilistic terms rather than full acceptance/rejection. As you should!) In the Hanania review he explicitly says he puts "moderate probability" on some HBD views, which isn't that different from what he said in the Brennan email.
As to WHY people think a) and b), I'd say it is a mixture of (random order, not order of importance):
1) People like Scott and that biases them.
2) People want to defend a prominent rationalist/EA for tribal reasons.
3) People have a (genuinely praiseworthy in itself, in my view) commitment to following the evidence where it leads, even when it leads to taboo conclusions, and believe that Scott's belief in HBD (and other controversial far-right-aligned beliefs of his) resulted from him following the evidence to the best of his ability, and therefore that he should not be condemned for it. (You can think this even if you don't think the beliefs in question are correct. My guess is that "the views are wrong and bad, but he arrived at them honestly, so you can't really blame him" is what less right-leaning rationalists like Kelsey Piper or Ozy Brennan think, for example, though they can speak for themselves obviously. Maybe Eliezer Yudkowsky thinks this too, actually; he's condemned rationalism's far-right wing in pretty strong terms in the past, though that doesn't necessarily mean he rejects every HBD belief, I guess.)
4) A faction of rationalists (and therefore of EAs, and also I guess *some* EAs who aren't rationalists are like this, though my guess is far fewer) are just, well, *bigoted*: they enjoy hearing and discussing things about why women/Black people are bad, because they like hating on women/Black people. As to WHY they are like that, I think (though I may be typical-minding here**) that an important part of the answer is that they feel rejected socially, and especially sexually, for their broadly "autistic" personality traits, and also believe that the general culture is "feminizing" against the things that people with that type of personality (mostly, though not entirely, men) tend to value or overvalue: truth-seeking, honesty even when it upsets people, and trying to be self-controlled and stoical. (I actually agree that certain parts of US liberal culture HAVE probably moved too far against those things.)
*My guess is that Matthew Adelstein/Bentham's Bulldog is probably a Scott-defender who thinks HBD is wrong: https://benthams.substack.com/p/losing-faith-in-contrarianism
**I have autism, and have recently acquired my first ever girlfriend aged 37, and a lot of the feelings in Scott's Radicalizing the Romanceless and Untitled posts are very, very familiar to me, even though my considered belief is that those posts are in fact quite unfair to feminists in many ways.
Disagree votes are going to be predictably confusing here, since I don't know whether people disagree with the main point that most people who defend Scott do think he is friendly towards HBD, or they just disagree with something else, like my very harsh words about (some) rationalists.
An anonymous individual in a private message group (which included several others, some effective altruists and some not) requested that this be submitted to the EA Forum, not wanting to submit the post themself. While that person could technically have submitted this post under an anonymous EA Forum user account, as a matter of personal policy they have other reasons they wouldn't want to submit the post regardless. As I was privy to that conversation, I volunteered to submit this post myself.
Other than submitting the link post to Dr. Thorstad's post, the only other way I contributed was to provide the summary above. I didn't check with David beforehand to verify that summary as accurate, though I know he's aware that these link posts are up, and he hasn't disputed the accuracy of my summary since.
I also didn't mean to tag Scott Alexander above in the link post as a call-out. Having talked to the author, David, beforehand, I learned from him that Scott was already aware that this post had been written and published. Scott wouldn't have been aware beforehand, though, that I was submitting this as a link post after it had been published on Dr. Thorstad's blog, Reflective Altruism. I tagged Scott so he could receive a notification and be aware of this post largely about him whenever he might next log on to the EA Forum (and LessWrong, where this link post was also cross-posted). As to why this post was downvoted, other than the obvious reasons, I suspect based on the link post itself or the summary I provided that:
I'd consider those all to be worse reasons to downvote this post, based on reactive conclusions about either optics or semantics. Especially as to optics, countering one Streisand effect with massive downvoting can be an over-correction that causes another Streisand effect. I'm only making this clarifying comment today, when I didn't bother to do so before, because I received a notification that the post has received multiple downvotes since yesterday. That may also be because others were reminded of this post when David made another, largely unrelated post on the EA Forum a few days ago, and this link post was the most recent one referring to any of David's criticisms of EA. Either way, with over 20 comments in the last several weeks, downvoting this post didn't obscure or bury it. While I doubt that was necessarily a significant motivation for most other EA Forum members who downvoted this post, it seems to me that anyone who downvoted mainly to ensure it didn't receive any attention was in error. If anyone has evidence to the contrary, I'd request you please present it, as I'd be happy to receive evidence that I may be wrong about that. What I'd consider better reasons to downvote this post include:
I sympathize with this comment, as it captures one of the points of contention I have with Dr. Thorstad's article. While I of course appreciate what the criticism is hinting at, I'd consider it better if that point had been prioritized as the main focus of the article rather than left as a subtext or tangent.
Dr. Thorstad's post multiple times describes as 'unsavoury' the views it discusses, as though they're like an overcooked pizza. The bad optics of EA being made politically inconvenient by association with pseudoscience, or even bigotry, are a significant concern. They're often underrated in EA. Yet PR concerns might as well be insignificant to me compared to the possibility of excessive credulity among some effective altruists towards popular pseudo-intellectuals leading them to embrace dehumanizing beliefs about whole classes of people based on junk science. The latter points to what could be a dire blind spot among a non-trivial portion of effective altruists, one that glaringly contradicts the principles of an effectiveness-based mindset or of altruism. If criticisms like these are less concerned with that possibility than with what some other, often poorly informed, leftists on the internet believe about EA, their worth will be much lower than it could or should be.
I've been mulling over submitting a response of my own to Dr. Thorstad's criticism of ACX, clarifying where I agree or disagree with its contents, or with how they were presented. I appreciate and respect what Dr. Thorstad has generally been trying to do with his criticisms of EA (though I consider some of his other series, beyond the one in question about human biodiversity, to be more important), though I also believe that, at least in this case, he could've done better. Given that I could summarize my constructive criticism(s) to Dr. Thorstad as a follow-up to my previous correspondence with him, I may do that so as not to take up more of his time, given how very busy he seems to be. I wouldn't want to disrupt or delay too much the overall thrust of his effort, including his focus on other series that addressing concerns about these controversies might derail or distract him from. Much of what I would want to say in a post of my own I have now presented in this comment. If anyone else would be interested in reading a fuller response from me to this post from last month that I linked, please let me know, as that would help inform my decision about how much more effort to invest in this dialogue.
It seems to me that you are doing more to associate HBD with EA by linking this here than Scott Alexander was allegedly doing by sending a private email.