This is a post by David Thorstad, a philosophy professor who maintains Reflective Altruism, a blog criticizing various tenets of effective altruism, as part of a series on human biodiversity (HBD), a modern iteration of so-called race science. HBD, of course, isn't typical fare for EA or any of its championed causes. Yet it has, to much controversy over the years, been recognized as a subject of interest among prominent thinkers associated with the effective altruism or rationality communities, or with other writers affiliated with them. This latest post in Thorstad's series provides a critical overview of @Scott Alexander's history of engagement with that body of ideas, both on his current blog, Astral Codex Ten (ACX), and on his previous blog, Slate Star Codex (SSC).
An anonymous individual in a private message group, which included several others (some effective altruists, some not), requested that this be submitted to the EA Forum, not wanting to submit the post themself. While that person could technically have submitted it under an anonymous EA Forum user account, as a matter of personal policy they had other reasons not to submit it regardless. As I was privy to that conversation, I volunteered to submit the post myself.
Beyond submitting the link to Dr. Thorstad's post, my only contribution was the summary above. I didn't check with David beforehand to verify that summary as accurate, though I know he's aware these link posts are up, and he hasn't disputed the summary's accuracy since.
I also didn't mean the tag of Scott Alexander above in the link post as a call-out. Having talked to the author, David, beforehand, I learned from him that Scott was already aware this post had been written and published. Scott wouldn't have known beforehand, though, that I was submitting it as a link post after it was published on Dr. Thorstad's blog, Reflective Altruism. I tagged Scott so he'd receive a notification about this post, which is largely about him, whenever he might next log on to the EA Forum (and likewise on LessWrong, where this link post was also cross-posted). As to why this post was downvoted, other than the obvious reasons, I suspect based on the link post itself or the summary I provided that:
I'd consider those all to be worse reasons to downvote this post, based on reactive conclusions about either optics or semantics. Especially as to optics, countering one Streisand effect with massive downvoting can be an over-correction that causes another Streisand effect. I'm only making this clarifying comment today, when I didn't bother to before, because I was reminded of the post when I received a notification that it has received multiple downvotes since yesterday. That may also be because others were reminded of it when David made another, largely unrelated post on the EA Forum a few days ago, and this link post was the most recent one referring to any of David's criticisms of EA. Either way, with over 20 comments in the last several weeks, downvoting this post didn't obscure or bury it. While I doubt that was a significant motivation for most EA Forum members who downvoted it, it seems to me that anyone who downvoted mainly to ensure it didn't receive attention was in error. If anyone has evidence to the contrary, please present it, as I'd be happy to learn I may be wrong about that. What I'd consider better reasons to downvote this post include:
I sympathize with this comment, as it raises one of the points of contention I have with Dr. Thorstad's article. While I of course agree with what the criticism is hinting at, I'd consider the article better if that point had been prioritized as its main focus, not left as a subtext or tangent.
Dr. Thorstad's post multiple times describes the views expressed in it as 'unsavoury', as though they're like an overcooked pizza. Bad optics for EA, i.e., the political inconvenience of association with pseudoscience or even bigotry, are a significant concern, and often an underrated one in EA. Yet PR concerns might as well be insignificant to me compared to the possibility that excessive credulity among some effective altruists towards popular pseudo-intellectuals is leading them to embrace dehumanizing beliefs about whole classes of people based on junk science. The latter reveals what could be a dire blind spot among a non-trivial portion of effective altruists, one that glaringly contradicts the principles of both an effectiveness-based mindset and altruism. If that isn't as much of a concern for criticisms like these as worries about what some often poorly informed leftists on the internet believe about EA, the worth of these criticisms will be much lower than it could or should be.
I've been mulling over submitting a response of my own to Dr. Thorstad's criticism of ACX, clarifying where I agree or disagree with its contents, or with how they were presented. I appreciate and respect what Dr. Thorstad has generally been trying to do with his criticisms of EA (though I consider some of his other series, beyond the one in question about human biodiversity, to be more important), but I also believe that, at least in this case, he could've done better. Given that I could summarize my constructive criticism to Dr. Thorstad as a follow-up to my previous correspondence with him, I may do that so as not to take up more of his time, given how very busy he seems to be. I wouldn't want to disrupt or delay too much the overall thrust of his effort, including his focus on other series that addressing concerns about these controversies might derail or distract him from. Much of what I would want to say in a post of my own I've now presented in this comment. If anyone else would be interested in reading a fuller response from me to the post I linked last month, please let me know, as that would help inform my decision of how much more effort to invest in this dialogue.