I wrote an initial draft of this post much closer to the Manifest controversy earlier this summer. Although I got sidetracked and it took a while to finish, I still think this is a conversation worth having; perhaps it is even better to have it now, when calmer heads have had time to prevail.

 

--- 

 

I can’t in good faith deny being an effective altruist. I’ve worked at EA organizations, I believe many of the core tenets of the movement, and thinking about optimizing my impact by EA lights has guided every major career decision I’ve made since early 2021. And yet I am ashamed to identify myself as such in polite society. Someone at a party recently guessed that I was an EA after I said I was interested in animal welfare litigation or maybe AI governance; I laughed awkwardly, said yeah maybe you could see it that way, and changed the subject. I find it quite strange to be in a position of having to downplay my affiliation with a movement that aims to unselfishly do as much as possible to help others, regardless of where or when they may live. Are altruism and far-reaching compassion not virtues?

 

This shame comes in large part from a deeply troubling trend I’ve noticed in EA over the last few years: a drift toward acceptance or toleration of race science (“human biodiversity,” as some have tried to rebrand it) and of otherwise racist incidents. Some notable instances include:

  • The community’s refusal to distance itself from, or at the very least strongly condemn, the actions of Nick Bostrom after an old email came to light in which he used the n-word and said “I like that sentence and think that it is true” regarding the statement that “blacks are more stupid than whites,” followed by an evasive, defensive apology. 
  • FLI’s apparent sending of a letter of intent to a far-right Swedish foundation that has promoted Holocaust denial.[1]
  • And now, most recently, many EAs’ defense of Manifest hosting Richard Hanania, who pseudonymously wrote about his opposition to interracial marriage, cited neo-Nazis, and expressed views indicating that he didn’t think Black people could govern themselves.[2] 

I’m not here to quibble about each individual instance listed above (and most were extensively litigated on the forum at the time). Maybe you think one of the examples I gave has an innocent explanation, or even that all of them do. But if you find yourself thinking this way, you’re still left having to answer the deeply uncomfortable question of why EA has to keep explaining these incidents. I have been even more disturbed by the EA forum’s response.[3] Many have either leapt to outright defend those who seemed to espouse racist views or urged us to view their speech in the most favorable possible light, without consideration of the negative effects of their language. 

 

Other communities that I have been a part of (online or otherwise) have not had repeated race-science-related scandals. It is not a coincidence that we are having this conversation for the fourth or fifth time in the last few years. I spend a lot of this post defending my viewpoint, but I honestly think this is not a particularly hard or complicated problem; part of me is indignant that we even need to have this conversation. I view these conversations with deep frustration. What, exactly, do we have to gain by tolerating the musings of racist edgelords? We pride ourselves on having identified the most pressing problems in the world, problems that are neglected to the deep peril of those living and yet to be born, human and non-human alike. Racial differences in IQ are not one of those problems. The topic has nothing to do with solving those problems. Talking about racial differences in IQ is at best a costly distraction and at worst a pernicious farce that risks undermining so much of what we all hope to achieve. 

 

Why might one tolerate this? 

 

One explanation for tolerating race science is that you think it is true and ought to be promoted. I don’t have very much to say to people in this camp.

 

The stronger argument (and the one I suspect more people are willing to defend in the open) is that truth-seeking requires engaging with uncomfortable and impolite questions. Limits on free discussion can hinder free inquiry in arbitrary ways and broadly chill speech that is necessary to challenge the current social consensus. There is a lot to be said for this argument, and we have gained much from our willingness to engage with ideas that run counter to common sense. No one took AI risk seriously six years ago, and the vast majority of people still routinely fail to include animals in their moral circle. 

 

However, this principle should not be taken as a maxim to be honored regardless of the cost. There is a bar for when dedication to no-holds-barred “intellectual inquiry” (more on that in a moment) should take a backseat to more pressing goals. Thinking that the bar is very high (as I do) is different from thinking the bar does not exist at all. I have a hard time seeing how curtailing discussion in EA of racial differences will restrict our ability to seek truths that expand our moral circle or develop better models of what AI alignment could look like. 

 

Another compelling argument worth addressing is that the answer to bad speech is good speech; the truth will win out in a truth-seeking environment. The problem here is that some issues and lines of inquiry have higher-level distorting effects on the community. I was particularly struck by this passage from a recent review of Manifest, as I think it illustrates this dynamic in microcosm: 

[T]he organizers had selected for multiple quite controversial speakers and presenters, who in turn attracted a significant number of attendees who were primarily interested in these controversial topics, most prominent of which was eugenics. This human biodiversity (HBD) or “scientific racism” curious crowd engaged in a tiring game of carefully trying the waters with new people they interacted with, trying to gauge both how receptive their conversation partner is to racially incendiary topics and to which degree they are “one of us”.

Most people are turned off by race science. A rational person who knows they do not have time to thoroughly investigate every community or ideology they come across will rely on heuristics to make quick judgments about what they should take a deeper look into. “Do some people in this community seem to endorse or at least tolerate race science?” is one of the easier heuristics one can rely on to weed out unpalatable communities and ideologies. Had I known at the outset how many people in EA were seemingly tolerant or supportive of race science, I would have quickly applied that heuristic, dismissed the movement as a bunch of fringe weirdos, and moved on with my life. I am sure many people who would have been great contributors to the EA project made just that calculation, although we’ll obviously never hear from them. 

 

When a community is permeated by exclusionary ideas, it becomes particularly hostile to the targets of those ideas. People of color, and particularly Black people, are underrepresented in EA. While questions of representation are complicated, I have a very hard time believing that there isn’t some non-trivial causal relationship between some EAs openly flirting with race science and the absence of racial minorities in EA.[4] I’m not sure exactly how many people who otherwise would have been EAs have been turned off by some EAs endorsing race science or have left EA in frustration. That is by nature a difficult thing to track, but I think the qualitative case for assuming that it’s a non-trivial number is quite strong. I don’t intend to portray people of color as a monolith and recognize that there are likely some in the community who will disagree with my characterization (see e.g. this post defending Bostrom). As proponents of human biodiversity are ironically fond of noting, I am speaking in averages and talking about what I assume is likely to be true on balance. 

 

On a hard-nosed epistemic level, we lose out from a lack of diversity. Intellectual insularity is dangerous. In my experience, very smart people systematically overestimate their ability to steelman every possible objection to their position. To truly test your arguments, you need to expose them to people with radically different priors, viewpoints, and experiences. 

 

One might respond to this by pointing out that I am advocating for making EA more insular by excluding people who want to discuss HBD and related concepts; perhaps I’m choosing the people who get offended over the offenders. My response is: yes, I am. To the extent that these kinds of decisions unavoidably have a zero-sum element, I want to include people in EA who detest race science rather than those whose participation is contingent on discussing race science because they view it as a litmus test or hallmark of epistemic integrity, or who view intolerance of race science as a canary in a coal mine for epistemic degradation. 

 

Zooming out, this goes beyond pragmatic considerations and gets at the heart of the question of what kind of community we want to build. I want fellow travelers who are truth-seeking, yes, but also compassionate, wise, skeptical of unfettered attempts to maximize, and dedicated to making EA a welcoming space. There is another kind of community we could be, one that is committed to entertaining every controversial topic in EA settings, without respect to its relevance to EA and the damage it might cause. Tolerating flirtations with race science means choosing the latter. 

 

Why does it have to be this thing in particular? 

 

A culture of truth-seeking is one of EA’s largest comparative advantages; some may even say its greatest comparative advantage relative to other social movements. I could imagine advocates of a very strong truth-seeking culture saying something like: it’s true that HBD is unseemly and uncomfortable, but we have to be uncompromising in our interrogation of difficult questions, even those that run afoul of social taboos and common sense. After all, didn’t longtermism itself come from a willingness to engage with fringe and counterintuitive positions? 

 

Broadly, my response to this is that one of these things is not like the others. Race science is noxious, both in the scale of historical atrocity it has enabled and in the harm that tolerating it does to our community. And crucially, it is simply unclear to me why we need to talk about it at all. It doesn’t have anything to do with our core cause areas and certainly does not help us make any progress on the most pressing problems within those cause areas. We don’t spend much time on the forum discussing baseball either, and baseball does not carry the baggage of race science. 

 

Reinstituting the taboo on discussing race science doesn’t mean that we need to chill free inquiry into other difficult and taboo topics. Here are nine things off the top of my head on which we could do difficult truth-seeking instead.[5] All of these topics are more relevant to EA cause areas or to the lives of EAs, and taking any of them seriously sends a pretty costly signal about a willingness to be open-minded: 

  1. Insect welfare 
  2. Polyamory and more broadly whether traditional conceptions of monogamous relationships ought to be challenged 
  3. Digital minds 
  4. Suffering risks 
  5. Ending predation
  6. Whether it might actually be better for China to win the AI race (and more broadly whether the Chinese social and political system has advantages over Western liberal democracy)
  7. Pro-natalism  
  8. The meat-eater paradox 
  9. How EA should engage with major AI labs going forward

 

I have yet to hear a single person defend why race science needs to be the taboo issue of choice. I view the choice of race science as a litmus test for truth-seeking with deep suspicion; it is hard not to view the people making that choice as knowingly or unknowingly racially insensitive. 

 

Some of the topics I listed above could make people uncomfortable. For example, someone who has been a victim of political repression in China may very understandably have a strong reaction to a discussion of the possible benefits of the CCP system. An easy line to draw is whether the topic is completely irrelevant to the project of EA and has the possibility of demeaning or excluding people based on their immutable characteristics. Race science clearly falls into this bucket, as would conversations about whether trans people actually exist.[6] Rejecting this fairly narrow and workable exception to otherwise strong truth-seeking norms strikes me as quite dogmatic. We should be wary of absolute principles; at the end of the day, dogmatism is antithetical to truth-seeking, ironically so when the dogmatism is about truth-seeking itself. 

On the subject of good faith

 

The view that good speech will eventually win out over bad speech in a truth-seeking community also assumes that there is some sort of fair debating game going on in which all participants are conversing in good faith. One of the aspects of EA I have found refreshing is its willingness to give people the benefit of the doubt and to fully hear them out to understand the best version of their ideas. However, I have often seen this bleed into a view (especially among rationalists) that as a best practice we should more or less take what people say at face value. This is, to put it simply, not how the world works. In the real world, people lie about their beliefs and intentions; they obfuscate and strategically misdirect.[7] Online neo-Nazis are particularly adept at taking advantage of this. 

 

I want to be clear that I am not accusing any particular person of being a secret Nazi; that is an extremely serious accusation that requires a very high burden of proof. But it is well documented that online racists use strategic ambiguity, saying things that push the envelope and the Overton window only to back down and claim that what they said was a joke, misunderstood, or taken out of context. I don’t think that we should take Richard Hanania at his word that he is entirely reformed given his history of quoting neo-Nazis under a pseudonym; taking him straightforwardly at his word would be naive. We should still strive to read others in good faith and not allow debates about important ideas to collapse into ad hominem attacks; I am very specifically arguing that we should make an exception to this presumption of good faith for people advocating for racism or race-science-adjacent ideas.

 

Another intuitive response is that bad ideas and speech go underground if you try to censor or suppress them. Therefore, it is best to confront bad ideas directly, in the open. In general I think this is quite compelling, but I don’t think it’s right in the very specific circumstance I’m talking about. We’re already talking about these ideas out in the open and the costs in terms of damage done to the community are quite high. As I’ve said, it’s not clear to me why the EA movement has to be the forum for challenging race science anyway.[8] EA is already fairly professionalized and probably needs to become even more so to achieve its lofty ambitions. Movements and communities that are more professional and focused tend not to associate with race science or the people who promote it. 

 

Where do we go from here? 

 

I think it is important to be concrete about what you’re advocating for, especially on mushy topics like community attitudes and discursive norms. I would like to see something in the direction of the following reforms.

 

The EA Forum should ban any discussion of race science, “human biodiversity”, or racial differences in IQ. I would ask readers to remember that I am only talking about the EA Forum here. I’m not calling to use the power of the state to censor these ideas. If you really, absolutely need to talk about race science, there are other places you could go.[9] Just not here and not in our community. 

 

Major EA organizations and leaders should publicly disavow race science and human biodiversity. EA is quite top heavy. Major organizations like CEA and OP taking a strong and clear stance on an issue has a large effect on shaping community norms. Normally such organizations should be cognizant of the power they hold and restrained about using it, but I think this is an exception.[10] 

 

EA funders should avoid giving money to people or organizations with a history of associating with race science. This would easily include funding anything associated with Richard Hanania, as well as the other highly controversial speakers at Manifest. I am also of the view that absent a much more genuine and detailed reckoning with his racist past, EAs should have a much higher bar for supporting Nick Bostrom. My view overall is that it’s better to be safe than sorry when it comes to this sort of stuff. Given how bad it is for EA to be associated with race science, in terms of community composition and broader reputational damage, I think it’s worth missing out on a small number of potentially valuable projects or unfairly cutting a few people off if the net effect is a much healthier community.

 

EA should avoid any public association with people who have a history of making statements sympathetic to race science. This has similar issues to the last suggestion. Following this prescription almost certainly means dissociating ourselves from people who really can offer us something that would help us solve our most pressing problems. This tradeoff is starkest in the case of Bostrom, who has made undeniably massive contributions to longtermism, but I would expect to see it at a smaller scale, e.g. Richard Hanania could plausibly have something to contribute about prediction markets at Manifest. 

 

EAs should be empowered to speak out against race science and its proponents. I mean speak out in the sense of saying “I find these views extremely disconcerting and don’t think they have any place here” rather than trying to debate the merits of whether there are racial differences in IQ. I worry that the polarized conversations on the forum over the last couple of years have caused people holding more “average” views relative to the community to keep quiet. I am writing this post with the sincere hope that I am in fact speaking for a sizable number of people who have also found recent conversations on the forum upsetting and out of step with the values they hold and the movement they thought they joined. If you find yourself in this group, know that you are not alone. 

 

---

 

If we really believe that we are working on the most pressing problems in the world, we need to be serious and hold ourselves to a higher standard. No one else who is seriously working on problems at the highest levels of importance openly tolerates any association with race science. We shouldn’t either. 

 

Imagine yourself in the future where we have failed (assume for the sake of argument that you still exist in this future). Take a hard look in the mirror and ask yourself: do you really believe that we failed because we weren’t tolerant enough to people who wanted to debate race science? Or did we fail because we stayed too insular, too online, too unwilling to professionalize and accept the burdens of integrity imposed by the monumental task we set ourselves to? 

 

You say that you want to save the world? Then act like it. 

  1. ^

    To be fully transparent about my biases here: As a Jew, I found this incident deeply disturbing and emotionally difficult. I have a hard time taking people at their word when they say that their flirtations with Holocaust denial were just honest mistakes, but that’s just me.

  2. ^

    The author of the most widely circulated criticism of Manifest wrote that at least 8 people attending the conference as “special guests” could plausibly be placed under the eugenics/HBD label.

  3. ^

    I’m not sure how representative the forum is of average attitudes in the community. Most IRL EAs I’ve talked to about these controversies have been as appalled as I am. On the other hand, comments and posts defending the various bad actors in these controversies have received hundreds of upvotes, so I don’t think the forum can be dismissed as entirely unrepresentative of what at least a non-trivial portion of EAs think. 

  4. ^

    I thought Garrison’s comment said this particularly well and succinctly.  

  5. ^

    This isn’t a list of the topics I think we most urgently need to seek the truth about. I was trying to think of things that could both be genuinely controversial or taboo by either EA or mainstream lights and could still be valuable to discuss in spite of this. I haven’t put any effort into actually thinking about the relative merits of these topics, and I’m really not sure what the answer to that question would be. 

  6. ^

    I haven’t personally seen this come up but I'm including it because it seems like there were at least some anti-trans sentiments expressed at Manifest and the adjoining events. I think I’m in a pretty poor position to assess the overall levels of transphobia in the EA community, but it is worth noting that EA seems to have a much higher proportion of trans people than the general population. 

  7. ^

    This has been seared into my brain after the SBF experience of 2022. 

  8. ^

    Lots of localized forums make for poor places to challenge certain ideas, e.g. your workplace probably isn’t the place to discuss HBD either.

  9. ^

    See e.g., 4chan and its successors.

  10. ^

    This disavowal can take the form of a quick take or a forum comment. 

---

Comments

Forum users can, should, and do downvote posts that are bad, distracting, etc. (The trolls should soon get the message and leave.) I'm very opposed to top-down hierarchical interventions of the sort you describe. I don't particularly think that EA spaces should host "unequivocal condemnations" of things that (as you rightly note) have nothing to do with EA, so I'd also encourage people to downvote those. It's groupthinky and cringe, and risks being massively off-putting to the kinds of independent thinkers who value epistemic integrity and have little tolerance for groupthink or witch-hunts, however meritorious the message (or wicked the witches).

"No one else who is seriously working on problems at the highest levels of importance openly tolerates any association with race science."

You should look into how universities work! Academic freedom means that individual professors are free to condemn whatever views they find obnoxious. They're also free to invite speakers that their colleagues find obnoxious, and sometimes they do (even, e.g., at Princeton). Their colleagues -- many of whom work on important problems! -- must then tolerate this. Note that many of the best universities follow the Chicago Principles & Kalven Report guidance on institutional neutrality, according to which the university leadership should express no official opinion on matters that aren't directly relevant to the running of the university. The university is a community of scholars, of diverse opinions, not a political party with a shared orthodoxy.

I would much prefer for the EA community to model itself on universities than on political parties.

Again, that doesn't mean that anything goes. We already have the solution to bad contributions. It's called 'downvoting'.

Effective altruism is meant to be a social movement, not a university debate. And unlike in a university setting, there are zero requirements for someone to be accurate or to have relevant expertise before posting here. 

It is common here for people with little expertise in a topic to do an arbitrary amount of online research and throw out their resulting opinions. This results in something like the post where someone cited "Mankind Quarterly" in their human genetics posts, without mentioning that it was a publication with a history of white supremacy, fraud and incompetence. That issue was caught, eventually, but I guarantee you the forum is riddled with similar problems that are not caught. 

 For a regular topic, these loose standards may be acceptable, as they make it easier to throw out ideas and collaborate, and the air of loose discussion makes things fun. Someone may chime in with corrections or they may not; ultimately it is not a big deal. 

But when it comes to race science, the consequences of this sort of loose, non-quality-controlled discussion are worse. As the OP mentioned, you drive away minorities and make the forum an unpleasant place to be. 

But it also might convince more people to be racist. At least one white supremacist has traced their radicalisation pipeline through LessWrong and Slate Star Codex. That was just one person out of forty, so perhaps it was a fluke, or perhaps it wasn't. Perhaps there are a few that didn't go all the way to posting on white supremacist forums, but became just a little bit more dismissive of black people on job applications. I don't know how high the cost is, but it exists.  

The way I see it, the forum should either hold back any race science related post and ensure that every claim made within it is thoroughly fact checked by relevant independent experts, or it should just ban the things. I prefer the latter, so we don't waste anybody's time. 

In fairness, expertise is not required in all university settings. Student groups invite non-expert political figures to speak, famous politicians give speeches at graduation ceremonies, etc. I am generally against universities banning student groups from having racist/offensive speakers, although I might allow exceptions in extreme cases.

 Though I am nonetheless inclined to agree that the distinction is important: universities have free, objective, rational debate as a central purpose, whereas EA as a movement has the central purpose of carrying out a particular (already mildly controversial) ethical program, and, frankly, is in more danger of "be safe for witches, become 90% witch" than universities are. That distinction means EA should be less internally tolerant of speech expressing bad ideas. 

You seem to be imagining the choice as being between "host bad discussions" or "do something about it via centralized hierarchical control". But I'm trying to emphasize a third option: "do something about it via decentralized mechanisms." (Posts with negative karma are basically invisible, after all.)

The downside of centralized responses is that it creates a precedent for people to use social/political pressure to try to impose their opinions on the whole community. Decentralization protects against that. (I don't so strongly object to the mods just deciding, on their own, to ban certain topics. What especially troubles me is social/political pressure aimed towards this end.)

As I see it, the crucial question to ask is which mechanism is more reliable: top-down control in response to social/political pressure from vocal advocates, or decentralized community judgment via "secret ballot" karma voting. The primary difference between the two is that the former is more "politicized" and subject to social desirability bias. (A secondary effect of politicization is to encourage factions to fight over control of this new power.) So I think the decentralized approach is much better.

One important difference between the Forum and most other fora is the strong vote -- a minority faction can use their strong votes to keep things in circulation unless the majority acts the same way. The Forum is also small enough for brigading to be a major concern.

I think encouraging people who take that view to simply strong-downvote race science material off the Forum poses its own set of epistemic problems for the Forum. And encouraging them to "merely" downvote may not be effective if the other side is employing their strong votes.

Maybe this proposed solution would be more viable if there were special voting rules for current topics at mod discretion -- e.g., 500 karma required to vote, no strong votes allowed? I'm not sure, though. If all the race science folks vote and everyone else mostly stays away, the result is a false sense of community views.

Still, I support a ban -- race science is off topic, there are other places people can go if they want to talk about it, and these discussions cause significant harmful effects. If discussions of why the New England Patriots and New York Yankees were a scourge on humanity were causing these problems for the Forum, I'd support a ban even though I believe those teams are. :)

Karma is not a straightforward signal of the value of contributions

"We already have the solution to bad contributions. It's called 'downvoting'."

This statement, and the idea of karma as the decentralized solution to the problems the OP describes, feels overconfident to me. To reference this comment, I would also push back on the claim that karma is not subject to social desirability bias (e.g., someone sees that a post already has relatively high karma, so they’re more inclined to upvote it knowing that others on the Forum or in the EA community have, even if they, let's say, haven't read the whole post).

I would argue that karma isn’t a straightforward or infallible signal of “bad” or “good” contributions. As those working on the Forum have discussed in the past, karma can overrate certain topics. It can signal interest from a large fraction of the community, or reward “lowest-common-denominator” posts, rather than the value or quality of a contribution. As a current Forum staff member put it, “the karma system is designed to show people posts which the Forum community judges as valuable for Forum readers.”

I would note, though, that karma also does not straightforwardly represent the opinions of the Forum community as a whole regarding what’s valuable. The recent data from the 2023 EA Forum user survey shows that a raw estimate of 46.5% of those surveyed and a weighted estimate of 70.9% of those surveyed upvoted or downvoted a post or comment. Of 13.7k distinct users in a year, 4.4k of those are distinct commenters, and only 171 are distinct post authors. Engagement across users is “quite unequal,” and a small number of users create an outsized amount of comments, posts, and karma. Weighted upvotes and downvotes also mean that certain users can have more influence on karma than others. 

I appreciate the karma system and its virtues (of which there are several!), and maybe your argument is that more people should vote and contribute to the karma system. I just wanted to point out how karma seems to currently function and the ways it might not directly correlate with value, which brings me to my next point…
 

Karma seems unlikely to address the concerns the OP describes

Without making a claim for or against the OP’s proposed solutions, I’m unsurprised by their proposal for a centralized approach. One argument against relying on a mechanism like karma, particularly for discussions of race on the Forum, is that it hasn't been a solution for upholding the values or conditions I think the OP is referencing and advocating for (like not losing the potential involvement of people who are alienated by race science, engaging in broader intellectual diversity, and balancing the implications of truth-seeking with other values). 

To give an example: I heard from six separate people involved in the EA community that they felt alienated by the discussions around Manifest on the Forum and chose to not engage or participate (and for a few people, that this was close to a last straw for them wanting to remain involved in EA at all). The costs and personal toll for them to engage felt too high, so they didn't add their votes or voices to the discussion. I've heard of this dynamic happening for different race-related discussions on the Forum in the past few years, and I suspect it leads to some perspectives being more represented on the Forum than others (even if they might be more balanced in the EA community or movement as a whole). In these situations, the high karma of some topically related comments or posts in fact seemed to further some of the problems OP describes. 

I respect and agree with wanting to maintain a community that values epistemic integrity. Maybe you think that the costs incurred by race science discussions on the Forum are not high enough to justify banning the topic, which is an argument to be made. I would be curious what other ideas or proposals you have for addressing some of the dynamics the OP describes, or your thoughts on the tradeoffs between allowing/encouraging discussions of race science in EA-funded spaces and the effects that can have on the community or the movement. 

Academic freedom is not and has never been meant to protect professors on topics that have no relevance to their discipline: "Teachers are entitled to freedom in the classroom in discussing their subject, but they should be careful not to introduce into their teaching controversial matter which has no relation to their subject. Limitations of academic freedom because of religious or other aims of the institution should be clearly stated in writing at the time of the appointment."

If, say, a philosophy professor wants to express opinions on infanticide, that is covered under academic freedom. If they want to encourage students to drink bleach, saying it is good for their health, that is not covered.

We can and should have a strong standard of academic freedom for relevant, on-topic contributions. But race science is off topic and irrelevant to EA. It's closer to spam. Should the forum have no spam filter and rely on community members to downvote posts as the method of spam control?

You elsewhere link to this post as a "clear example of a post that would be banned under the rules". That post includes the following argument:

People act like genetic engineering would be some sort of horrifying mad science project to create freakish mutant supermen who can shoot acid out of their eyes. But I would be pretty happy if it could just make everyone do as well as Ashkenazi Jews. The Ashkenazim I know are mostly well-off, well-educated, and live decent lives. If genetic engineering could give those advantages to everyone, it would easily qualify as the most important piece of social progress in history, even before we started giving people the ability to shoot acid out of their eyes.

The post concludes, "EA's existing taboos are preventing it from answering questions like these, and as new taboos are accepted, the effectiveness of the movement will continue to wain."

You may well judge this to be wrong, as a substantive matter. But I don't understand how anyone could seriously claim that this is "off topic and irrelevant to EA." (The effectiveness of the movement is obviously a matter of relevant concern for EA.) People's tendency to dishonestly smuggle substantive judgments under putatively procedural grounds is precisely why I'm so suspicious of such calls for censorship.

As an Ashkenazi Jew myself, I feel that saying "we'd like to make everyone like Ashkenazi Jews" is a mirror image of Nazism that very clearly should not appear on the forum.

I'm not making any claims either way about that. I'm just pointing out (contra Matthew) that it is clearly not "irrelevant spam". Your objections are substantive, not procedural. Folks who want to censor views they find offensive should be honest about what they're doing, not pretend that they're just filtering out viagra ads.

Topical relevance is independent of the position one takes on a topic, so the rule you're suggesting also implies that condemnations of race science are spam and should be deleted. (I think I'd be fine with a consistently applied rule of that form. But it's clearly not the OP's position.)

Thanks for sharing your thoughts.

Some thoughts on your suggestions of where to go from here:

  • I'm somewhat sympathetic to banning discussion on the forum. I think this topic takes up way more attention than it deserves, and it generates that attention because of its edgy and controversial status.
    • However:
      • I'd at least have some worry that this would be taken by people as confirming evidence for unfortunate beliefs they already held --
        • Some people might think that this was evidence in favour of scientific racism (why else would they try to close down discussion?)
        • Some people might think this was evidence of widespread belief in scientific racism in EA (what are they trying to hide?)
      • This would be in tension with another of your suggestions, "EAs should be empowered to speak out against race science and its proponents"
        • I think that people are currently welcome to speak out on this, including on the forum, and this often attracts a lot of upvotes
        • I can't see a good way to draw boundaries which would continue to allow this, while also banning the discussion you don't want to see on the forum 
          • Note that I think there is very little actual discussion of race science on the forum; most of the discussion is about social responses
            • (The most substantive thing I remember reading is someone saying basically "FYI, I looked into this and the claims of scientific racism seem to be false", although of course there may be things I missed)
          • It seems kind of unhealthy to facilitate people saying "X makes me uncomfortable" while in the same space banning people from saying "X doesn't make me uncomfortable" (though I think it's fine to ban direct discussion of X)
    • Overall, this makes me feel worse about banning than not-banning, but I could imagine being persuaded otherwise on that point, and am curious about your takes on the downsides
  • On "Major EA organizations and leaders should publicly disavow race science and human biodiversity" --
    • I'm into public condemnation of problematic actions, e.g. the CEA statement on Bostrom's old emails:
      • "Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community."
    • I think it would be inappropriate for people to present as epistemic authorities on topics where they aren't
  • On "EA should avoid any public association with people who have a history of making statements sympathetic to race science" --
    • I don't think EA does have any direct public or private association with folks like Hanania
      • This is IMO absolutely the right choice
      • It's possible that that lack of association should be more publicised
        • It's kind of funny to stress it? It's not like "we are cutting ties", because I don't think there ever were any ties
        • But maybe it would be helpful to do anyway
    • However, EA is publicly associated with orgs (or at least with Manifest) which are publicly associated with Hanania
      • But here the grounds for rejecting public association seem much shakier
        • Manifest may be making mistakes, but they seem to be grounded in a desire for intellectual freedom
      • I can see an argument for making public statements making clear that Manifest's actions are not the ones EA orgs would choose
        • Generically these feel a bit bad-neighbourly (like it's impolite to criticise other people's choices, rather than just get on and do your own thing well), but if there's a risk that people are perceiving EA as making the choices that Manifest does, perhaps it would be worthwhile
    • I think the idea that it would be appropriate to disavow association with Bostrom is wrong
      • I think it's right and good to condemn his old emails
      • I think his apology showed underwhelming insight into the harms of the original email, and it's fair to criticise that (although I think it's important that he repudiated the original email, and I believe him to be sincere in that)
      • The University of Oxford investigated here, and they concluded that he was not a racist and did not hold racist views (source: quotes from the outcome of that investigation, now included at the bottom of Bostrom's apology)
      • Although the nearly-three-decade-old emails were obviously problematic, I worked with Bostrom for several years and never observed anything even resembling edginess about these topics; nor have I heard reports of that from anyone who did
        • I also think he is clearly a person of high integrity, and generally sincere (even at times when it might be politically convenient for him to be less sincere), and while I'm sympathetic to your concerns about good faith in Hanania's case, I think it would be quite unfair to tar Bostrom with the same brush

The EA Forum should ban any discussion of race science, “human biodiversity”, or racial differences in IQ

Can you link to concrete examples of things on the EA Forum that would be deleted under the proposed new EA Forum rules?

I tried searching for "human biodiversity", but few of the results seem like the kind of post that I would guess you want deleted.

The things I found were mostly about the Manifest or Bostrom controversies. I am guessing you do not want to delete these. Or this post. In the wake of the Bostrom controversy there was also this heavily downvoted post that complained about "wokism". I am guessing this is the type of post that you want to see deleted. There is also this upvoted comment that argues against "human biodiversity", which, if I interpret your proposed rule change correctly, should also be deleted. (A rule that says "you are allowed to argue against HBD, but not for it" would be naive IMO, and I do not get the impression that you would endorse such a rule).

Overall, I do not remember seeing people discussing "human biodiversity" on the object level. It indeed seems off-topic for EA. And explicitly searching for it does not bring up a lot, and only in relation to EA controversies.

My hope is that in practice it would be pretty rare for this rule to be invoked, although I think it does depend a bit on how the final rule is worded. The comment you linked arguing against human biodiversity is a tough edge case. On the one hand, I am a lot more concerned about people arguing for human biodiversity than against it, but on the other hand it doesn't seem like the end of the world if a prohibition on discussing the topic also took down comments like that. 

IMO the forum rule I proposed is the least important of the reforms/policies I suggested. The value comes more from signalling an opposition to racism/race science than it does from actually taking down a couple of comments here and there. Given how controversial the rule is, it would clearly be a pretty costly signal. That seems good by the lights of "making it more likely people of color engage with the EA movement." 

A clear example of a post that would be banned under the rules: why-ea-will-be-anti-woke-or-die.

I'll be honest, I'm not a fan of the argumentation style of this post.

It makes some good points, but for my liking too much of it uses social pressure to circumvent rational discussion of what action the community should take. It also encourages EA to focus on maintaining its image more than I would see as healthy (optics matter, but they shouldn't become EA's main concern).

FWIW, I'm not a fan of race science posts here either. I agree with you that it's hardly the most relevant topic and it creates a big distraction. However, if the community decided to ban such topics from the forum I would not want it to do so on the basis of many of the things you've said here.

Additionally, asking a bunch of orgs to issue a statement would cause a lot of unnecessary politicization and wouldn't really help our reputation. Talking about an issue associates you with that issue, and our critics would continue to use these issues to bludgeon EA because they have no reason not to.

My current take is:

1) We can't really do anything to prevent our opponents from using this criticism against us, and "wokeness" is on the decline, so EA will be fine reputationally; we just have to wait this out.
2) There's a tension between focusing on optics and focusing on being excellent. I used to think that there wasn't so much of a tension, but once a community starts focusing too much on optics, it seems to be quite easy for it to continue sliding in that direction, making this more of a tension than one might expect. I believe that the community should focus more on being excellent and that will draw the kinds of people we're looking for to us, even if some fraction of them find certain aspects frustrating.
3) Regarding the reputational impact on particular cause areas, I think we should try really hard to further establish these cause areas as their own entities outside of EA. There are many people who might be interested in these causes, but not interested in EA and it would also provide some degree of reputational separation.
4) I believe that there are advantages to specialization in that groups focused on individual cause areas can focus more on 'getting the thing done', whilst EA can focus more on epistemics and thinking things through from first principles. Insofar as EA makes providing epistemic support one of its goals, it's important for us to try to avoid this kind of internal politicization.

Re: the first footnote: Max Tegmark has a Jewish father according to Wikipedia. I think that makes it genuinely very unlikely that he believes Holocaust denial specifically is OK. That doesn't necessarily mean that he is not racist in any way or that the grant to the Nazi newspaper was just an innocent mistake. But I think we can be fairly sure he is not literally a secret Nazi. Probably what he is guilty of is trusting his right-wing brother, who had written for the fascist paper, too much, and being too quick (initially) to believe that the Nazis were only "right-wing populists".

I'm an Israeli Jew and was initially very upset about the incident. I don't remember the details, but I recall that in the end I was much less sure that there was anything left to be upset about. It took time but Tegmark did answer many questions posed about this.

The net upvotes on this were, if I recall correctly, significantly higher than they are at the time of this comment. The downward trend in voting on this post raises some concerns about possible brigading from an outside (or semi-outside) source.

I do remember this post having around 20 net upvotes about a day ago.

But some changes over time can also just be noise (if some people have strong votes). Timezone correlations could also be an explanation (it would not surprise me if the US is more free-speech-oriented than Europe). Or there could be changes in the way the article gets found by different people. Or people change their vote after they have changed their mind about the article. Or the article gets posted in a Discord channel, without any intentions or instructions of brigading. Of course it's still possible that the vote changes have some sketchy origin, and I am not against the forum moderators investigating these patterns.

This post is on a controversial topic, so lots of votes in both directions are to be expected.

Noise is certainly a viable alternative explanation, which is why I limited myself to "raises some concerns about possible brigading."[1]

I don't think a mod investigation would be a good use of time here. The mods have pointed out a past influx of new accounts triggered by this topic being hot, along with possible non-representative voting patterns on this topic before. In contrast to events at the time of the Bostrom affair, it would be much harder to rule in / rule out irregular voting with confidence where the activity volume is much smaller. 

However, since people do cite to Forum upvotes/downvotes as evidence of broader EA sentiment (whether justified or not), I think it's fair to point vote distortion out as a possibility. 

  1. ^

    I note that there is a range of opinions about what count as brigading. There are, for instance, places on Reddit where voting in the original subreddit if you learn about crossposted content from a different subreddit is counted as brigading. That is not my own view (although I understand why Reddit communities have operationalized it that way for administrability reasons). However, I do think it is possible for brigading to occur without specific intent or instruction. In particular, people whose involvement with the Forum is limited to threads on their pet issue and who are otherwise uninvolved with EA should not be voting in my opinion.

The backlink-checker doesn't show anything of the sort, but I think it doesn't work for Discord or big social media websites like 𝕏.

That's a useful tool; thanks for sharing. That being said, I think the absence of evidence from that source is fairly weak evidence against a brigading hypothesis if Discord and big social media sites are excluded from its scope. Those are some of the primary means by which I would predict brigading to occur (conditional on it actually occurring). Based on past behavior, I believe the base rate of brigading on race-science posts is fairly significant. So this evidence does not move the needle very much for me.

To clarify my reason for concern: I think there is good reason to suspect brigading when there is a "late" voting bump that moves considerably in one direction or the other. We saw that with one of the race-science posts for which there was evidence of an external link driving the traffic. Unfortunately, the Wayback Machine's captures are all on August 1, and so I have only my (not reliable) memory of where the net karma was during this post's history.

Without better data, the best I think we can do in terms of outside influence is "maybe." For instance, I'd update more on knowing the timing of votes, the vote patterns for medium+ karma/engagement accounts vs. new or intermittent ones, whether there were votes from any account that tends to show up and vote when a small set of issues is discussed, etc. In light of the maybe, I feel there's value for flagging the possibility for the reader who may not be aware of the broader context.

IMO this post violates its own proposed rule of avoiding discussion of race science on the EA forum.

Similarly, there is some tension between the ideas “EA should publicly disavow race science” and “EA should never discuss race science”. Normally taking stances invites discussion.

Race science is well-established pseudoscience, recognised as such by the scientific community. This is why I roll my eyes when EAs think of themselves as elite or smarter than average. There are, fortunately or unfortunately, anti-intellectual currents within this movement, and race science isn't the only pseudoscientific inclination, in my opinion and, I have learned, in the opinion of a few others in this movement.

Unlike you, however, I am actually grateful for EA's anti-science streak being so nakedly visible, because it is valuable information for outsiders and insiders to know. Knowledge of EAs embracing race science should inform the public of how seriously to take this movement, and can only help separate the good parts of EA from the bad.

We shouldn't mask up the shortcomings of EA to make it look like a better movement than it actually is.
