Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.

— The Centre for Effective Altruism

Moderator Comment
Pinned by JP Addison

A short note as a moderator:[1] People (understandably) have strong feelings about discussions that focus on race, and many of us found the content that the post is referencing difficult to read. This means that it's both harder to keep to Forum norms when responding to this, and (I think) especially important.

Please keep this in mind if you decide to engage in a discussion about this, and try to remember that most people on the Forum are here for collaborative discussions about doing good.

If you have any specific concerns, you can also always reach out to the moderation team at forum-moderation@effectivealtruism.org.

  1. ^

    Mostly copying this comment from one I made on another post.

I feel really quite bad about this post. Despite being only a single paragraph, it succeeds at confidently making a wrong claim, pretending to speak on behalf of both an organization and community that it is not accurately representing, communicating ambiguously (probably intentionally, in order to avoid being pinned down to any specific position), and for some reason omitting crucial context.

Contrary to the OP, it is easy to come up with examples where, within the Effective Altruism framework, two people do not count equally. Indeed, most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus.

Saying "all people count equally" is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it indeed doesn't really ho... (read more)

I think I do see "all people count equally" as a foundational EA belief. This might be partly because I understand "count" differently to you, partly because I have actually-different beliefs (and assumed that these beliefs were "core" to EA, rather than idiosyncratic to me). 
What I understand by "people count equally" is something like "1 person's wellbeing is not more important than another's". 

E.g. a British nationalist might not think that all people count equally, because they think their compatriots' wellbeing is more important than that of people in other countries. They would take a small improvement in wellbeing for Brits over a large improvement in wellbeing for non-Brits. An EA would be impartial between improvements in wellbeing for British people vs non-British people.

"most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in... (read more)

Thanks for writing out a reaction very similar to my own. As I wrote in a comment on a different topic, "it seems to me that one of the core values of effective altruism is that of impartiality― giving equal moral weight to people who are distant from me in space and/or time."

I agree that "all people count equally" is an imprecise way to express that value (and I would probably choose to frame it in the lens of "value" rather than "belief"), but I read this as an imprecise expression of a common value in the movement rather than a deep philosophical commitment to valuing all minds exactly the same.

But there is a huge difference in this case between something being a common belief and a philosophical commitment, and there is also a huge difference between saying that space/time does not matter and that all people count equally.

I agree that most EAs believe that people roughly count equally, but if someone was to argue against that, I would in no way think they are violating any core tenets of the EA community. And that makes the sentence in this PR statement fall flat, since I don't think we can give any reassurance that empirical details will not change our mind on this point.

And yeah, I think time/space not mattering is a much stronger core belief, but as far as I can tell that doesn't seem to have anything to do with the concerns this statement is trying to preempt. I don't think racism and similar stuff is usually motivated by people being far away in time and space (and indeed, my guess is something closer to the opposite is true, where racist individuals are more likely to feel hate towards the immigrants in their country, and more sympathy for people in third world countries).

One of the defining characteristics of EA is rejecting certain specific reasons for counting people unequally; in particular, under EA ideology, helping someone in a distant country is just as good as helping a nearby person by the same amount. Combined with the empirical fact that a dollar has a much larger effect when spent on carefully chosen interventions in poorer countries, this leads to EA emphasizing poverty-reduction programs in poor, mainly African countries, in contrast to non-EA philanthropy, which tends to favor donations local to wherever the donor is.

This is narrower than the broad philosophical commitment Habryka is talking about, though. Taken as a broad philosophical commitment, "all people count equally" would force some strange conclusions when translated into a QALY framework, and when applied to AI, and also would imply that you shouldn't favor people close to you over people in distant poor countries at all, even if the QALYs-per-dollar were similar. I think most EAs are in a position where they're willing to pay $X/QALY to extend the lives of distant strangers, $5X/QALY to extend the lives of acquaintances, and $100X/QALY to extend the lives of close friends and family. And I think this is philosophically coherent and consistent with being an effective altruist.

In all of these situations, I think we can still say people "count" equally.

I don't think this goes through. Let's just talk about the hypothetical of humanity's evolutionary ancestors still being around.

Unless you assign equal moral weight to an ape as you do to a human, this means that you will almost certainly assign lower moral weight to humans or nearby species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn't even any clean line to draw between humans and our evolutionary ancestors.

Similarly, I don't see how you can be confident that your moral concern in the present day is independent of exactly that genetic variation in the population. That genetic variation is exactly the same variation that over time made you care more about humans than about other animals, amplified by many rounds of selection, and as such, it would be very surprising if there were absolutely no difference in moral patienthood among the present human population.

Again, I expect that variance to be quite small, since genetic variance in the human population is much smaller than the variance between different species, and also for that variance to really not align very w... (read more)

For information, CEA’s OP links to an explanation of impartiality:

Impartial altruism: We believe that all people count equally. Of course it's reasonable to have special concern for one's own family, friends and life. But, when trying to do as much good as possible, we aim to give everyone's interests equal weight, no matter where or when they live. This means focusing on the groups who are most neglected, which usually means focusing on those who don’t have as much power to protect their own interests.

That paragraph does feel kind of confused to me, though it's hard to be precise in lists of principles like this. As jimrandomh says above, it is widely accepted in EA that time and location do not matter morally (well, more so for location; I think it's actually pretty common for EAs to think that far-future lives are worth less than present lives, though I don't agree with this reasoning). But that clearly does not imply that all people count equally, given that there are many possible reasons for differing moral weights.

Thanks for writing this up Amber — this is the sense that we intended in our statement and in the intro essay that it refers to (though I didn’t write the intro essay). We have edited the intro essay to make clearer that this is what we mean, and also to make clear that these principles are more like “core hypotheses, but subject to revision” than “set in stone”.

Sorry for the slow response.

I wanted to clarify and apologise for some things here (not all of these are criticisms you’ve specifically made, but this is the best place I can think of to respond to various criticisms that have been made):

  1. This statement was drafted and originally intended to be a short quote that we could send to journalists if asked for comment. On reflection, I think that posting something written for that purpose on the Forum was the wrong way to communicate with the community and a mistake. I am glad that we posted something, because I think that it’s important for community members to hear that CEA cares about inclusion, and (along with legitimate criticism like yours) I’ve heard from many community members who are glad we said something. But I wish that I had said something on the Forum with more precision and nuance, and will try to be better at this in future.
  2. The first sentence was not meant to imply that we think that Bostrom disagrees with this view, but we can see why people would draw this implication. It’s included because we thought lots of people might get the impression from Bostrom’s email that EA is racist and I don’t want anyone — w
... (read more)

I appreciate this

I agree with various concerns that have been raised about CEA and others in the community caring too much about PR concerns; I think truthfully saying what you believe — carefully and with compassion — is almost always more important than anything else

CEA's current media policy forbids employees from commenting on controversial issues without permission from leaders (including you). Does the view you express here mean you disagree with this policy? At present it seems that you have had the right to shoot from the hip with your personal opinions but ordinary CEA employees do not.

At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of "equality" in this piece is the one very typical of, even to an extent uniquely radical about, EA: the one invoked when Bentham says "each to count for one and none for more than one", when Sidgwick talks about the point of view of the universe, or when Singer discusses equal consideration of equal interests. I would chalk this up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this comment becoming the top-voted one can be accounted for by mass reading comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values that said something about how people count equally regardless of where in space and time they are, or sentient beings count equally regardless of their species, I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this much scandal in a row hurts, but guys, for the love of god just take the L, this behavior is very uncharming.

I think what Habryka is saying is that while EA does have some notion of equality, the reason it sticks so close to mainstream egalitarianism is that humans don't differ much. If there were multiple-species civilizations like those in Orion's Arm, for example, where differences of multiple orders of magnitude in abilities are present, then a lot of stratification and non-egalitarianism would happen solely by the value of freedom/empowerment.

And this poses a real moral dilemma for EA, primarily because of impossibility results around fairness/egalitarianism.

or sentient beings count equally regardless of their species

Who supports this? This is an extremely radical proposal, that I also haven't seen defended anywhere. Of course sentient beings don't count equally regardless of their species, that would imply that if fish turn out to be sentient (which they might) their moral weight would completely outweigh all of humanity right now. Maybe you buy that, but it's definitely extremely far from consensus in EA.

In-general I feel like you just listed 6 different principles, some of which are much more sensible than others. I still agree that indifference to location and time is a pretty core principle, but I don't see the relevance of it to the Bostrom discussion at hand, and so I assumed that it was not the one CEA was referring to. This might be a misunderstanding, but I feel like I don't really have any story where stating that principle is relevant to Bostrom's original statement or apology, given that racism concerns are present in the current day and affect people in the same places as we are. If that is the statement CEA was referring to, then I do withdraw that part of the criticism and replace it with "why are you bringing up a p... (read more)

Equality is always “equality with respect to what”. In one sense giving a beggar a hundred dollars and giving a billionaire a hundred dollars is treating them equally, but only with respect to money. With respect to the important, fundamental things (improvement in wellbeing) the two are very unequal. I take it that the natural reading of “equal” is “equal with respect to what matters”, as otherwise it is trivial to point out some way in which any possible treatment of beings that differ in some respect must be unequal in some way (either you treat the two unequally with respect to money, or with respect to welfare, for instance).

The most radical view of equality of this sort is that for any being for whom what matters can to some extent matter to them, one ought to treat them equally with respect to it; this is, for instance, the view of people like Singer, Bentham, and Sidgwick (yes, including non-human animals, which is my view as well). It is also, if not universally then at least to a greater degree than average, one of the cornerstones of the philosophy and culture of Effective Altruism, and it is the reading implied by the post linked in that part of the statement.

Even if you disa... (read more)

The most radical view of equality of this sort, is that for any being for whom what matters can to some extent matter to them, one ought to treat them equally with respect to it

This feels to me like it is begging the question, so I am not sure I understand this principle. This framing leaves open the whole question of "what determines how much capacity for things mattering to them someone has?". Clearly we agree that different animals have different capacities here. Even if a fish managed to somehow communicate "the only thing I want is fish food", I am going to spend much less money on fulfilling that desire of theirs than I am going to spend on fulfilling an equivalent desire from another human.

Given that you didn't explain that difference, I don't currently understand how to apply this principle that you are talking about practically, since its definition seems to have a hole exactly the shape of the question you purported it would answer.

Vidur Kapur
That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.
Yeah, I think there are a bunch of different ways to answer this question, and active research on it, but I feel like the answer here does indeed depend on empirical details and there is no central guiding principle that we are confident in that gives us one specific answer.  Like, I think the correct defense is to just be straightforward and say "look, I think different people are basically worth the same, since cognitive variance just isn't that high". I just don't think there is a core principle of EA that would prevent someone from believing that people who have a substantially different cognitive makeup would also deserve less or more moral consideration (though the game-theory here also often makes it so that you should still trade with them in a way that evens stuff out, though it's not guaranteed). I personally don't find hedonic utilitarianism very compelling (and I think this is true for a lot of EA), so am not super interested in valence-based approaches to answering this question, though I am still glad about the work  Rethink is doing since I still think it helps me think about how to answer this question in-general. 
Vidur Kapur
Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why it is that people who (as you said in many of your comments) think some capacities (like the capacity to suffer) are morally relevant still, despite this, also describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.
Just to clarify, I am a utilitarian, approximately, just not a hedonic utilitarian.
Vidur Kapur
Bentham’s view was that the ability to suffer means that we ought to give at least some moral weight to a being (their capacity to suffer determining how much weight they are given). Singer’s view, when he was a preference utilitarian, was that we should equally consider the comparable interests of all sentient beings. Every classical utilitarian will give equal weight to one unit of pleasure or one unit of suffering (taken on their own), regardless of the species, gender or race of the being experiencing the pleasure or suffering. This is a pretty mainstream view within EA. If it means (as MacAskill suggests it might, in his latest book) that the total well-being of fish outweighs the total well-being of humanity, then this is not an objectionable conclusion (and to think otherwise would be speciesist, on this view).

It's interesting to read this critique of an EVF/CEA press statement through the lens of EVF/CEA's own fidelity model, which emphasizes the problems and challenges of communicating EA ideas in low-bandwidth channels.

I don't agree with the specific critique here, but would be curious as to how the decision to publish a near-tweet-level public statement fits into the fidelity model.

in addition to all of this, the statement compounds the already existent trust problem EA has. It was already extremely bad  in the aftermath of FTX that people were running to journos to leak them screenshots from private EA governance channels (vide that New Yorker piece). You can't trust people in an organization or culture who all start briefing the press against each other the minute the chips are down! Now we have CEA publicly knifing a long-term colleague and movement founder figure with this unbelievably short and brutal statement, more or less a complete disowning, when really they needed to say nothing at all, or at least nothing right now.  

When your whole movement is founded on the idea of utility maximizing, trust is already impaired because you forever feel that you're only going to be backed for as long as you're perceived useful: virtues such as loyalty and friendship are not really important in the mainstream EA ethical framework. It's already discomfiting enough to feel that EAs might slit your throat in exchange for the lives of a million chickens, but when they appear to metaphorically be quite prepared to slit each other's throats for much less, it's even worse!

Sabs -- I agree. EAs need to learn much better PR crisis management skills, and apply them carefully, soberly, and expertly.

Putting out very short, reactive, panicked statements that publicly disavow key founders of our movement is not a constructive strategy  for defending a movement against hostile outsiders, or promoting trust within the movement, or encouraging ethical self-reflection among movement members.

I've seen this error again, and again, and again, in academia -- when administrators panic about some public blowback about something someone has allegedly done. We should be better than that.

Agree. At a meta-level, I was disappointed by the seemingly panicked and reactive nature of the statement. The statement is bad, and so, it seems, is the process that produced it.

Hm, I don't much agree with this because I think the statement is basically consistent with Bostrom's own apology. (Though it can still be rough to have other people agree with your criticisms of yourself).

Trust does not mean circling the wagons and remaining silent about seriously bad behavior. That kind of "trust" would be toxic to community health because it would privilege the comfort of the leader who made a racist comment over maintaining a safe, healthy community for everyone else.

Being a leader means accepting more scrutiny and criticism of your actions, not getting a pass because you're a "long-term colleague and movement founder figure."

Sounds like you feel pretty strongly about this and feel like this was very poorly communicated. What would you have preferred the statement to be instead?

emre kaplan
I would also like to add to the other comments that EA Intro Fellowship has included a book section titled "All Animals Are Equal" for quite some time.
Another statement that "people are equal" from GWWC.

Here's Bostrom's letter about it (along with the email) for context: https://nickbostrom.com/oldemail.pdf

I have to be honest that I’m disappointed in this message. I’m disappointed not so much that you wrote a message along these lines, but rather in the adoption of perfect PR speak when communicating with the community. I would prefer a much more authentic message that reads like it was written by an actual human (not the PR-speak formula), even if that risks subjecting the EA movement to additional criticism, and I suspect that this will also be more impactful long-term. It is much more important to maintain trust with your community than to worry about what outsiders think, especially since many of our critics will be opposed to us no matter what we do.

I don't understand the importance of CEA saying anything to the community  about this particular matter. We can all read Bostrom's statement and draw our own conclusions; CEA has -- to my knowledge -- no special knowledge about or insight into this situation. The "PR speak" seems designed to ensure that each potentially quotable sentence includes a clear rejection of the racist language in question.

I would be fine if CEA hadn't put out a message at all, but this sets a bad precedent. Robotic PR messages have never been part of the relationship that CEA has had with the community up until now.

I think Jason's point is more that CEA's statement isn't really an attempt to 'communicate with the EA community', so your criticisms don't apply in this case. E.g. this statement could be something for EAs to link to when talking about it with people looking in, who are trying to make an informed judgement (i.e. busy, neutral people lacking information, not committed critics).

Chris Leong
If the message was written for outsiders, then I would encourage them not to post it on the EA forum.

I don't see the value in CEA not posting its press statements to the forum. That just means that people have to regularly check another website if they want to see if a statement has been issued. On the other hand, if you do not want to engage with press statements, it only takes two seconds to read the post title and decide not to engage with content you think is inappropriate for the forum. Given the historical frequency of such comments, that's... thirty seconds a year?

The forum seems as good a place as any?

We are not the target audience here. If the PR-speak is interfering with something CEA needs to say to the community, that's one thing. But if there's no need for a community message at all, I don't see how the PR-speak message is interfering with community communication.

In what way do you feel like CEA's statement is counterproductive to maintaining trust?

Because PR messages are so standardised they effectively just follow a formula. They aren't authentic at all and it raises the question of to what extent other messages are representative of CEA's true beliefs.

Some context:

  1. Bostrom's problematic email was written in 1996.

  2. Bostrom claims to have apologised for the email back in 1996, within 24 hours after sending it. If that's right, then the 2023 message is his second apology.

I am disappointed that the CEA statement does not include these details.

David M
It’s embarrassing for Bostrom to claim this as an apology.
Did I link to an incorrect e-mail or why does this comment have -6 agreement karma? In general it would be helpful if people explained their downvote.

Bostrom's email was horrible, but I think it's unreasonable on CEA's part to make this short statement without mentioning that the email was written 26 years ago, as part of a discussion about offending people.

Bostrom's 2023 letter spends more time defending his 1996 beliefs than anything else. He chose to "get out in front" in a way that raises far more questions than it settles, & I think this statement rightly holds him accountable for that. Bostrom today clearly disavows using the N-word in '97. Does he still believe in some form of white superiority? I hope not! But right now, the ambivalence & vagueness of his 2023 letter is working like a dog whistle to western chauvinists, & I hope he figures that out & denounces them as strongly as this statement does.

I wonder why CEA feels the need to comment on what seems to be a personal matter not relating to CEA programming. While I understand how seductive it can be to criticize someone who has said something reprehensible, especially when brought to light with a clumsily worded apology, I wonder if this really relates to CEA, or whether this would have been a good time to practice the Virtue of Silence.

Hello Peter, I will offer my perspective as a relative outsider who is not formally aligned with EA in any way but finds the general principle of "attempting to do good well" compelling and (e.g.) donates to Give Directly. I found Bostrom's explanation very offputting and am relieved that an EA institution has commented to confirm that racism is not welcome within EA. Given Bostrom's stature within the movement, I would have taken a lack of institutional comment as a tacit condonation and/or determination that it is more valuable to avoid controversy than to ensure that people of colour feel welcome within EA. 

While AI safety has sucked up a lot of attention recently, EA's most famous and most well-funded efforts have been focused in Africa: malaria bednets, deworming, vitamin supplementation, etc. There's a post at least monthly, maybe weekly, about how EA isn't diverse enough, that it's a tragedy, and how they can and should improve that.

I find it difficult to consider that the majority of EA actions could possibly be outweighed by one person's terribly stupid statement almost three decades ago, no matter how high-status that person is within the community. I find it difficult to think that a movement that has spent hundreds of millions of dollars improving the lives of the less fortunate (mostly in Africa, but there was also that $300M experiment in criminal justice reform that would mostly help black people if it worked) has a racism problem, and that their hundreds of millions of dollars of actions don't speak louder than one goofus and his poor apology.

But if I try to put myself in that headspace, where this movement does have a serious racism problem despite all the evidence suggesting the contrary, one paragraph of PR-speak is not going to be the least bit comforting. 

Could you, or any readers, help me understand that mindset better?

Hello Robert, I am stepping back from this forum but as you've replied to me directly I will endeavour to help you understand my viewpoint. I will use italics as you seem to have a high level of belief in their ability to improve written communication.

If the only form that racism took was hatred of black people, then the evidence you present would be persuasive that EA as a movement as a whole does not condone racism.

However: racism also encompasses the belief that certain races are inferior. Belief that black people are stupider than white people, for example, is not incompatible with sending aid to Africa. 

Therefore, I was relieved to see an EA institution explicitly confirm that it does not condone racism.

Hope this helps. 

  EDIT: Did you mean to write "not compatible"? I didn't notice this until after I typed my reply.  I thought you were claiming that sending aid to Africa was incompatible.  If you could clarify, I'll add my wall of text back.
Hello, I did mean to type "not incompatible"- I think we are largely in agreement.
Ah okay, sorry, I thought you meant the opposite.  Thank you!

The community needs to split. Basically, high cognitive decouplers and low decouplers can't live together online anymore. And if the EA brand is going to attack the high-decoupler way of thinking for the sake of making people like britomart happy (which might be the right choice), there needs to be a new community for altruists who are oriented towards working through any argument themselves, no matter what it implies.

Mainly, the EA brand and community are tools for doing good, but currently the way they are functioning no longer works quite right.

Probably because CEA is problematic, and because the recent recruitment drives brought in a lot of people who weren't coming from the rationalist meme space, which naturally leads to culture clashes.

Also maybe things are still okay off the forums.


I think this is very related to CEA.

Influential EA philosophers having used racial slurs and saying they’re unsure about IQ and race is hurtful to black EAs, hurtful to black people outside EA and bad for future diversity in EA.

Although this shouldn’t be the primary concern, it is additionally also very harmful to the reputation of other individuals, organisations and initiatives associated with EA, potentially reducing their impact.

Radical Empath Ismam
It's also pseudoscience.
My gods, I don't understand why people are downvoting this, actually.
A pretty large fraction of engaged EAs believe in HBD. It's quite common the deeper you go into the community.

This list is a good example of the sort of arguments that look persuasive to those already opposed to HBD, but can push people on the fence towards accepting it, so it may be net-negative from your perspective. This is what has happened to me, and I'll elaborate on why – so that you may rethink your approach, if nothing else.

Disclaimer: I am a non-Western person with few traits worth mentioning. I identify with the rationalist tradition as established on LW, feel sympathy for the ideal of effective altruism, respect Bostrom despite some disagreements, have donated to GiveWell charities on EA advice, but I have not participated more directly. Seeing the drama, people expressing disappointment and threatening to leave the community, and the volume of meta-discussion, I feel like clarifying a few details that may be hard to notice from within your current culture, and hopefully helping you mend the fracture that is currently getting filled with the race-iq stuff.

All else being equal, people who hang around such communities prefer consistent models (indeed, utilitarianism itself is a radical solution to inconsistencies in other ethical theories). This discourse is suffused with intelle... (read more)

Anon Rationalist
84% of surveyed intelligence researchers believe the gaps are at least partially genetic.[1] This statement is not just an appeal to authority, it is also inaccurate.

1. ^ https://www.sciencedirect.com/science/article/abs/pii/S0160289619301886
Bob Jacobs
Why did you reply to MissionCriticalBit when it was I who made that claim? I almost didn't see it. Also, pointing out that the academics who study this stuff for a living don't believe in it is not fallacious, but rather a very useful piece of information. Anyway, I wanted to give the HBDers another shot, so I downloaded the survey (can we all agree that paywalls for publicly funded research are bullshit?) and I have two important things to note: genetic gaps are not equivalent to racial gaps, and the survey itself admits it is unrepresentative. It was an internet survey, and it had a high nonresponse rate, with respondents who are different from the field as a whole, which heavily biases the results in favor of your position.

EDIT: To respond to MissionCriticalBit below. My comment was about the sentence "HBD is not generally accepted in academia". The reason I can't show you a survey that shows you that is the same reason I can't show you a survey that zoologists don't believe in unicorns: they don't engage with it, so there is no survey available (even the bad survey by Anon Rationalist is not about HBD). But I don't want to make an assertion without citing anything, so what is the best available option? How about an example of a professional biologist with no conflict of interest using publicly available data to create a well-received paper, seen more than 12,000 times, that clearly rejects HBD. MissionCriticalBit just makes assertions without citing anything. The reason I don't respond and refused to continue to read his reply is not because I am afraid, but because he hadn't cited anything, didn't engage with my writings, and outright insulted me. The reason I respond in an edit instead of a reply is because the HBDers have removed half a dozen of my latest comments from the frontpage while taking away a big chunk of my voting-power on this forum. I'm not inclined to give them another way to take away my voting-power, but I don't want to silence myself.
First, that depends on what you mean by "this stuff"; Bird does not study intelligence or behavioral genetics for a living, he's a plant geneticist. Skewed though the survey may be, it's probably more representative than a single non-expert. Second, why do you suppose the non-response rate is so high and so skewed? And might it have something in common with your own refusal to continue our conversation on the merits of your list? I suspect that professionals who prefer not to respond, rather than respond in the negative about genetic contributions to the IQ gap, are driven by contradictory impulses: they believe that the evidence doesn't allow for a confident "100% environmental" response and, being scientists, have a problem with outright lying, but they also don't want to give the impression of supporting socially unapproved beliefs or "validating" the very inquiry into this topic. So they'd rather wash their hands of the whole issue, and allow their less squeamish colleagues to give the impression of moderate consensus in favor of genetic contribution.
Differential response within the survey is again as bad. The response rate for the survey as a whole was about 20% (265 of 1345), and below 8% (102) for every individual question on which data was published across three papers (on international differences, the Flynn effect, and controversial issues). Respondents attributed the heritability of U.S. black-white differences in IQ 47% on average to genetic factors. On similar questions about cross-national differences, respondents on average attributed 20% of cognitive differences to genes. On the U.S. question, there were 86 responses, and on the others, there were between 46 and 64 responses. Steve Sailer's blog was rated highest for accuracy in reporting on intelligence research—by far, not even in the ballpark of sources that got more ratings (those sources being exactly every mainstream English-language publication that was asked about). It was rated by 26 respondents. The underlying data isn't available, but this is all consistent with the (known) existence of a contingent of ISIR conference attendees who are likely to follow Sailer's blog and share strong, idiosyncratic views on specifically U.S. racial differences in intelligence. The survey is not a credible indicator of expert consensus. (More cynically, this contingent has a history of going to lengths to make their work appear more mainstream than it is. Overrepresenting them was a predictable outcome of distributing this survey. Heiner Rindermann, the first author on these papers, can hardly have failed to consider that. Of course, what you make of that may hinge on how legitimate you think their work is to begin with. Presumably they would argue that the mainstream goes to lengths to make their work seem fringe.)
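The response-rate figures quoted above can be checked with simple arithmetic. A quick sketch, using only the numbers given in this thread (illustrative, not new data):

```python
# Response-rate arithmetic for the survey figures quoted above
# (numbers taken from the comment; illustrative only).
invited = 1345          # researchers the survey was distributed to
responded = 265         # total respondents
max_per_question = 102  # most responses any published question received

overall_rate = responded / invited
per_question_rate = max_per_question / invited

print(f"overall response rate: {overall_rate:.1%}")        # ~19.7%, i.e. about 20%
print(f"best per-question rate: {per_question_rate:.1%}")  # ~7.6%, i.e. below 8%
```

The arithmetic bears out the comment's characterization: roughly a fifth of those contacted responded at all, and no published question drew responses from even a tenth of them.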
Bob Jacobs
Even if you think my reasons failed, why would that push you towards accepting it? HBD is a hypothesis for how the world works, so the burden of proof is on HBD, and giving a bad reason not to believe in HBD is not evidence for HBD. To give a very clear example: if someone says 'I believe in unicorns', and I say 'no, unicorns do not exist because 1+1=3', that would fail to be evidence for unicorns not existing, but that does not mean it counts as evidence for unicorns existing. Thank you for donating to GiveWell!

Unimportant nitpick that has always bothered me: LW has an empiricist tradition; the term 'rationalist' is a misnomer. I wouldn't say other ethical theories are internally inconsistent. They might have other attributes or conclusions that you think are bad, but the major ethical theories don't have any inconsistencies as far as I can tell. Do you have an example? On the other hand, I do think Eliezer has some inconsistencies in his philosophy, although it's hard to tell because he's quite vague, doesn't always use philosophical terminology (in fact he is very dismissive of the field as a whole), and has a tendency to reinvent the wheel instead (e.g. his 'Requiredism' is what philosophers would call compatibilism). Now usually I wouldn't mind it that much, but since philosophy requires such precision of language if you don't want to talk past each other, I do think this doesn't work in his favor.

I would like to point out that my comment was not about Bostrom. I mean, even if you don't know which way the arrow of causality points, that's still an unnecessarily big risk. It's not particularly altruistic to make statements that have that big a chance of helping racists. You could also spend your time... not doing that. Also, even if you reject arguments from historical precedent, there is still the entire field of linguistic racism. Just because people won't publicly state it doesn't mean it doesn't influence their thinking. Take for example the stereotype of

HBD is a hypothesis for how the world works, so the burden of proof is on HBD and giving a bad reason not to believe in HBD is not evidence for HBD.

This logic is only applicable to contrived scenarios where there is no prior knowledge at all – but you need some worldly knowledge to understand what both these hypotheses are about.
Crucially, there is the zero-sum nature of public debate. People deliberately publicizing reasons to not believe some politically laden hypothesis are not random sources of data found via unbiased search: they are expected to cherrypick damning weaknesses. They are also communicating standards of the intellectual tradition that stands by the opposing hypothesis. A rational layman starts with equal uncertainty about truth values of competing hypotheses, but learning that one side makes use of arguments that are blatantly unconvincing on grounds of mundane common sense can be taken as provisional evidence against their thesis even before increasing object-level certainty: poor epistemology is evidence against ability to discover truth, and low-quality cherrypicked arguments point to a comprehensively weak case. Again, consider beliefs generally ... (read more)

I don't want to engage with your arguments. I strongly think you're wrong, but that seems much less relevant to what I can contribute (or generally want to engage with) than the fact that you've posted that comment and people have upvoted it. I don't understand how this can happen on the EA Forum. Why would anyone believing in this and wanting to do good promote this?

If anyone here does believe in ideas that have caused a great amount of harm and will cause more if spread, they should not spread them. If what you're arguing about is not the specific arguments (which you think might be better and should be improved in such-and-such a way) but the views themselves, don't! If you want to do good, why would you ever, in our world, spread these views? If the impact of spreading these views is more tragedies, more suffering, and more people dying early, please consider these views an infohazard and don't even talk about them unless you're absolutely sure they won't spread to people who'll become more intolerant or more violent.

If you, as a rationalist, came up with a Basilisk that you thought actually works, thinking that it truly works should be a really strong reason not to post it or talk about it, ever. The feeling of successfully persuading people (or even just engaging in interesting arguments), as good as it might be, isn't worth a single tragedy that could result from spreading this kind of idea. Please think about the impact of your words. If people persuaded by what you say might do harm, don't.

One day, if the kindest of rationalists do solve alignment and enough time passes for humanity to become educated and caring, the AI will tell us what the truth is without a chance of it doing any harm. If you're right, you'll be able to say, "I was right all along, and all these woke people were not, and my epistemology was awesome". Before then, please, if anyone might believe you, don't tell them what you consider to be the truth.

I strongly think you're wrong

But can you be trusted to actually think that, given what you say about utility of public admission of opinions in question? For an external observer, it's a coin toss. And the same for the entirety of your reasoning. As an aside, I'd be terrified of a person who can willfully come to believe – or go through the motions of believing – what he or she believes to be morally prudent but epistemically wrong. Who knows what else can get embedded in one's mind in this manner.

I don't understand how this can happen on the EA Forum. Why would anyone believing in this and wanting to do good promote this?

Well, consider that, as it tends to happen in debates, people on the other side may be as perfectly sure about you being misguided and promoting harmful beliefs as you are about them; and that your proud obliviousness with regard to their rationale doesn't do your attempt at persuasion any more good than your unwillingness to debate the object level does.
Consider, further, that your entire model of this problem space really could be wrong and founded on entirely dishonest indoctrination, both about the scholarly object level and about social dynamics and... (read more)

Adding on to this with regards to IQ in particular, I recommend this article and its follow-up by academic intelligence researchers debunking misconceptions about their field. To sum up some of their points:

  • IQ test scores are significantly affected by socio-economic and other environmental factors, to the point where one study found adoption from a poor family to a rich one causes a 12-18 point jump in IQ score.
  • The average IQ of the whole populace jumped 18 points in 50 years due to the Flynn effect.
  • The gap in test scores between races has been dropping for decades, including a 5 point drop in the IQ test score gap over 30 years. 
  • With the above points in mind, the remaining IQ test score gap of 9.5 points does not seem particularly large, and does not seem to require any genetic explanation. 
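The bullet points above can be put on one scale: IQ scores are conventionally normed to a standard deviation of 15 points, so the quoted environmental effects can be compared directly to the remaining gap. A rough sketch, using only the figures from the comment:

```python
# Putting the quoted figures on a common scale: IQ tests are normed
# to a standard deviation of 15 points, so effects can be compared as
# fractions of an SD. All figures are the ones quoted in the comment.
SD = 15
adoption_low, adoption_high = 12, 18  # IQ jump from poor-to-rich adoption
flynn_gain = 18                       # population-wide gain over ~50 years
remaining_gap = 9.5                   # current test-score gap

print(f"remaining gap:   {remaining_gap / SD:.2f} SD")   # 0.63 SD
print(f"adoption effect: {adoption_low / SD:.2f}-{adoption_high / SD:.2f} SD")  # 0.80-1.20 SD
print(f"Flynn effect:    {flynn_gain / SD:.2f} SD")      # 1.20 SD
```

On this scale, the documented environmental effects (adoption, Flynn) are each larger than the remaining gap, which is the comment's point.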

I don’t think one of the claims, that “Twin studies are flawed in methodology. Twins, even identical twins, simply do not have exactly the same DNA”, is true. As I see, it is not supported by the link and the study.

The difference of 5.2 out of 6 billion letters that identical twins have on average does not make their DNA distinct enough to automatically invalidate the correlations between being identical twins (or not) and having traits in common more often.

One of the people involved in the study is cited: “Such genomic differences between identical twins are still very rare. I doubt these differences will have appreciable contribution to phenotypic [or observable] differences in twin studies.”
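For scale, the average difference quoted above amounts to a vanishingly small fraction of the genome. A back-of-the-envelope check on the numbers in the comment:

```python
# Back-of-the-envelope scale check on the figures quoted above:
# identical twins differ on ~5.2 letters out of ~6 billion, on average.
differing_letters = 5.2
genome_letters = 6e9

fraction = differing_letters / genome_letters
print(f"fraction of genome differing: {fraction:.1e}")  # ~8.7e-10
```

That is, under one part in a billion, which is consistent with the cited researcher's view that such differences contribute negligibly to phenotypic differences in twin studies.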

Twin studies being something we should be able to rely on seems like a part of the current scientific view, and some EA decisions might take such studies into consideration.

I think it’s important not to compromise our intellectual integrity even when we debunk foundations for awful and obviously wrong beliefs that are responsible for so much unfairness and suffering that exist in our world and for so many deaths.

I think if the community uses words that are persuasive b... (read more)

Bob Jacobs
These are two separate links for two separate claims: 'Twin studies are flawed in methodology.' and 'Twins, even identical twins, simply do not have exactly the same DNA.', both of which are true. The confidence in the proposed HBD conclusions is simply not warranted by the evidence. Many twin studies assume that identical twins share 100% of their DNA (which is false) and that they share the exact same environment (which is also false). This leads to underestimating environmental factors and underestimating non-genetic biological factors. Furthermore, separated twin pairs, identical or fraternal, are generally separated by adoption. This makes them unrepresentative of twins as a whole, and there can be issues of undetected behaviors in the case of behaviors that many people keep secret, presently or earlier in their lives.
Oops! Sorry, I only noticed the second link; before writing my comment, I had looked up the first myself. I'm not a biologist and will probably defer to any biologist entering this thread and commenting on the twin studies.

Twins (mostly, as the linked study shows) do not have exactly the same DNA. But that doesn't seem to be relevant. The relevant assumption is that there's almost no difference between the DNAs of "identical" twins and a large difference between the DNAs of non-identical reared-together twins, which is true despite a couple of random mutations per 6 billion letters.

The next two linked articles are paywalled. Is there somewhere to read them? The third is a review of a short book, available after a sign-up, and it says that "some studies on twins are good, some bad", and the author feels, but "doesn't actually know", that the reviewed one is good. The reviewed book performed a study on twins and noticed there isn't much of a difference between the correlation of the similarity of many personality traits and whether people are identical twins, and concluded that, since you'd expect to see a difference if the traits had different degrees of heritability, many personality traits are results of the environment. How is this evidence that twin studies are flawed and shouldn't be used? If that's a correct study, it's just evidence that personality traits are mostly formed by environment (which is something I have believed for most of my life), but, e.g., why would this be relevant for a discussion of whether some disease has a genetic component, when a twin study shows that it does?

It's important to carefully compare the numbers; but obviously there are things that identical twins have in common more often than non-identical twins, because those things are heritable to a larger or lesser degree, like hair color or height. Of course, any study involves some underrepresentation of humanity. But if your study is
Bob Jacobs
It's fine. Studies don't just use identical twins but twins in general. You are equating my two claims and attacking claims that I haven't even made; I never talked about "whether or not some disease has a genetic component to it, when a twin study shows that there is". I made a claim that twins, even identical twins, don't share exactly the same DNA and provided a link to an article that gave more information, and I made a second claim that twin studies are flawed and provided a link to an article with more information about that. All this stuff about it not being able to help us find diseases, or twin studies "shouldn't be used", are claims I never made. EDIT: For the record, my studies included some biostatistics, but it isn't my strongest field and I'm mostly leaning on stuff my professors have explained; I will also probably defer to a biologist/biostatistician.
Kaspar Brandner
As a different perspective to your list, I'd like to reference this thread of 25 threads, which provides extensive research in the opposite direction. Like you, I do not claim that this is all correct (I'm not an expert on this topic), but the evidence is certainly much less clear-cut than one might think from just reading the pieces you provided.
Bob Jacobs
Given my priors and respect for my leisure time, I'm not going to read those giant threads. I won't downvote you since I haven't actually read them, but let me ask you a related question: Do you think that out of the billions of possible correlations in the social sciences, the best use of our finite time on earth is to study this one? The incredibly flawed measure of 'low IQ' is correlated with the arbitrary, socially-contingent Western category of 'black people' (almost certainly because of environmental factors). But there are millions of things correlated with the Western category of 'black people' and there are millions of things correlated with 'IQ'. Furthermore, there are so many more variables to study that are less flawed and less arbitrary; why should we focus on the one correlation out of billions that racist people, who want to make the world worse for our fellow human beings, want us to talk about?

I agree with basically everything you say here, but I also think it's a bit unfair to point this out in the context of Kaspar Brandner sharing a lot of links after you did the same thing first (sharing a lot of links). :)

In any case, I think

not discussing the issue >> discussing the issue >> discussing the issue with flawed claims. 

(And I think we're all in trouble as a society because, unfortunately, people disagree about what the flawed claims are and we get sucked into the discussion kind of against our will because flawed claims can feel triggering.) 

Bob Jacobs
Yeah, that's fair. I mean, I did give summaries, but it's still fair. If I could go back in time I would've posted that comment first, and I would've tried to explain my emotions/reasoning process to the HBDers on this forum more. I would have said:

I get the allure of taboo studies. I want to be a moral philosopher, but moral philosophers are very smart and they don't get a lot of funding. So even if I work very very hard, I probably won't get to be one. I need a way to stand out, to make people notice I would be a good researcher... oh, what's this? Everyone is avoiding these taboo studies. So researching them both makes me stand out and makes me a comparatively high-quality researcher, since almost no one is competing with me. It's competition/capitalism/Moloch that is driving me towards these subjects, and even knowing this doesn't make the allure go away. But ultimately I care about people more, and there are often good reasons these taboos were put in place.

Perhaps the readers of this comment feel the same way. I empathize. The smart thing to have done was unpack these mental prioritizations immediately instead of reaching for my research. But I didn't think of it, probably because I wanted to show that I could be a good researcher. Moloch got me. Please don't spend your limited time on this subject; please don't let Moloch get you too.

Writing on such topics does the opposite of favoring your academic career. It is rather a form of career suicide, since you will likely get cancelled and ostracized. The topic is extremely taboo, as we can see with the reaction to Bostrom's old email. He didn't even support hereditarianism about IQ gaps, he just said they exist, which even environmentalists accept!

Bob Jacobs
And with good reason: out of the billions of possible correlations to talk about, this is one of the very few that will help racists. True, but most people can't cut it in academia, and if one fancies themselves a researcher, this path allows you to keep doing that without a lot of intellectual competition. Plus you can still get funding from shady organizations like the Pioneer Fund (I call them shady because they funded the distribution of 'Erbkrank', a Nazi propaganda film about eugenics, as one of their first projects, and because they have ties to white supremacist groups, so their impartiality is suspect).
Kaspar Brandner
Strong disagree here. See the quote of the paper I posted below.

I don't fault you for not reading it all, but it is a good resource for looking up specific topics. (I have summarized a few of the points here.) And I don't think IQ is a flawed measure, since it is an important predictor of many measures of life success. Average national IQ is also fairly strongly correlated with measures of national welfare such as per capita GDP.

To be clear, I'm not saying studying this question is more important than anything else, just that research on it should not be suppressed, whatever the truth may be. This point was perhaps best put in the conclusion of this great paper on the topic:

The strategy – advocated by some influential scholars – of stigmatizing, suppressing, or downplaying evidence in favor of hereditarianism about group differences has been tried and has not worked. Research on this topic has been done and the results are widely available. Major psychology journals continue to publish work that deals openly with group differences (though researchers still debate about the relative contribution of genes and environment, and the question has not been settled definitively). Any measures that would be effective in preventing further work, such

... (read more)
IMO, I agree with the idea that EA shouldn't invest anything in studying this, though I took a different path:

1. I think IQ differences are real and they matter.
2. However, I think the conclusion that HBD and far-righters/neo-nazis want us to reach is pretty incorrect, given massive issues with both the evidence bases and motivated reasoning/privileging the hypothesis.
Comment erased due to formatting error; apologies. The correct version is here.
Could the people who are heavily downvoting this chain explain why? Is it because people disagree with the claims Mohammad/Sharmake/sapphire are making, or because they think it is violating EA forum norms?

I downvoted it (weakly) because my impression is that "it's pseudoscience" is not a nuanced statement on a topic where there's bad science all over the place on both sides.  Apart from the awfully racially-biased beliefs of many early scientists/geneticists, there has been a lot of pseudoscience from far-right sources on this more recently – that's important to mention – but so has there been pseudoscience in Soviet Russia (Lysenkoism) that goes in the other ideological direction, and we're currently undergoing a wave of science denial where it's controversial in some circles to believe that there are any psychological differences whatsoever between men and women.  Inheritance stuff also seems notoriously difficult to pin down, because there's a sense in which everything is "partly environmental" (if you put babies on the moon, they all end up dead) and you cannot learn much from simple correlation studies (there could still be environmental influences in there).  I think a lot of the argument against genetic influences is about pointing out these limitations of the research and then concluding that, because of the limitations, it must be environmental only. But that'... (read more)

Radical Empath Ismam
You and I have very opposite readings of the Sam Harris vs Ezra Klein fiasco. I'd like to hear what you think about Klein's point that environmental factors may explain >100% of the black-white IQ gap, and yet this is alien in the race-realism discourse.  https://forum.effectivealtruism.org/posts/ALzE9JixLLEexTKSq/cea-statement-on-nick-bostrom-s-email?commentId=YN85c93DD3EiNLFfo There is so much evidence at this point against race realism/HBD. There is no possibility of it "could be false" without evoking some grand conspiracy. Can we never call it pseudoscience? My goal is to fight for scientific truth, not some anti-racist agenda. Check out Bob Jacobs's great resources.

That's a cool point by Klein.

There is so much evidence at this point against race realism/ HBD. There is no possibility of it "could be false" without evoking some grand conspiracy. Can we never call it pseudoscience?

If the consensus is strong enough then yes, we should call it pseudoscience. 

I read the Wikipedia article you linked on the topic, and my feeling was that there's some remaining disagreement in many places, but overall it does read as though the science supports environmental factors much more than genetic ones. I'm not 100% on how much I should trust it given political pressure and some yellow flags in the article, like their uncritical mention of the Southern Poverty Law Center, which has behaved awfully and at times tried to cancel people like Sam Harris or Maajid Nawaz, who are "clearly good people" in my book. (And it still has Charles Murray on its list of extremists, putting him in the same category as neo-Nazis, which is awful and immoral.)

I already looked at the resources by Bob Jacobs and thought some of them seemed a bit condescending in the sense that I'd expect people who feel confident enough to downvote or upvote claims on this topic would alrea... (read more)

I did not downvote any comments, but I am confused by some of the claims.

How is it pseudoscience to say that one is unsure about a topic? How is it hurtful to black people to say this? I do not mean any offense with these questions.

I do understand how it is hurtful to use slurs and I think Bostrom was wrong to do so in the original email, even in context.

Kaspar Brandner
Whether or to which extent it is hurtful is indeed unclear.
Kaspar Brandner
Where is the evidence for this claim? The extensive research on this topic suggests it is not pseudoscience at all.
Radical Empath Ismam

Laying aside whether CEA commenting on this was a virtuous action (I think it was virtuous here): People draw adverse inferences when there is a matter of significant public interest involving a leading figure in a social movement, and no appropriate person or entity from that movement issues a statement. Whether or not you think people should do that, they do, and the harm to public reputation is the same whether or not the inference is justified.

On the other side of the balance, it's not clear what the harm of speaking here is.

I'd suggest clarifying what you refer to with 'his words' here, as I've seen people criticize both his writing from 26 years ago and his apology letter as racist, while I assume you only refer to his writing from 26 years ago?

Ah, your title says that your statement is about Bostrom's mail, and Bostrom's apology is not a mail but a letter apologizing for his mail from 26 years ago. Might still be worth clarifying, I might not be the only one who's initially confused.

The statement is almost certainly intentionally ambiguous. That's kind of how a lot of PR works: say things directionally and let people read in their preferred details.

Guy Raveh

This might be less than perfectly charitable, but my subjective impression of the past year or so of EA work is something like:
~Neartermists focusing on global poverty: "Look at our efforts towards eradicating tuberculosis! While you're here, don't forget to take a look at what the Lead Exposure Elimination Project has been doing."
~Neartermists focusing on animal welfare: "Here are the specific policy changes we've advocated for that will vastly reduce the amount of suffering necessary for eggs. In terms of more speculative things, we think shrimp might have moral value? Huge implications if true."
~Longtermists focusing on existential risk: "so incidentally here's some racist emails of ours"
"also we stole billions of dollars"
"actually there were two separate theft incidents"
"also we haven't actually done anything about existential risk. you can't hold that against us though because our plans that didn't work still had positive EV"

I recognize that there are many longtermists and existential-risk-oriented people who are making genuine efforts to solve important problems, and I don't want to discount that. But I also think that it's important to make sure that as effective altruists we are actually doing things that make the world better, and separately, it (uncharitably) feels like some longtermists are doing unethical things and then dragging the rest of the movement down with them.

Here's a VERY uncharitable idea (that I hope will not be removed, because it could be true, and if so might be useful for EAs to think about):

Others have pointed to the rationalist transplant versus EA native divide.  I can't help but feel that this is a big part of the issue we're seeing here.

I would guess that the average "EA native" is motivated primarily by their desire to do good.   They might have strong emotions regarding human happiness and suffering, which might bias them against a letter using prima facie hurtful language.  They are also probably a high decoupler and value stuff like epistemic integrity - after all, EA breaks from intuitive morality a lot - but their first impulses are to consider consequences and  goodness.

I would guess that the average "rationalist transplant" is motivated primarily by their love of epistemic integrity and the like. They might have a bias in favor of violating social norms, which might bias them in favor of a letter using hurtful language. They probably also value social welfare (they wouldn't be here if they didn't) but their first impulses favor finding a norm-breaking truth. It may even be a somew…

This looks like retconning of history. EA and rationalism go way back, and the entire premise of EA is that determining what makes more good through "rationalist", or more precisely, consequentialist lens is moral. There is no conflict of principles.

The quality of discussion on the value of tolerating Bostrom's (or anyone else's) opinions on race & IQ is incredibly low, and the discussion is informed by emotion rather than even trivial consequentialist analysis. The failure to approach this issue analytically is a failure both by Rationalist and by old-school EA standards.

I'm arguing not for a "conflict of principles" but a conflict of impulses/biases.  Anecdotally, I see a bias for believing that the truth is probably norm-violative in rationalist communities.  I worry that this biases some people such that their analysis fails to be sufficiently consequentialist, as you describe.
I'm not aware of the two separate theft incidents (or forgot about one), can you tell me more about them?
1. SBF 2. Avraham Eisenberg (with the Mango Markets exploit, which he has now been arrested for)
Thanks; what has Avraham done that makes him longtermist? Did he / does he identify as longtermist?

I really don't like this post.

Factually, I think it removes critical context and is sorely lacking in nuance.

Crucial context that was missing:

  • It was sent 25+ years ago when Bostrom was a student
  • It was sent as part of a conversation about offensive communication styles
  • Bostrom apologised for it at the time within 24 hours
  • Bostrom apologised again for the email now

Beyond the lack of nuance, this feels like it's optimised for PR management and not honest communication or representation of your fully considered beliefs. I find that disappointing. I greatly preferred Habiba's statement on this issue despite it largely expressing similar sentiments because it did feel like honest communication/representation of her beliefs (I've strongly downvoted this post and strongly upvoted that one, despite largely disagreeing with the sentiment expressed).

And I don't really like the obsession with PR management in the community. I think it's bad for epistemic integrity, and it's bad for expected impact of the effective altruism community on a brighter world.

Emotionally, this made me feel disappointed and a bit bitter.

I am very confused. Did someone dig this up and then he wrote that in a scramble, or did he proactively come out with this unilaterally? If it's the latter, we should be applauding his forthrightness in apologizing in his current letter and intentionally letting us know, while naturally condemning the words he wrote on the mailing list as a student 26 years ago. This post currently does not distinguish between these stances; I consider the apology to be a really important social technology if we want to be humans in a functioning community of other humans rather than subject to the vast impersonal forces of ostracism.

First sentence of the apology says "I have caught wind that somebody has been digging through the archives of the Extropians listserv with a view towards finding embarrassing materials to disseminate about people." So it seems like he is trying to get ahead of a public disclosure by someone else.

My read is that Bostrom had reason to believe that the email would come out either way, and then he elected to get out in front of the probable blowback.

As evidence, here is Émile Torres indicating that they were planning to write something about the email.

That said, it's not entirely clear whether Bostrom knew the email specifically was going to be written about or knew that someone was poking around in the extropian mailing list and then guessed that the email would come out as a result. 

In any case, I think it's unlikely that he posted his apology for the email unprovoked. 

I think this would be true except his apology imo is not a good one. He gets some points for apologizing proactively, but I don't give him many, because the apology doesn't come across as sincere to me (but rather defensive).

I initially strongly upvoted this post but have since retracted my vote. I think the statement is vague as to which "words" it "condemns". It would be better for CEA to take a firm, concrete stance against scientific racism ("SR") specifically. As other people on the forum have pointed out, the promotion of SR in the community is harmful for many reasons: SR ideas have directly harmed people of color, discussion of SR deters people of color from participating in the movement, it makes the movement look bad, and it distracts from the movement's actual prior…

Yellow (Daryl)
Clarification: is scientific racism something like "there is a scientific paper relating to race and IQ, [discussion on implication]"?
"Scientific racism" is admittedly a bit of a misnomer because "scientific" racism is not scientific.
Yellow (Daryl)
+60% on scientific devaluing of poc (true or false) deterring poc from participating. Not sure if overall would be good though. The Clearer Thinking podcast w/ Magnus Carlson says that allowing misinformation to be voiced may be effective at reducing misinformation. Ex. can point out why the view may fall short.
Yeah I think that's a good point. An interesting perspective is that freedom of speech includes the right to express controversial ideas as well as the right to listen to them. Members of the EA community have the right to learn about ideas that may be classified as scientific racism and decide for themselves whether they are true or false. (And of course, my use of the term "scientific racism" presumes that these ideas are pseudoscience, which other people on the forum have disputed.) However, I really think that the EA Forum is not the right place for these discussions for the reasons I gave above. At least they should be limited to the "Personal Blog" section.

I appreciate this quick and clear statement from CEA.

Someone did the right thing today. Thank you.

You should make public the details of your early involvement with Alameda and stop trying to cancel other people until you've addressed your own past mistakes and wrongdoings.

I'm troubled by this statement. It completely fails to take Bostrom's apology into account in any form. Moreover, accusing Bostrom of racism in this manner could legitimately be viewed as borderline slanderous. An accusation of racism can destroy a person's career, career prospects, and reputation. In effect it can be a social death sentence. An organisation which wants to uphold the values of consequentialism should be much more careful in assessing the consequences of its public actions for the affected individual.

That's not my reading of the statement (it says "unacceptable racist language" and then condemns the manner of discussion rather than beliefs held).

It completely fails to take Bostrom's apology into account in any form.

Yeah, but that can be okay if you think it's higher priority to make a public statement about the contents of the email.

I initially didn't think such a statement was necessary because disagreeing with the email seemed like a no-brainer, so I didn't think anyone would have any uncertainty about the views of an organization like CEA. But apparently some (very few) people are not only defending the apology – which I've done myself – but arguing that the original email was ~fine(?). I don't agree with such reactions (and Bostrom doesn't agree either; I see him as a sincere person who wouldn't apologize like that if he didn't think he messed up), but they show that the public statement serves a purpose beyond just virtue-signalling: making sure there are no misunderstandings. (Note that it's possible to condemn someone's actions from long ago as "definitely not okay" without saying that the person is awful or evil!)

Kaspar Brandner
"To make sure there are no misunderstandings" it is arguably a fatal strategy not to acknowledge his apologies and not to mention that the "recklessly flawed and reprehensible words" stem from a very old email. As it is written, the statement simply sounds like it is calling him out for racism, which is an extremely serious accusation.

I think the natural move is to create a chapter within CEA that actively supports Black people. Honestly, I have been to EA conferences, and I can tell there is still work to be done on the diversity front, including women's representation. Overall I love CEA and want to see it become more diverse. One place to start might be supporting emerging markets like Africa, not only through donations but through programs. For example, 80,000 Hours is tailored for someone in the Global North; we need to rethink what 80K would look like if we want to reduce unemployment rates in Southern Africa.

I thank you for responding quickly and mitigating PR damage. We already got a big PR hit, we don't need another one so soon.

To the commenters who criticize it: I feel like people are underrating PR concerns right now.
