
This is a linkpost for https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried#xj4y7vzkg

Try non-paywalled link here.

More damning allegations. A few quotes:

At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.


Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.

Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.


Even leading EAs have doubts about the shift toward AI. Larissa Hesketh-Rowe, chief operating officer at Leverage Research and the former CEO of the Centre for Effective Altruism, says she was never clear how someone could tell their work was making AI safer. When high-status people in the community said AI risk was a vital research area, others deferred, she says. “No one thinks it explicitly, but you’ll be drawn to agree with the people who, if you agree with them, you’ll be in the cool kids group,” she says. “If you didn’t get it, you weren’t smart enough, or you weren’t good enough.” Hesketh-Rowe, who left her job in 2019, has since become disillusioned with EA and believes the community is engaged in a kind of herd mentality.


In extreme pockets of the rationality community, AI researchers believed their apocalypse-related stress was contributing to psychotic breaks. MIRI employee Jessica Taylor had a job that sometimes involved “imagining extreme AI torture scenarios,” as she described it in a post on LessWrong—the worst possible suffering AI might be able to inflict on people. At work, she says, she and a small team of researchers believed “we might make God, but we might mess up and destroy everything.” In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post. Although she acknowledged taking psychedelics for therapeutic reasons, she also attributed the delusions to her job’s blurring of nightmare scenarios and real life. “In an ordinary patient, having fantasies about being the devil is considered megalomania,” she wrote. “Here the idea naturally followed from my day-to-day social environment and was central to my psychotic breakdown.”


Taylor’s experience wasn’t an isolated incident. It encapsulates the cultural motifs of some rationalists, who often gathered around MIRI or CFAR employees, lived together, and obsessively pushed the edges of social norms, truth and even conscious thought. They referred to outsiders as normies and NPCs, or non-player characters, as in the tertiary townsfolk in a video game who have only a couple things to say and don’t feature in the plot. At house parties, they spent time “debugging” each other, engaging in a confrontational style of interrogation that would supposedly yield more rational thoughts. Sometimes, to probe further, they experimented with psychedelics and tried “jailbreaking” their minds, to crack open their consciousness and make them more influential, or “agentic.” Several people in Taylor’s sphere had similar psychotic episodes. One died by suicide in 2018 and another in 2021.


Within the group, there was an unspoken sense of being the chosen people smart enough to see the truth and save the world, of being “cosmically significant,” says Qiaochu Yuan, a former rationalist.


Yuan started hanging out with the rationalists in 2013 as a math Ph.D. candidate at the University of California at Berkeley. Once he started sincerely entertaining the idea that AI could wipe out humanity in 20 years, he dropped out of school, abandoned the idea of retirement planning, and drifted away from old friends who weren’t dedicating their every waking moment to averting global annihilation. “You can really manipulate people into doing all sorts of crazy stuff if you can convince them that this is how you can help prevent the end of the world,” he says. “Once you get into that frame, it really distorts your ability to care about anything else.”


That inability to care was most apparent when it came to the alleged mistreatment of women in the community, as opportunists used the prospect of impending doom to excuse vile acts of abuse. Within the subculture of rationalists, EAs and AI safety researchers, sexual harassment and abuse are distressingly common, according to interviews with eight women at all levels of the community. Many young, ambitious women described a similar trajectory: They were initially drawn in by the ideas, then became immersed in the social scene. Often that meant attending parties at EA or rationalist group houses or getting added to jargon-filled Facebook Messenger chat groups with hundreds of like-minded people.


The eight women say casual misogyny threaded through the scene. On the low end, Bryk, the rationalist-adjacent writer, says a prominent rationalist once told her condescendingly that she was a “5-year-old in a hot 20-year-old’s body.” Relationships with much older men were common, as was polyamory. Neither is inherently harmful, but several women say those norms became tools to help influential older men get more partners. Keerthana Gopalakrishnan, an AI researcher at Google Brain in her late 20s, attended EA meetups where she was hit on by partnered men who lectured her on how monogamy was outdated and nonmonogamy more evolved. “If you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men” who are sometimes in positions of influence or are directly funding the movement, she wrote on an EA forum about her experiences. Her post was strongly downvoted, and she eventually removed it.


The community’s guiding precepts could be used to justify this kind of behavior. Many within it argued that rationality led to superior conclusions about the world and rendered the moral codes of NPCs obsolete. Sonia Joseph, the woman who moved to the Bay Area to pursue a career in AI, was encouraged when she was 22 to have dinner with a 40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. Joseph says he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men and that such relationships were a noble way of transferring knowledge to a younger generation. Then, she says, he followed her home and insisted on staying over. She says he slept on the floor of her living room and that she felt unsafe until he left in the morning.


On the extreme end, five women, some of whom spoke on condition of anonymity because they fear retribution, say men in the community committed sexual assault or misconduct against them. In the aftermath, they say, they often had to deal with professional repercussions along with the emotional and social ones. The social scene overlapped heavily with the AI industry in the Bay Area, including founders, executives, investors and researchers. Women who reported sexual abuse, either to the police or community mediators, say they were branded as trouble and ostracized while the men were protected.


In 2018 two people accused Brent Dill, a rationalist who volunteered and worked for CFAR, of abusing them while they were in relationships with him. They were both 19, and he was about twice their age. Both partners said he used drugs and emotional manipulation to pressure them into extreme BDSM scenarios that went far beyond their comfort level. In response to the allegations, a CFAR committee circulated a summary of an investigation it conducted into earlier claims against Dill, which largely exculpated him. “He is aligned with CFAR’s goals and strategy and should be seen as an ally,” the committee wrote, calling him “an important community hub and driver” who “embodies a rare kind of agency and a sense of heroic responsibility.” (After an outcry, CFAR apologized for its “terribly inadequate” response, disbanded the committee and banned Dill from its events. Dill didn’t respond to requests for comment.)


Rochelle Shen, a startup founder who used to run a rationalist-adjacent group house, heard the same justification from a woman in the community who mediated a sexual misconduct allegation. The mediator repeatedly told Shen to keep the possible repercussions for the man in mind. “You don’t want to ruin his career,” Shen recalls her saying. “You want to think about the consequences for the community.”


One woman in the community, who asked not to be identified for fear of reprisals, says she was sexually abused by a prominent AI researcher. After she confronted him, she says, she had job offers rescinded and conference speaking gigs canceled and was disinvited from AI events. She says others in the community told her allegations of misconduct harmed the advancement of AI safety, and one person suggested an agentic option would be to kill herself.


For some of the women who allege abuse within the community, the most devastating part is the disillusionment. Angela Pang, a 28-year-old who got to know rationalists through posts on Quora, remembers the joy she felt when she discovered a community that thought about the world the same way she did. She’d been experimenting with a vegan diet to reduce animal suffering, and she quickly connected with effective altruism’s ideas about optimization. She says she was assaulted by someone in the community who at first acknowledged having done wrong but later denied it. That backpedaling left her feeling doubly violated. “Everyone believed me, but them believing it wasn’t enough,” she says. “You need people who care a lot about abuse.” Pang grew up in a violent household; she says she once witnessed an incident of domestic violence involving her family in the grocery store. Onlookers stared but continued their shopping. This, she says, felt much the same.


The paper clip maximizer, as it’s called, is a potent meme about the pitfalls of maniacal fixation.

Every AI safety researcher knows about the paper clip maximizer. Few seem to grasp the ways this subculture is mimicking that tunnel vision. As AI becomes more powerful, the stakes will only feel higher to those obsessed with their self-assigned quest to keep it under rein. The collateral damage that’s already occurred won’t matter. They’ll be thinking only of their own kind of paper clip: saving the world.


Following CatGoddess, I'm going to share more detail on parts of the article that seemed misleading, or left out important context. 

Caveat: I'm not an active member of the in-person EA community or the Bay scene. If there's hot gossip circulating, it probably didn't circulate to me. But I read a lot.

This is a long comment, and my last comment was a long comment, because I've been driving myself crazy trying to figure this stuff out. If the community I (digitally) hang out in is full of bad people and their enablers, I want to find a different community! 

But the level of evidence presented in Bloomberg and TIME makes it hard to understand what's actually going on. I'm bothered enough by the weirdness of the epistemic environment that it drove me to stop lurking  :-/

I name Michael Vassar here, even though his name wasn't mentioned in the article. Someone asked me to remove that name the last time I did this, and I complied. But now that I'm seeing the same things repeated in multiple places and used to make misleading points, I no longer think it makes sense to hide info about serial abusers who have been kicked out of the movement, especially when that info is easy to...

I hope a fair read of the subtext of your comment is: available evidence points towards community health concerns being dealt with properly, and there's not much more the community could do. I want to try to steelman an argument in response to this:

I am not very well connected in "hubs" like London and the Bay Area, but despite a lack of on-the-ground information, I have found examples of poor conduct that go largely unpunished.

Take the example of Kat Woods and Emerson Spartz. Allegations of toxic and abusive behaviour towards employees were made 4 months ago (months after being reported to CEA). Despite Kat Woods denying these concerns and attempting to dismiss and discredit those who attest to their abusive behaviour, both Kat Woods and Emerson Spartz continue to: post on the EA Forum and mostly get upvotes; employ EAs; be listed on the EA opportunity board; and control $100,000s in funding. As far as I can tell, Nonlinear-incubated projects (which they largely control) also continue to be largely supported by the community.

I've encountered further evidence of similar levels of misconduct by different actors, largely c...

Take the example of Kat Woods and Emerson Spartz. Allegations of toxic and abusive behaviour towards employees were made 4 months ago (months after being reported to CEA). Despite Kat Woods denying these concerns and attempting to dismiss and discredit those who attest to their abusive behaviour, both Kat Woods and Emerson Spartz continue to: post on the EA Forum and mostly get upvotes; employ EAs; be listed on the EA opportunity board; and control $100,000s in funding. As far as I can tell, Nonlinear-incubated projects (which they largely control) also continue to be largely supported by the community.

I know of multiple people who are currently investigating this. I expect appropriate action to be taken, though it's not super clear to me yet how to make that happen (like, there is no governing body that could currently force Nonlinear to do anything, but I think there will be a lot of pressure if the accusations turn out to be correct).

I've encountered further evidence of similar levels of misconduct by different actors, largely continuing without impediment (I'm currently working on resolving these). And (if I understand correctly) Oliver Habryka, who knows both rational

...

I also think many commenters are missing a likely iceberg effect here. The base rate of survivors reporting sexual assault to any kind of authority or external watchdog is low. Thus, an assumption that the journalists at Time and Bloomberg identified all, most, or even a sizable fraction of survivors is not warranted on available information.

We would expect the journalists to significantly underidentify potential cases because:

  • Some survivors choose to tell no one, only professional supporters like therapists, or only people they can trust to keep the info confidential. Journalists will almost never find these survivors even with tons of resources.

  • Some survivors could probably be identified by a more extensive journalistic investigation, but journalism isn't a cash cow anymore. The news org has to balance the value of additional investigation that it can internalize against the cost of a deeper investigation. (This also explains why news articles likely contain a much higher percentage of publicly known stories than the true percentage of all stories that are publicly known.)

There are also many reasons a survivor known to a journalist may decide not to agree to be a source, like:...

[Edit: If you want a visual analogy about discovery, but one that doesn't overweight any one perspective, might I suggest the parable of the blind men and the elephant? https://en.wikipedia.org/wiki/Blind_men_and_an_elephant ]

First of all, it's a bit patronizing that you imply that people who aren't updating and handwringing on the Bloomberg piece haven't considered the iceberg effect and uncounted victims. The iceberg effect has been mentioned in discussion many times before, and to any of us who care about sexual misconduct it was already an obvious possibility.

Second, the opinions of those of us who don't have problems with the EA community any worse than anywhere else (in fact, some of us think it is better than other places!) also matter. Frankly, I'm tired of current positive reports from women being downgraded and salacious reports (even if very old) being given all the publicity. So it's a bit of a tangent, but I'll say it here: I'm a woman and I enjoy the EA community and support how gender-related experiences are handled when they are reported. [I've been all the way down my side of the iceberg and I have not experienced anything in EA that implies that things are worse here than ...

I needed to walk away from this thread due to some unrelated stressful drama at work, which seems to have resolved earlier this week. So I took today off to recover from it. :) I wanted to return to this in part to point out what I think are some potential cruxes, since I expect some of the same cruxes will continue to come up in further discussions of these topics down the road.

1. I think we may have different assumptions or beliefs about the credibility of internal data-gathering versus independent data-gathering. Although the review into the handling of the Owen situation is being conducted by an outside firm, I don't believe the broader inquiry you linked is.

I generally don't update significantly on internal reporting by an organization which has an incentive to paint a rosy picture of things. That isn't anti-CEA animus; I feel the same way about religious groups, professional sports leagues, and any number of other organizations/movements.

In contrast, an outside professional firm would bring much more credibility to assessing the situation. If you want to get as close to ground truth as possible, you don't want someone with an incentive to sell more newspapers or someone hosti...

Ivy Mazzola
Thanks for coming back. Hm, in my mind, yes: if all you are doing is handling immediate reports due to an acute issue (like the acute issue at your church), then a non-EA contractor makes sense. However, if you want things like ongoing data collection and incident collection for ~the rest of time, it does have to be collected within or near the company/CEA, enough that they and the surveyor can work together. It seems bad [and risky] to keep the other company on payroll forever and never actually be the owner of the data about your own community.

Additionally, I don't trust non-EAs to build a survey that really gives respondents the proper choices to select. I think data-collection infrastructure such as a survey should be designed by someone who understands the "shape" and "many facets" of the EA community very well, so as to not miss things, because it is quite a varied community. In my mind, you need optional questions about work settings, social settings, conference settings, courses, workshops, and more. Each of these requires an understanding of what can go wrong in that particular setting, and you will also want to include the correlations you are looking for throughout, which people can select. So I actually think, ironically, that data-collecting infrastructure and analysis by non-EAs will have more gaps, and therefore more bias (unintended or intended), than when designed by EA data analysts and survey experts.

That brings me to the middle option (somewhere between CEA and a non-EA contractor), which is what I understand CEA's CH team to be doing based on talks/emails with Catherine: commissioning independent data collection and analysis from Rethink Priorities. RP has a skilled/educated professional survey arm. It is not part of Effective Ventures (CEA's parent organisation), so it is still an external investigation and bias should be minimized. If I understand correctly, CEA/CH team will give over their current data to RP [whoops n
Catherine Low
Thanks Ivy and Jason for your thoughts on internal and external investigations of problems of sexual misconduct in EA. There are a few different investigation-type things going on at the moment, and some of them aren't fully scoped or planned, so it is a bit confusing. To clarify, this is where we are at right now:

1. Catherine, Anu and Lukasz from the Community Health team are investigating the experiences of women and gender minorities in EA.
   1. Analysing existing data sources (in progress - Rethink Priorities has kindly given us some (as yet) unpublished data from the 2022 Survey to help with this step)
   2. We are considering gathering and analysing more data about the experiences of women and gender minorities in EA, and have talked with Rethink Priorities about whether and how they could help. Nothing has been decided yet. To clarify a statement in Ivy's comment though, we're not planning to hand over any information we have (e.g. survey data from EAG(x)s or information about sexual misconduct cases raised to our team) to Rethink Priorities as part of this process.
2. The EV board has commissioned an external investigation by an independent law firm into Owen's behaviour and the Community Health team's response.
3. The Community Health team are doing our own internal review into our handling of the complaints about Owen and our overall processes for dealing with complaints and concerns. More information about this here.
Any competent outside firm would gather input from stakeholders before releasing a survey. But I hear the broader concern, and note that some sort of internal-external hybrid is possible. The minimal level of outside involvement, to me, would involve serving as a data guardian, data pre-processor, and auditor-of-sorts. This is related to the two reasons I think outside involvement is important: external credibility, and respondent assurance.

As far as external credibility, I think media reports like this have the capacity to do significant harm to EA's objectives. Longtermist EA remains, on the whole, more talent-constrained and influence-constrained than funding-constrained. The adverse effect on talent joining EA could be considerable. Social influence is underrated; for example, technically solving AI safety might not actually accomplish much without the ability to socially pressure corporations to adopt effective (but profit-reducing) safety methods or convince governments to compel them to do so.

When the next article comes out down the road, here's what I think EA would be best served by being able to say, if possible:

(A) According to a study overseen by a respected independent investigator, the EA community's rate of sexual misconduct is at most no greater than the base rate.

(B) We have best-in-class systems in place for preventing sexual misconduct and supporting survivors, designed in connection with outside experts. We recognize that sexual misconduct does occur, and we have robust systems for responding to reports and taking the steps we can to protect the community. There is independent oversight of the response system.

(C) Unfortunately, there isn't that much we can do about problematic individuals who run in EA-adjacent circles but are unaffiliated with institutional EA.

(A) isn't externally credible without some independent organization vouching for the analysis in some fashion. In my view, (B) requires at least some degree of external oversight to b
Ivy Mazzola
I realized I missed the bit where you talk about how we might not need such intense data to respond now. Yes, I agree with that. I personally expect that most community builders/leaders are already brainstorming ideas, and even implementing them, to make their spaces better for women. I also expect that most EA men will be much more careful moving forward to avoid saying or doing things which can cause discomfort for women. We will see what comes of it. Actually I'm working on a piece about actions individuals can take now... maybe I will DM it to ya with no pressure at all o.o

[Deleting the earlier part of my comment because it involved an anonymized allegation of misconduct I made, that upon reflection, I feel uncomfortable making public.]

I also want to state, in response to Ivy's comment, that I am a woman in EA who has been demoralized by my experience of casual sexism within it. I've not experienced sexual harassment. But the way the Bloomberg piece describes how EA/rats talk about women feels very familiar to me (as someone who interacts only with EAs and not rats) - e.g., "5-year-old in a hot 20-year-old's body," or introducing a woman as "ratbait."

Hi, to reply to your last paragraph: I am sorry you have been on the receiving end of such comments. You say they are not "sexual harassment," but I want to help provide clarity and a path to resolution by suggesting that, depending on context, comments you have received may indeed be sexual harassment. Sorry I don't have US/CA law on hand to share, but I'd guess it would be similar to the UK law on harassment (it's very short and worth reading!). I recommend readers pay close attention to sections 1.b. and 4. Also, intention to harass is usually not a relevant factor[1]

While I recommend people try to keep in mind cultural differences as discussed here rather than always assuming bad intent (I've been on the receiving end of some ribbing from actual friends of mine for being "hot" in EA, which I dish back in different ways), it looks to me like you are already being very careful about what you report (as most women are). So I'd like to also encourage you and other women to look closely and consider whether comments you receive might actually be harassment, intended or even on a technicality. If the comment feels demeaning, including assuming too much familiarity, plea...

Thank you for sharing this. I'm angry that you've had these experiences, and I'm grateful you were open to talking about them, even in response to a comment that some found unfriendly. A lot of the frustration in my initial comment comes from not knowing how to react to things like the Bloomberg article as a mostly-lurker in the community. I don't know whether I'm going to end up chatting or collaborating with someone who makes sexist comments or sends cruel messages to accusers; I wish I knew whose work I could safely share, promote, etc., without contributing to the influence of people who make the community worse. While a comment like yours doesn't name names, it at least helps me get a better handle on how much energy I want to put into engaging with these communities.*

*This is a process that takes the form of thousands of small adjustments; no one comment is decisive, but they all help me figure things out.

For example, it seems like a long time since I've heard of Jacy Reese getting EA funding.

Jacy's org received funding from the SFF in 2022, if you consider that EA funding. More weakly, his organization is also advertised on the 80,000 hours job board. He also recently tried to seek funding and recruit on the forum (until he deleted his post, plausibly due to pushback), and thus still benefits from EA institutions and infrastructure even if that doesn't look like direct funding.

Forgive me for using an anonymous account, but I'm in the process of applying for SFF funding and I don't want to risk any uncomfortable consequences. However, I can't stay silent any longer – it's painfully obvious that the SFF has little regard for combating sexual harassment. The fact that Jacy was awarded funding is a blatant example, but what's more concerning is that Michael Vassar, a known EA antagonist, still appears to be involved with the SFF to this day.

It's alarming how Vassar uses his grant-making powers to build alliances in the EA community. He initiated a grant to Peter Eckersley's organization AOI after Peter's death. Peter was strongly against Vassar. Vassar seemed pleased that Peter's successor Deger Turan doesn't have the same moral compass.

That is very concerning; I've now read several separate accounts of his behavior toward others (friends, devotees, partners, strangers), and together they painted a terrible picture. I plan on sending a concerned letter to SFF, though I'm not sure I expect it to do much. Others should consider doing the same.

But... this comment is false as far as I can tell? Like, I didn't express myself ideally in my comment below (which I think deserves the downvotes and the lower visibility), and it's an understandable misunderstanding to think that Michael still somehow has some kind of speculation grant budget, but at least to me it really looks like he is no longer involved.

Just to excerpt the relevant sections from the thread below: 

There is a largish number of people who are involved as speculation grantors who have a pretty small unilateral budget to make grants with, and I had in my mind totally not cached them as "being involved in SFF" (and do also think that's a kind of inaccurate summary, though one that I think is reasonable to have from the outside).


I also have access to an internal spreadsheet of speculation grantors and Michael is not listed on that, which does make me more sympathetic to my misunderstanding since I have looked at this spreadsheet a bunch more recently, and so was quite confident that he wasn't listed on that one. My current best model is that he is no longer a speculation grantor (maybe because his budget went to zero), though I find him being listed on the

...

Hey, can I just check a thing? Do people really think that someone asking other people out [Edit: okay, thinking this is all he did has problems, because it requires taking his apology at face value despite how seriously CEA took the claims] means that they should never be allowed to return to impactful work and request (and receive) funding? So treat this comment as a poll.

Case details: (and agreevote directions below those)

[EDIT: Apologies, I wrote this hastily and it might be that he never did as much as I first implied, but then others feel he might have done worse. I recommend you make your own conclusions about Jacy by (1) reading pseudonym's comment below this one and (2) visiting Jacy's apology yourself: https://forum.effectivealtruism.org/posts/8XdAvioKZjAnzbogf/apology ]

[The rest has been edited:]

I don't know that much about the case, but IIRC Jacy was apologizing for asking some women out on dates, clumsily. He did this online on FB Messenger. I think before that apology he [was alleged to have done some inappropriate things] in the animal advocacy community somehow related to him, but had sworn not to do so again (a promise it looks like he probably kept). Anyway, he was apolo...

I may chime back about the object level question around the case soon, but I do want to flag in the interim that this comment that suggests "Jacy had asked some women out on dates" is likely to be a misleading interpretation of the actual events. See also this thread, and this comment.

My view is that whether someone receives funding depends on the kind of work they are doing, as well as the level of risk they present to the community. On replaceability - he is pivoting to AI safety work. Would you say his difficult-to-replace nature in the animal space, to whatever extent this is true, translates to his AI safety work? His latest post was about establishing "a network of DM research collaborators and advisors". Is he difficult to replace in this context also?

I think it's fine for him to independently do research, but whether he should be associated with the EA community, receive EA funding for his work, or be in a position where he can be exposed to more victims (recruiting researchers) is less clear to me and depends on the details of what happened.

There has been no additional communication from CEA or Jacy acknowledging what actually happened, or why we can trust that Jacy has ta... (read more)

Ivy Mazzola
A few notes: (epistemic status: thinking out loud)

Replaceability of his Digital Minds entrepreneurship: I'll note that I think it is always hard to replace someone who will act as founder of something. Getting something off the ground (going from zero to one) is something very few people are willing, interested, or think to do. Good entrepreneurs are a scarce breed, or at least they are when you want more projects and have a funding overhang (we still do for stuff like this). And whenever you narrow it down to any domain, they become even scarcer, and I think if you narrow it down to a domain that is new (like digital minds research), they become even scarcer than that. If there were job listings for founders for ideas which are already exciting to funders, the zero-to-one problem could be more easily solved, e.g. "Funding has been secured for forming a network of digital mind research collaborators and advisors. Seeking suitable founder, apply here." Maybe Jacy and a dozen other people would apply, and then we would know just how replaceable he is. But there are not, and no one is doing this (this requires its own entrepreneur). Also, there are certain implications of this I'd find very disrespectful to the people who had the key idea in the first place... it's a bit dehumanizing, like everyone is just a cog in the machine.

I think I'll note that usually grantmakers are going to measure this. And maybe we should trust them to, idk, if they are EA anyway, at least until further details are given. Like I think it's fine to ask for details but not fine to assume that the grantmakers made an egregious or community-harming decision (now I think I have to drop this thing, although it's a tangent and I realize it complicates things and goes against my above poll intent).

We still don't know enough: I agree this is weird. If I were him, given the intense community backlash, I'd have next done a detailed writeup of everything I'd done with clarification of every re
Sorry, are you referring to Jacy here? What key idea did he come up with?

Agree that it is not CEA's job to punish Jacy for his actions at Brown, but this was largely not what happened.

I agree, and lack of this after 3 years should be a reason to update against its existence, or against the extent Jacy actually cares about this.

One quick question: do you think, if the sexual harassment allegations are true, the EA community is more or less at risk if Jacy is an independent researcher with no interaction with other EA researchers, or if he's actively trying to form a research network, or if he takes a community building role?

I think in that set of claims, the one doing the most work is "establish that risk is pretty low", which in Jacy's case is an open question.

To respond to the other parts: EA women are not cannon fodder for non-EA women. The community health team's job is to protect the EA community and should not be based on the extent to which adjacent communities are adequately managing the situation. The EA community does not exist primarily to internalize the negative externalities of society, and members of the EA community should not be expected to sacrifice themselves like this.

Do you have nonpublic info on Jacy? Can you be more clear on the kinds of situations you are imagining? How many of these do you think would result in a ban from CEA events? I guess my view here is that in most situations that would lead to a ban from CEA events, the word "victim" is probably appropriate.

I think again this points to the issues around lack of clarity here, as some may be indexing the level of severity based on other things that CEA have banned people for, which are much worse than "Jacy had asked some women out on dates", while you are basing this off other information or taking Jacy's apology at face value etc., which doesn't seem super well justified.
Ivy Mazzola
I mean, if we are talking about entrepreneurship replaceability: if it was his idea to form a network of digital mind research collaborators and advisors, and he wanted to lead it and was capable of doing so, it could be seen as disrespectful to push him off the idea of the project and find someone to replace him on an essentially identical project.

Okay, fair. I'm updating that I am misremembering reading what I thought I did, but if I ever find what I'm thinking of I'll add it.

Fair. I mean, I kind of wonder if he expected people to get over it (which, if it really was minor, he probably would expect), and was recently blindsided by the response to his March post. Maybe we will see a writeup soon (but probably not, you are right).

I guess I consider it the wrong question? Like obviously the answer is the former has less risk to the EA community, but I don't think minimizing risk is the only thing that matters? The degree of risk is the most important thing? Above a certain threshold of risk I would just want to do the most impactful one. We don't know the risk.

I agree, and I did specifically include that clause for that reason, FWIW. I will go back and italicize it to make it clear. I believe that I really did consider this also negates the rest of that paragraph, such as thinking of EA women as cannon fodder etc.

No. I can say that a relevant thing I was imagining (among other scenarios) was something like repeated asking out after saying no (which is technically harassment) or making sexual or attraction-based comments (also harassment, depending on badness of the comment and whether the context and relationship implies it is disrespectful or degrading), and a response from CEA something like "he clearly has a tendency to make women uncomfortable and this seems net negative for the events, so why honestly even allow him to come and possibly make another mistake, even if he is learning? Let's just ban him and be done with it." However I acknowledge this is unfoun
Ivy Mazzola
Sorry I should have written 3 years. I think I was rounding down due to lack of clarity on months but clarifying months makes it look like it should have been 3 years or even slightly over 3 years. Sorry about that
Ivy Mazzola
Good points about AI, I just deleted the animal section
Ariel Simnegar 🔸
I agree with you, Ivy. I think it's deeply unfortunate that some paint Jacy with the same brush as predators like Michael Vassar. Is it wrong to ask someone out on Facebook Messenger? I don't mean to diminish how unsolicited romantic advances can make people uncomfortable, but it seems difficult to draw a coherent line between Jacy's actions and any time anyone asks anyone else out. Jacy's public/influential role complicates his actions, but Jacy's frank apology and years-long lack of recidivism speak to the good faith of his effort to re-earn the community's trust. I don't think it's wrong for Jacy to receive funding from the community today.
Is that what happened? It's never been made public, and the accusations against him in college were much more serious.

Do you have details of his college expulsion and accusations? I honestly couldn't find them. After going through the whole discussion of his apology, I could only find his own letter about it from 10 years prior saying it was an incorrect expulsion, and also someone linked some other cases of Brown doing a poor job on sexual misconduct cases: IIRC, other courts deemed that the Brown committee mishandled cases of students accused of sexual misconduct. It appears in one case (not necessarily Jacy's, but I've seen this happen myself elsewhere, so I'd actually bet more likely than not that if it was allowed to happen one time it happened in Jacy's case too) that the students had banded together and written letters of unsubstantiated rumors to the Brown committee (e.g., assuming what they'd heard in the gossip mill to be true and then trying to make sure the committee "knew" the unsubstantiated rumors, perhaps stating them as fact without even relaying how they had heard it), and the Brown committee actually did use the letters as evidence in the university tribunal. The actual US court said that Brown, in doing this, went against due process. To reiterate, that was another Brown... (read more)

First, I want to broadly agree that distant information is less valuable, and no one should be judged by their college behavior forever. I learned about the Brown accusation (with some additional information, which I lack permission to pass on, and also don't know the source well enough to pass on) in 2016 and did nothing beyond talking to the person and passing it on to Julia*, specifically because I didn't want a few bad choices while young to haunt someone forever.

[*It's been a while, I can't remember whether I told Julia or encouraged the other person to do so, but she got told one way or another]

The reason I think the college accusations are relevant is that, while I tentatively agree he shouldn't face more consequences for the college accusations, they definitely speak to Ariel's claim there's been no recidivism, and in general they shift my probability distribution over what he was apologizing for.

I don't necessarily think these concerns should have prevented the grant, or that SFF has an obligation to explain to me why they gave the grant. I wouldn't have made that grant, for lots of reasons, but that's fine, and I generally think the EA community acts too entitled ... (read more)

Ivy Mazzola
I think that most of your comment is reasonable, so I'm only going to respond to the second-to-last paragraph. Because that is the bit that critiques my comment, my response is going to sound defensive. But I agree with everything else, and I also think what went on with my original comment leads back into what I see as the actual crux, so it's worth me saying what's on my mind:

I long ago edited the original comment where I wrote that. I didn't change that particular wording because I wrote the original on mobile (which I deeply regretted and am now incredibly averse to), so I didn't have fancy strikethrough edit features, even when I tried on PC (I didn't realize it worked like that). Without strikethrough ability, I thought it would be epistemically dishonest to just edit that sentence. Instead I promptly, right after that sentence, told people to make their conclusions elsewhere, in a way that I feel clearly tells readers to take that part with a grain of salt. All in all I edited that comment ~5 times. I don't have the spoons to re-edit again, given I think it's fine.

More importantly, the transparency of info is obviously a problem if someone like me, who usually tries to be pretty airtight on EA Forum things, had to edit so much going back and forth from "here's a thing" to "maybe he did worse" to "maybe he did less" to "maybe he did worse" again. That's not okay. And now I feel like I'm getting punished for trying to do what no other outsider of the case was willing to try to do (that I saw)... figure out the ground truth [and what it means for EA behavior] publicly.

Honestly, trying to figure out what happened regarding Jacy was a heckin nightmare, with people coming out of left field about it after each correction I tried to make, including over DM (again, not publicly), and giving multiple comments to comb through on multiple other posts and with their own attached threads. It's good people chimed in sharing the existence of different pieces of discussio
I'm sorry. It sounds like you've taken a lot of flak for that comment, and having had that same experience I know it's miserable. FWIW, I was never responding to or criticizing your comment, only Ariel's. Probably I saw it in the front-page feed without checking the larger context, or I only skimmed your comment and didn't notice he was repeating a claim. Plausibly I'm culpable for not noticing it was a repeated claim rather than original. Maybe the way comments are displayed on the front page with minimal context contributed.
Ariel Simnegar 🔸
It's all I'm aware of, to the best of my knowledge. I'm unfamiliar with the accusations against him in college, and could retract my above comment if given sufficient evidence.
Thanks for this context! My understanding is that SFF regranting is pretty separate from the CEA/Open Phil network, since I don't hear much about them on the Forum or in other EA spaces. But this is still a useful update about SFF* and a corrective to something I said that could have been misleading.

*I don't know anything about Jacy outside that post, and I should acknowledge that it's possible he's apologized and reformed to whatever degree was appropriate and should at some point make his way back to good standing in EA — but I haven't seen positive evidence of that either, so he seemed like a fair example to use.
Just to be clear, $50k really isn't a lot of money and the SFF is not "EA funding" in the sense that many recommenders who participate in the process have little connection to the EA community and that any recommender can unilaterally make grants to organizations. I wouldn't update much on this. The SFF process funds a lot of stuff that is quite controversial, but this does not reflect nor convert into broad community support (and I personally think this is better than the decision that both the LTFF and Open Phil face where a grant is also seen as an endorsement of the org, which frequently muddles people just trying to think about marginal cost-effectiveness).

$50k really isn't a lot of money and the SFF is not "EA funding"

What's an acceptable amount of money, and what's an unacceptable amount of money?

I didn't make a claim personally that SFF was EA funding, which is why I said "if", though I think many people would consider SFF a funder that was EA-aligned. They have an EA forum page, their description is "Our goal is to bring financial support to organizations working to improve humanity’s long-term prospects for survival and flourishing." which sounds pretty EA, and they were included in a List of EA funding opportunities, with no pushback about their inclusion as a "funder that is part of the EA community" (OTOH, they are not included in the airtable by Effective Thesis)

I personally think this is better than the decision that both the LTFF and Open Phil face where a grant is also seen as an endorsement of the org

I don't really understand what you mean by a process that gives an organization $ that isn't seen as endorsement of the organization. Can you clarify what you mean here?

Nathan Young
I agree with the above comment, but a minor correction (I think):

* I think Jacy is no longer involved with the org officially, but unofficially he is very involved.

I am unsure what ought to happen here, but agree the status quo is misleading.

He's currently listed on the website as co-founder, and he was the one who shared the post that included the call for funding and job application on the EA Forum. His bio says "@SentienceInst researching AI safety".

What gives you the impression that he is no longer officially involved?

Nathan Young
Oh yeah I'm wrong.

Michael Vassar is still active in the EA community as a grant giver at SFF. He recently initiated a grant to the new president of AOI after the death of the founder, Peter Eckersley, which casts a bad mark on Eckersley's successor.

Peter Eckersley had a strong moral compass and stayed far away from Vassar. The new president, Deger Turan, was either clueless or careless when he sold out Peter's legacy. 

Hey! Angela Pang here. I am working on a unified statement with the person who I am referring to in that excerpt, Robert Cordwell: https://www.facebook.com/angela.pang.1337/posts/pfbid034KYHRVRkcqqMaUemV2BHAjEeti98FFqcBeRsPNfHKxdNjRASTt1dDqZehMp1mjxKl?comment_id=1604215313376156&notif_id=1678378897534407&notif_t=feed_comment&ref=notif

I actually wanted to say that I felt like Julia Wise handled my case extremely respectfully, though there still wasn't enough structural support to reach a solution I would've been satisfied with (Julia Wise wanted to ban him, whereas I was hoping for more options, but it seems like other reports came in a few weeks ago so he's banned now), but that can change.

I consider most of EA quite respectful, though I caught sexual harassment at least once at EAG (which I don't think was reported, since the woman in question called it out quickly and the man apologized). CEA handles reports well, though I've only reported Robert.

My complaint lies with the rationalist culture and certain parts of the rationalist community much, much more than CEA, since the lack of moderation leads to missing stairs feeling empowered. Overall, I think CEA did a decent... (read more)

The article paints a disturbing picture of systematic abuse. Sections of your comment come off as incredibly trivializing and are further evidence of the dismissive attitude toward these serious problems that the community prefers to downplay and ignore.

I should clarify that "particularly bad" should be "unusually bad", and by "unusually" I mean "unusual by the standards of human behavior in other professional/intellectual communities".

If someone writes an article about the murder epidemic in New York City, and someone else points out that the NYC murder rate is not at all unusual by U.S. standards, and that murder tends to be common throughout human society, is that a trivializing thing to say?

You can believe a lot of things at once:

  • Murder is terrible
  • 433 murders is 433 too many
  • Murderers should be removed from society for a long time
  • NYC should strongly consider taking further action aimed at preventing murder
  • The NYC murder rate doesn't point to NYC being more dangerous than other cities
  • People in NYC shouldn't feel especially unsafe
  • People who want to get involved in theater should still consider moving to NYC
  • Some of the actions NYC could take to try preventing murder would likely be helpful
  • Other actions would likely be unhelpful on net, either failing to prevent murder or causing other serious problems
  • Focusing on the murder rate as a top-priority issue would have both good and bad impacts for NYC, and there may be other problems th
... (read more)
I am encouraging you to try to exercise your empathetic muscles and understand the difference for a sexual assault victim to read a top comment that categorically condemns such actions, insisting that we need to do better as a community, compared to one that says “humans are gonna human”. Your analogy does not share the same relevant features as in this case. We’re not a city, we’re a community. One that should try to be a welcoming space that takes one’s concerns seriously instead of dismissing them because they might not be above the base rate.
Ivy Mazzola
Hey, fenneko literally included:

* Murder is terrible
* 433 murders is 433 too many
* Murderers should be removed from society for a long time
* NYC should strongly consider taking further action aimed at preventing murder

And they closed with "I am..trying to learn, and I think there are plenty of things EA and rationality could do better on this front."

FYI that your comment reads as unfair, or bad-faith, or possibly even disingenuous. It also reads to me as patronizing to women, expecting that they can't follow the nuance of the discussion and see empathy in fenneko's comment. It is clearly there. We can still expect that all our members engage in good faith. I'm a sexual assault victim (and of other types of sexual misconduct) and have reported EA-adjacent men who have done troubling things in the community, and I thought fenneko's comment (this one and the top-level one) was great.

NOTE: I also believe you have misused the term sexual assault. We aren't just talking about sexual assault; we are talking about all types of sexual harassment and misconduct. For example, I think only one instance mentioned in Time or Bloomberg was assault. I really am hoping people can keep terminology straight, because the classifications exist for reasons.

Furthermore, if a community wants to command billions of dollars and exert major influence on one of the world's most important industries, it is both totally predictable and appropriate that society will scrutinize that community more rigorously and hold it to a higher standard than a group of "NPCs".

edit: typo

Ivy Mazzola
That's reasonable. But:

1. Let's make sure we are addressing problems that still exist in the EA community (if you would like to discuss the rationality community, you can join their discussion on LW).
2. If we decide problems do still exist, let's make sure we are attaching them to the right community, even microcosm of community, as both the EA and rationality communities are huge and those not involved don't deserve to be punished for things they handled well or would have handled well.
3. If issues remain, let's be fair in who we give the responsibility to handle those remaining issues, e.g. whatever bad actors remain and whatever issues still exist in whatever community microcosm.
FYI: IIRC/IIUC, Bryk is the one who made up the thing about my having a harem of submissive mathematicians whom I called my "math pets". This is false; people sufficiently connected within the community will know that it is false, not least since it'd be widely known and I wouldn't have denied it if it were true. I am not sure what to do about it, simply, if someone's own epistemic location is such that my statements there are unknowable to them as being true.

It is known to me that Bryk has gone on repeating the "math pets" allegation, including to journalists, long after it should've been clear to her that it was not true. My own understanding of proper procedure subsequent to this would be to treat Bryk as somebody having made a known false allegation, especially since I don't know of any corresponding later-verified/known-true allegations that she was first to bring forth; and that this implies we ought to cross everything alleged by Bryk off any such lists, unless there are independent witnesses for it, in which case we can consider those witnesses and also reconsider the future degree to which Bryk ought to (not) be considered as an evidential source. (If I am recalling correctly that Jax started the "math pets" thing.)

I think this article paints a fairly misleading picture, in a way that's difficult for me to not construe as deliberate. 

It doesn't provide dates for most of the incidents it describes, despite the fact that many of them happened many years ago, and thereby seems to imply that all the bad stuff brought up is ongoing. To my knowledge, no MIRI researcher has had a psychotic break in ~a decade. Brent Dill is banned from entering the group house I live in. I was told by a friend that Michael Vassar (the person who followed Sonia Joseph home and slept on her floor despite the fact that it made her uncomfortable, also an alleged perpetrator of sexual assault) is barred from Slate Star Codex meetups.

The article strongly reads to me as if it's saying that these things aren't the case, that the various transgressors didn't face any repercussions and remained esteemed members of the community.

 Obviously it's bad that people were assaulted, harassed, and abused at all, regardless of how long ago it happened. It's probably good for people to know that these things happened. But the article seems to assume that all these things are still happening, and it seems to be drawing conclusions on ... (read more)

It's unsurprising that the people who were willing to allow Bloomberg to print their names or identifying information about the wrongdoers were associated with situations where the community has rallied against the wrongdoer. It's also unsurprising that those who were met with criticism, retaliation, and other efforts to protect the wrongdoer were not willing to allow publication of potentially identifying information. Therefore, I don't think it's warranted to draw inferences about community response in the cases without identifying information based on the response in cases with that information.

It would be helpful if the article mentioned both the status of the wrongdoer at the time of the incident and their current status in the relevant community.

This comment is gold. I believe there is an iceberg effect here-- EA cannot measure the number of times an accuser attempted to say something but got shut down or retaliated against by the community.

Personally, I would like to see discussion shift toward how to create a safe environment for all genders, and how to respond to accusers appropriately.

One book that I recommend is Citadels of Pride, which goes into institutional abuse in Hollywood and the arts scene. The patterns are similar: lack of clear boundaries between work/life, men in positions of power commanding capital, high costs to say anything, lack of feedback mechanisms in reporting. I am thankful that CEA is upping its efforts here; however, I also see that the amorphous nature (especially in the Bay Area) of these subcultures makes things difficult. It seems that most of the toxicity comes from the rationalist community, where there are virtually no mechanisms of feedback.

I am in touch with some of the women in the article, and they tell me that they feel safe speaking up now that they're no longer associated with these circles and have built separate networks. However, I agree that EA is very heterogeneous and diffuse... (read more)

Okay so you have noted 2 possible types of victims:

  1. People who reported and were met with community support (who you expect would feel comfortable using names)
  2. People who reported and were met with criticism (who you expect would not use names)

I just want to (respectfully) flag that there is a possible third and fourth group of victims (and likely more possibilities too tbh)

  3. People who reported and were met with support, but who now want to continue to use a handled incident as proof of problems. These people would avoid using names to avoid claims of dishonestly controlling the narrative.

  4. People who never reported their incidents to the community at all, and therefore could not be met with either support or criticism. If I were in this fourth group of people who didn't report, I would also not use my name when talking to a journalist, to avoid upset about taking a complaint public before allowing the community to actually handle something they might absolutely have wanted to handle.

I'm not saying that's what's going on; it definitely looks like at least one person from group 2 is present. But I just want to note for readers that you also can't simplify the possibility sphere of name-redacted victims into just those two groups.

With the exception of Brent, who is fully ostracized afaik, I think you seriously understate how much support these abusers still have. My model is sadly that a decent number of important rationalists and EAs just don't care that much about the sort of behavior in the article. CFAR investigated Brent and stood by him until there was public outcry! I will repost what Anna Salamon wrote a year ago, long after his misdeeds were well known. Lots of people have been updating TOWARD Vassar:

I hereby apologize for the role I played in X's ostracism from the community, which AFAICT was both unjust and harmful to both the community and X. There's more to say here, and I don't yet know how to say it well. But the shortest version is that in the years leading up to my original comment X was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various others like having the story I told myself to keep stably “doing my work” criticized/eroded. This, despite the fact that attempting to share reasoning and disa

... (read more)

CFAR investigated Brent and stood by him until there was public outcry! 

This says very bad things about the leadership of CFAR, and probably other CFAR staff (to the extent that they either agreed with leadership or failed to push back hard enough, though the latter can be hard to do).

It seems to say good things about the public that did the outcry, which at the time felt to me like "almost everyone outside of CFAR". Everyone* yelled at a venerable and respected org until they stopped doing bad stuff. Is this a negative update against EA/rationality, or a positive one?

*It's entirely possible that there were private whisper networks supporting Brent/attacking his accusers, or even public posts defending him that I missed. But it felt to me like the overwhelming community sentiment was "get Brent the hell out of here".

I think negative update, since lots of the people with bad judgment remained in positions of power. This remains true even if some people were forced out. AFAIK Mike Valentine was forced out of CFAR for his connections to Brent, in particular greenlighting Brent meeting with a very young person alone, though I don't have proof of this specific incident. Unsurprisingly, post-Brent, the people Anna Salamon defended included Michael Vassar.


To my knowledge, no MIRI researcher has had a psychotic break in ~a decade

It's worth noting that the article was explicit that ex-MIRI researcher Jessica Taylor's psychotic break was in 2017:

In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post.

She also alleged in December 2021 that at least two other MIRI employees had experienced psychosis in the past few years: 

At least two former MIRI employees who were not significantly talking with Vassar or Ziz experienced psychosis in the past few years.

Re: the MIRI employees, it seems relevant that they're "former" rather than current employees, given that you'd expect there to be more former than current employees, and former employees presumably don't have MIRI as a major figure in their lives.

was told by a friend that Michael Vassar is barred from Slate Star Codex meetups. 

He was banned, but still managed to slip through the cracks enough to be invited to an SSC online meetup in 2020. (To be very clear, this was not organised or endorsed by Scott Alexander, who did ban Vassar from his events.)

You can read the mea culpa from the organiser here. It really looks to me like Vassar has been treated with a missing-stair approach until very recently, where those in the know quietly disinvite him from things but others, even within the community, are unaware. Even in the comments here, where some very harsh allegations are made against him, people are still being urged not to "ostracise" him, which to me seems like an entirely appropriate action.

Neither Scott's banning of Vassar nor the REACH banning was quiet. It's just that there's no process by which the people who organize Slate Star Codex meetups are made aware.

It turns out that plenty of people who organize Slate Star Codex meetups are not in touch with Bay Area community drama.  The person who organized that SSC online meetup was from Israel. 

Even in the comments here where some very harsh allegations are made against him

That's because some of the harsh allegations don't seem to hold up. Scott Alexander spent a significant amount of time investigating and came up with:

While I disagree with Jessica's interpretations of a lot of things, I generally agree with her facts (about the Vassar stuff which I have been researching; I know nothing about the climate at MIRI). I think this post gives most of the relevant information mine would give. I agree with (my model of) Jessica that proximity to Michael's ideas (and psychedelics) was not the single unique cause of her problems but may have contributed.

It's just that there's no process by which those people who organize Slate Star Codex meetups are made aware. 

This definitely indicates a mishandling of the situation, which leaves room for improvement. In a better world, somebody would have spotted the talk before it went ahead. As it is, it made it (falsely) look like he was endorsed by SSC, which I hope we can agree is not something we want. We already know he's been using his connection with Yudkowsky (via HPMOR) to try and seduce people.

With regards to the latter, if someone was triggering psychotic breaks in my community, I would feel no shame in kicking them out, even if it was unintentional. There is no democratic right to participate in one particular subculture. Ostracism is an appropriate response for far less than this. 

I'm particularly concerned with the Anna Salamon statement that sapphire posted above, where she apologises to him for the ostracisation, and says she recommends inviting him to future SSC meetups. This is going in the exact wrong direction, and seems like an indicator that the rationalists are poorly handling abuse. 


Neither Scott's banning of Vassar nor the REACH banning was quiet.

I think these were relatively quiet. The only public thing I can find about REACH is this post where Ben objects to it, and Scott's listing was just as "Michael A" and then later "Michael V".

Someone on the LessWrong crosspost linked this relevant thing: https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/ 

The "Chinese robber fallacy" is being overstretched, in my opinion. All it says is that having many examples of behaviour X within a group doesn't necessarily prove that X is worse than average within that group. But that doesn't mean it isn't worse than average. I could easily imagine the Catholic Church throwing this type of link out in response to the first bombshell articles about abuse.

Most importantly, we shouldn't be aiming for average, we should be aiming for excellence. And I think the  poor response to a lot of the incidents described is pretty strong evidence that excellence is not being achieved on this matter. 

I agree that we should be aiming for excellence. If having many examples of behavior X within a group doesn't show that the group is worse or better at it than average - if you expect to see the same thing in either case - then being presented with such a list has given you zero evidence on which to update. They would have written the same article whether behavior X was half as common, twice as common, or vanishingly rare. They would have written the same article whether things were handled well or poorly, as shown by their framing things misleadingly and their lies of omission. They had an ax to grind and they've ground it.

We should be aiming for excellence, but when we get there (or if we've gotten there), it will do absolutely nothing to prevent people from writing these articles. When someone goes looking for examples of behavior X, knowing that they'll find a list, with the goal of damaging your reputation among third parties, being presented with that list does not seem to me a good impetus for paroxysms of soul-searching and finger-pointing.

In the absence of evidence that rationalism is uniquely good at dealing with sexual harassment (it isn't), the prior assumption about the level of misconduct should be "average", not "excellent". Which means that there is room for improvement.

Even if these stories do not update your beliefs about the level of misconduct in the communities, they do give you information about how misconduct is happening, and point to areas that can be improved. I must admit I am baffled as to why the immediate response seems to be mostly about attacking the media, instead of trying to use this new information to figure out how to protect your community. 

Ivy Mazzola
I wonder if you both might just believe the same thing here? titotal, do you not think it possible that lumpyproletariat was offering that as one option out of many, as a sort of insurance plan that witch hunts against EA not begin? Witch hunts are very easy to start, especially once you know more details and you know you need them, but they are very hard to stop. So I guess I think they agree with you but just want to drop in that reminder of that possibility, to try to help things go well, rather than making a claim that base rates are necessarily the case? After all, you both want the community to do better and be "aiming for excellence"? [I agree it would have been better framed as one reason out of various, though. I've been liking the allegory of the blind men and the elephant more for this myself.]

I don't really want to get involved in this thread other than saying "I think you guys agree," so it's okay if you consider it a tangent, but I'll just flag that I think this bit isn't accurate in character. What if the lessons were learned back then and the "immediate response" has actually passed? What about this response from the Community Health Team about an ongoing project to help clarify problems and reveal avenues for making the community safer, which probably implies that the rest of us don't have much to do quite yet except maybe help people be patient for that?

Another option, if we want to go our own way, is actually to try to figure out the veracity of the media and other sources of info, because questioning the importance of various pieces of info would also likely be the first step in helping determine which potential interventions might do nothing, do amazingly, or do net harm. I wouldn't recommend getting stuck on using such selective data when there should be better data soon, but you are certainly welcome to try to use this information to protect the community, and let us know if you think of something for us to do!
I think it's good both to address sexual misconduct and to correct misleading context in media pieces. But if you only mention the latter, it gives the impression that the former doesn't matter. I would highly encourage people who care about both to at least mention that you care about reducing the level of misconduct. It may sound like stating the obvious, but it really does matter.

While I certainly hope everyone cares about both, I can't honestly say I believe that. Going through the LessWrong thread, it honestly looks to me like a lot of people genuinely don't want to think about the issue at all, and I find this concerning. For example, downvoting the thread to 0 seems completely unwarranted.

None of this was news to the people who use LessWrong. 

The time to have a conversation about what went wrong and what a community can do better, is immediately after you learned that the thing happened. If you search for the names of the people involved, you'll see that LessWrong did that at length.

The worst possible time to bring the topic up again, is when someone writes a misleading article for the express purpose of hurting you, which was not written to be helpful and purposefully lacks the context that it would require in order to be helpful. Why would you give someone a button they can press to make your forum talk for weeks about nothing? 

It was a low-quality article and was downvoted so fewer people saw it. I wish the same had happened here.

The Bloomberg piece was not an update on how misconduct has happened in EA to anyone who has been previously paying attention. 

These stories are horrifying. I want to thank the victims for speaking up, I know it can't be easy. 

It's worth noting that while some of these allegations overlap with the ones in the Time article, a lot of them are new. This article also makes more of an effort to distinguish between the EA and rationalist communities, which are closely linked but separate. I think most, but not all, of the new allegations are more closely tied to rationalism than EA, but I could be wrong.

The response on the LW forum has been horrifying. https://www.lesswrong.com/posts/bFHKFoBmcAjiQYgau/article-about-abuse-in-lesswrong-and-rationalist-communities

Update: there are now some sane commenters in the mix, although those comments came much later.

I find the comments there to be rather poorly reasoned. Kicking out two abusive people from your community does not mean there is no problem, especially when both cases were terribly handled.  And just because a newspaper has a slant, it doesn't mean that allegations are not real. 

[comment deleted]

To riff off a particularly disturbing line in the article:

Anyone who as a member of the AI safety community has committed, or is committing, sexual assault is harming the advancement of AI safety, and this Forum poster suggests that an agentic option for those people would be to remove themselves from the community as soon as possible. (I mean go find a non-AI job at Google or something.)

Whoever suggested to a survivor that they should consider death by suicide should also leave ASAP.

[Edit to add: My sentiment is not limited to sexual assault; many forms of sexual misconduct that do not involve assault warrant the same sentiment.]

I share this sentiment.

What you're referring to in the last sentence sounds like evil that doesn't even bother to hide.

But this other part maybe warrants a bit of engagement:

She says others in the community told her allegations of misconduct harmed the advancement of AI safety,

If the allegations are true and serious, then I think it makes sense even just on deterrence grounds for people to have their pursuits harmed, no matter their entanglement with EA/AI safety or their ability to contribute to important causes. In addition, even if we went with the act utilitarian logic of "how much good can this person do?," I don't buy that interpersonally callous, predatory individuals are a good thing for a research community (no matter how smart or accomplished they seem). Finding out that someone does stuff that warrants their exclusion from the community (and damages its reputation) is really strong evidence that they weren't serious enough about having positive impact. One would have to be scarily good at mental gymnastics to think otherwise, to think that this isn't a bad sign about someone's commitment and orientation to have impact. (It's already suspicious most researchers in EA ... (read more)

Her post was strongly downvoted, and she eventually removed it.

This claim seems misleading at best

Edit: That's not to say I disagree with the central thrust of the article -- I find it plausible, and I wish the community health team had been able to handle this problem more effectively. I hope they are trying to figure out what went wrong in cases like Angela Pang's.

Hey! Angela Pang here. I am working on a unified statement with the person who I am referring to in that excerpt, Robert Cordwell: https://www.facebook.com/angela.pang.1337/posts/pfbid034KYHRVRkcqqMaUemV2BHAjEeti98FFqcBeRsPNfHKxdNjRASTt1dDqZehMp1mjxKl?comment_id=1604215313376156&notif_id=1678378897534407&notif_t=feed_comment&ref=notif

I actually wanted to say that I felt like Julia Wise handled my case extremely respectfully, though there still wasn't enough structural support to reach a solution I would've been satisfied with (Julia Wise wanted to ban him, whereas I was hoping for more options), but that can change.

My complaint lies with the culture and certain parts of the rationalist community much, much more than CEA, since the lack of moderation leads to abusers feeling empowered. Overall, I think CEA did a decent job with my case at least, and I appreciate Julia Wise's help.

Among various emotions, I'm really sad and disappointed at hearing about the multiple survivor reports that the relevant community's response to their stories was survivor-blaming and/or retaliation. In my view, that speaks to a more widespread pathology that can't be minimized by claiming there were a few bad apples in the past who have been dealt with. It seems to reflect a more widely accepted, yet profoundly disturbing, idea that those with power and skill can trample on others who are expected to not rock the boat.

Minor compared to much more important points other people can be making, but highlighting this line:

At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.

Wow, this is an interesting framing on Yudkowsky writing him in as literal Voldemort

Maybe there's a lesson about trustworthiness and interpersonal dynamics here somewhere.

 I think journalists are often imprecise and I wouldn't read too much into the particular synonym of "said" that was chosen.

Some of the behaviour described here happened solely in the rationalist community. This isn't a rationalist forum. While we can discuss it, we don't need to defend the rationalists (and I'd say I am one). They can look after themselves and there is a reason that EA and rationalism are separate. I think at times we want to take responsibility for everything, but some of this just didn't happen in EA community spaces. 

Some of this behaviour is in EA spaces and that's different. 

It seems worth discussing needs a bit here. There seem to be different parties in this discussion, so what are their needs?

Accusers - I guess these people want to feel safe and secure, and to think they will be taken seriously and that their need for personal safety won't be ignored again. Perhaps they want not to feel insulted, as some of this discussion might leave them feeling, even if they are no longer part of the community. They also seem to want to feel safe, and so to have the names in the article stay anonymous.

Those worried about these events - I guess these people desire to feel safe and secure in EA and confident that their friends will be safe and secure in EA. They want to be able to not think about this stuff very much. Perhaps it makes them anxious and disturbs their work.

Rationalists - Rationalists generally want events to be discussed accurately and precisely. In particular, a story that's generally accurate but gets key details wrong seems to upset them. They want clarity on who has been kicked out of the community and when these events happened - in short, to be able to judge whether the community performed well or badly here.

[Rough group] - I sense a gro... (read more)

Yes! I was someone who upvoted the comments pointing out the problems with the article, because I think understanding the details of the problem is essential for solving it. However, I definitely don’t want any known or unknown victims to get the impression we are dismissing their experiences. Any sexual misconduct is far too much, and truly terrible things have happened.

I'm actually confused about why this got so many downvotes, as I didn't think I was saying anything controversial. Can someone explain?

This is one of those situations where I'd prefer to see the gender and background of the commenter before reading the comment so I can understand and adjust for their bias.

Because that's what allows you to really estimate the epistemic status of the commentary, rather than what seems to be the logic/rationality behind it. I imagine it's not what you used to think, but I think that's how confirmation bias works in cases like this.

So: I am a woman. I have experienced the abuse of power dynamics, manipulation via the ideas of saving the world, and exploitation by high-status men within the rationalist and EA community. Having watched it from the inside (not even in the Bay Area), I can confirm most of the general points about their dynamics.

Society and its set of dynamics are so varied that you simply cannot make enough adjustments (if you really want to maintain the status quo).

I see a different power dynamic than you do (by "you" I mean some commenters who say the article is exaggerating), and it's not about a few individual black sheep; it's about rotten, toxic institutions in general that you are doomed to reproduce over and over again until you are completely transparent about your motives, respect the person and her happiness, and start your efforts to save the world more modestly, without the bullshit about heroic responsibility that turns people into functions. Only from an excess of internal resources, interest, and prosperity will we save the world, so that it does not turn into another hell that is not worth saving.

One of the quotes is:

Effective altruism swung toward AI safety. “There was this pressure,” says a former member of the community, who spoke on condition of anonymity for fear of reprisals. “If you were smart enough or quantitative enough, you would drop everything you were doing and work on AI safety.”

I think the implication here is that if you are working on global poverty or animal welfare, you must not be smart enough or quantitative enough. I'm not deeply involved so I don't know if this quote is accurate or not.

Edit: This statement is about my personal experience in the biggest EA AI safety hub. It’s not intended to be anything more than anecdotal evidence, and leaves plenty of room for other experiences. Thanks to others for pointing out this wasn’t clear. 

I'm part of the AI-oriented community this part is referring to, and have felt a lot of pressure to abandon work on other cause areas to work on AI safety (which I have rejected). In my experience it is not condescending at all. They definitely do not consider people who work in other cause areas less smart or quantitative. They are passionate about their cause so these conversations come up, but the conversations are very respectful and display deep open-mindedness. A lot of the pressure is also not intentional but just comes from the fact that everyone around you is working on AI.

They definitely do not consider people who work in other cause areas less smart or quantitative. They are passionate about their cause so these conversations come up, but the conversations are very respectful and display deep open-mindedness.

I think this is an empirical question, and likely varies between communities, so "definitely do not..." seems too strong. For example, here's Gregory Lewis, a fairly senior and well-respected EA, commenting on different cause areas (emphasis added):

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

I wouldn't be surprised if other people shared this view.

Thanks for sharing! Yeah I meant that only to refer to the people I know well enough to know their opinions and the general vibe I've gotten in the biggest EA AI safety hub. Mine  is just anecdotal evidence and leaves a lot of room for other perspectives. Sorry I didn’t say that well enough.
Oh I see! My mistake, I misunderstood what you were referring to, thanks for clarifying!

Hi Sonia,

You may not have the whole picture. 

In 2019, I was leaked a document circulating at the Centre for Effective Altruism, the central coordinating body of the EA movement. Some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for “Potential Expected Long-Term Instrumental Value.” It was to be used by CEA staff to score attendees of EA conferences, to generate a “database for tracking leads” and identify individuals who were likely to develop high “dedication” to EA — a list that was to be shared across CEA and the career consultancy 80,000 Hours. There were two separate tables, one to assess people who might donate money and one for people who might directly work for EA.


What I saw was clearly a draft. Under a table titled “crappy uncalibrated talent table,” someone had tried to assign relative scores to these dimensions. For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120. Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.

Source: https://www.vox.com/future-perfect/23569519/effective-altrusim-sam-bankman-fried-will-macaskill-ea-risk-decentralization-philanthropy

Thanks for sharing this important information! 

I want to add a couple important points from the Vox article that weren't explicit in your comment. 

-This proposal was discarded

-The professional field scores were not necessarily supposed to be measuring intelligence.  PELTIV was intended to measure many different things. To me professional field fits more into the "value aligned" category, although I respect that other interpretations based on other personal experiences with high status EAs could be just as valid.

I agree that work on AI safety is a higher priority for much of EA leadership than other cause areas now.

It's absolutely true that it was ultimately not used, and that AI safety is a higher priority for leadership. But proposals like this, especially by organizers at CEA, are definitely condescending and disrespectful, and not an appropriate way to treat fellow EAs working on climate change, poverty, animal welfare, or other important cause areas. The recent fixation of certain EAs on AI/longtermism renders everything else less valuable in comparison, and treating EAs not working on AI safety as "NPCs" (people who don't ultimately matter) is completely unacceptable.
Yes, as I shared below, I intended my statement to be anecdotal evidence that leaves a lot of room for other perspectives. Although the only NPCs quote I noticed was referring to something else. Can you share the NPCs quote that referred to what we are discussing?

I suggest a period of talking about feelings and the things we agree on, and then we wait three days to actually discuss the object-level claims. Often, I think, we don't reach consensus on the easy stuff before we move to the hard stuff.

I imagine we mostly agree that people being upset is sad and that, in a better world, none of these people would be upset.

Personally I'm sad and a bit frustrated and a bit defensive.

This is so much more damning than the Time article. It includes deeply disturbing details instead of references to people's feelings. We need to do so much more soul-searching over this than we did over the Time article. [Edit: I've been very critical of the Time article, and don't have an opinion about whether we should be doing more soul-searching on sexual misconduct overall than we are already.] I found the contrast between the two descriptions of Joseph's dinner with the older man particularly troubling.

This is the description in the Time article... (read more)

It's definitely important! It's also important to note that this person has likely already been banned from CEA events for 5 years and some other EA spaces: https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/ea-sexual-harassment-and-abuse?commentId=jKJ4kLq8e6RZtTe2P

I honestly can't comment on how rationalists feel about it and what they have to learn. But I don't think non-rat EAs necessarily have to do "so much more soul searching"[edit: than we are already doing] about this. After all this entire piece is basically about the rationality community.

Oh awesome! That's a huge relief that this specific person has likely already been dealt with. It's a shame they didn't mention that in this article either.

Mentioning that in the article would have defeated the purpose of writing it, for the person who wrote it. 

It is a shame – and I would guess a very deliberate one.

I've been a user on LessWrong for a long time, and these events have resurfaced several times that I can remember, always instigated by something like this article; many people discovering the evidence and allegations about them jump to the conclusion that 'the community' needs to do some "soul searching" about it all.

And this recurring dynamic is extra frustrating and heated because the 'community members', including people that are purely/mostly just users of the site, aren't even the same group of people each time. Older users try to point out the history and new users sort themselves into warring groups, e.g. 'this community/site is awful/terrible/toxic/"rape culture"' or 'WTF, I had nothing to do with this!?'.

Having observed several different kinds of 'communities' try to handle this stuff, rationality/LessWrong and EA groups are – far and away – much better at actually effectively addressing it than anyone else.

People should almost certainly remain vigilant against bad behavior – as I'm sure they are – but they should also be proud of doing as good of a job as they have, especially given how hard of a job it is.

Given the gender ratio in EA and rationality, it would be surprising if women in EA/rationality didn’t experience more harassment than women in other social settings with more even gender ratios.

Consider a simplified case: suppose 1% of guys harass women and EA/rationality events are 10% women. Then in a group of 1000 EAs/rationalists there would be 9 harassers targeting 100 women. But if the gender ratio was even, then there would be 5 harassers targeting 500 women. So the probability of each woman being targeted by a harasser is lower in a group with mor... (read more)
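The toy model above can be sketched in a few lines. This is just the comment's own arithmetic made explicit (the helper name and the 1% harasser rate among men are the comment's illustrative assumptions, not empirical figures):

```python
def per_woman_exposure(group_size, frac_women, harasser_rate=0.01):
    """Expected number of harassers per woman in the comment's toy model:
    a fixed fraction of the men harass, and that risk is spread over all women."""
    women = group_size * frac_women
    men = group_size - women
    harassers = men * harasser_rate
    return harassers / women

# 10% women: 900 men -> 9 harassers, spread over 100 women
print(per_woman_exposure(1000, 0.10))  # 0.09
# Even ratio: 500 men -> 5 harassers, spread over 500 women
print(per_woman_exposure(1000, 0.50))  # 0.01
```

Under these assumptions, each woman in the 10%-women group faces nine times the per-person exposure of a woman in the evenly split group, even though the absolute number of harassers is larger in the even group.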

I agree it wouldn't exactly be surprising by default but both communities are very high conscientiousness and EA specifically is meant to be a community of altruists? I know you sort of mentioned that, but honestly I think it should count for quite a lot if we are just doing conjecture here? 

And again, the two communities (EA and rationality) are getting tied together here. On gender ratio: EA has a 1:2 gender ratio, which is honestly not that bad for a community that grew out of tech, philosophy, and econ. Obviously I want to improve it very much, but I kind of wish people would stop saying it is so incredibly uneven in a way that implies we should expect egregious sexual misconduct under the surface. 1:2 is about the gender ratio of the climbing gym I attend, and I don't expect egregious sexual misconduct under the surface there (though I do expect there are some instances women have reported, and I would expect that even if it were 1:1). Now compare that 1:2 ratio to the 1:9 gender ratio, at best, in rationality: well, yeah, that's probably going to feel bad as a woman within rationality, even if rationalist conscientiousness bore out so that rat men do 1/... (read more)

Yes, this kind of 'idle conjecture' seems epistemically risky. It's too easy to invent reasons that point in any particular direction.

Personally, I think this article was kind of sloppily written, but I still think the situation it describes is worth spending a lot of time trying to understand. 

My sense is that a lot of people really care about us handling this well, so I want to try to do so.

And in the ones where we know the accused, do people think the right thing happened?

In those where we don't, I'd like to know what outcomes people would have liked to have happened. 

I guess personally I struggle since it feels like there is energy to "do something" but I don't understand ho... (read more)

[EDIT: Okay, I guess the current top comment is enough. FWIW, I never meant to imply that the discussion already happening was not of good quality, just that I don't want to see people's time and energy wasted, nor do I want to see people's concern spiked for little reason. I still hope, if this is not a job for forum mods, that the community health team chimes in much like they did here on the crossposted Time piece, but yes, perhaps this is not the job for mods, and maybe I should weakly hold that it is not necessary anyway.]

Mods, can you please write a comme... (read more)

I agree with the sentiment here.
[comment deleted]