This is a linkpost for https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried#xj4y7vzkg
Try the non-paywalled link here.
More damning allegations. A few quotes:
At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.
Even leading EAs have doubts about the shift toward AI. Larissa Hesketh-Rowe, chief operating officer at Leverage Research and the former CEO of the Centre for Effective Altruism, says she was never clear how someone could tell their work was making AI safer. When high-status people in the community said AI risk was a vital research area, others deferred, she says. “No one thinks it explicitly, but you’ll be drawn to agree with the people who, if you agree with them, you’ll be in the cool kids group,” she says. “If you didn’t get it, you weren’t smart enough, or you weren’t good enough.” Hesketh-Rowe, who left her job in 2019, has since become disillusioned with EA and believes the community is engaged in a kind of herd mentality.
In extreme pockets of the rationality community, AI researchers believed their apocalypse-related stress was contributing to psychotic breaks. MIRI employee Jessica Taylor had a job that sometimes involved “imagining extreme AI torture scenarios,” as she described it in a post on LessWrong—the worst possible suffering AI might be able to inflict on people. At work, she says, she and a small team of researchers believed “we might make God, but we might mess up and destroy everything.” In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post. Although she acknowledged taking psychedelics for therapeutic reasons, she also attributed the delusions to her job’s blurring of nightmare scenarios and real life. “In an ordinary patient, having fantasies about being the devil is considered megalomania,” she wrote. “Here the idea naturally followed from my day-to-day social environment and was central to my psychotic breakdown.”
Taylor’s experience wasn’t an isolated incident. It encapsulates the cultural motifs of some rationalists, who often gathered around MIRI or CFAR employees, lived together, and obsessively pushed the edges of social norms, truth and even conscious thought. They referred to outsiders as normies and NPCs, or non-player characters, as in the tertiary townsfolk in a video game who have only a couple things to say and don’t feature in the plot. At house parties, they spent time “debugging” each other, engaging in a confrontational style of interrogation that would supposedly yield more rational thoughts. Sometimes, to probe further, they experimented with psychedelics and tried “jailbreaking” their minds, to crack open their consciousness and make them more influential, or “agentic.” Several people in Taylor’s sphere had similar psychotic episodes. One died by suicide in 2018 and another in 2021.
Within the group, there was an unspoken sense of being the chosen people smart enough to see the truth and save the world, of being “cosmically significant,” says Qiaochu Yuan, a former rationalist.
Yuan started hanging out with the rationalists in 2013 as a math Ph.D. candidate at the University of California at Berkeley. Once he started sincerely entertaining the idea that AI could wipe out humanity in 20 years, he dropped out of school, abandoned the idea of retirement planning, and drifted away from old friends who weren’t dedicating their every waking moment to averting global annihilation. “You can really manipulate people into doing all sorts of crazy stuff if you can convince them that this is how you can help prevent the end of the world,” he says. “Once you get into that frame, it really distorts your ability to care about anything else.”
That inability to care was most apparent when it came to the alleged mistreatment of women in the community, as opportunists used the prospect of impending doom to excuse vile acts of abuse. Within the subculture of rationalists, EAs and AI safety researchers, sexual harassment and abuse are distressingly common, according to interviews with eight women at all levels of the community. Many young, ambitious women described a similar trajectory: They were initially drawn in by the ideas, then became immersed in the social scene. Often that meant attending parties at EA or rationalist group houses or getting added to jargon-filled Facebook Messenger chat groups with hundreds of like-minded people.
The eight women say casual misogyny threaded through the scene. On the low end, Bryk, the rationalist-adjacent writer, says a prominent rationalist once told her condescendingly that she was a “5-year-old in a hot 20-year-old’s body.” Relationships with much older men were common, as was polyamory. Neither is inherently harmful, but several women say those norms became tools to help influential older men get more partners. Keerthana Gopalakrishnan, an AI researcher at Google Brain in her late 20s, attended EA meetups where she was hit on by partnered men who lectured her on how monogamy was outdated and nonmonogamy more evolved. “If you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men” who are sometimes in positions of influence or are directly funding the movement, she wrote on an EA forum about her experiences. Her post was strongly downvoted, and she eventually removed it.
The community’s guiding precepts could be used to justify this kind of behavior. Many within it argued that rationality led to superior conclusions about the world and rendered the moral codes of NPCs obsolete. Sonia Joseph, the woman who moved to the Bay Area to pursue a career in AI, was encouraged when she was 22 to have dinner with a 40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. Joseph says he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men and that such relationships were a noble way of transferring knowledge to a younger generation. Then, she says, he followed her home and insisted on staying over. She says he slept on the floor of her living room and that she felt unsafe until he left in the morning.
On the extreme end, five women, some of whom spoke on condition of anonymity because they fear retribution, say men in the community committed sexual assault or misconduct against them. In the aftermath, they say, they often had to deal with professional repercussions along with the emotional and social ones. The social scene overlapped heavily with the AI industry in the Bay Area, including founders, executives, investors and researchers. Women who reported sexual abuse, either to the police or community mediators, say they were branded as trouble and ostracized while the men were protected.
In 2018 two people accused Brent Dill, a rationalist who volunteered and worked for CFAR, of abusing them while they were in relationships with him. They were both 19, and he was about twice their age. Both partners said he used drugs and emotional manipulation to pressure them into extreme BDSM scenarios that went far beyond their comfort level. In response to the allegations, a CFAR committee circulated a summary of an investigation it conducted into earlier claims against Dill, which largely exculpated him. “He is aligned with CFAR’s goals and strategy and should be seen as an ally,” the committee wrote, calling him “an important community hub and driver” who “embodies a rare kind of agency and a sense of heroic responsibility.” (After an outcry, CFAR apologized for its “terribly inadequate” response, disbanded the committee and banned Dill from its events. Dill didn’t respond to requests for comment.)
Rochelle Shen, a startup founder who used to run a rationalist-adjacent group house, heard the same justification from a woman in the community who mediated a sexual misconduct allegation. The mediator repeatedly told Shen to keep the possible repercussions for the man in mind. “You don’t want to ruin his career,” Shen recalls her saying. “You want to think about the consequences for the community.”
One woman in the community, who asked not to be identified for fear of reprisals, says she was sexually abused by a prominent AI researcher. After she confronted him, she says, she had job offers rescinded and conference speaking gigs canceled and was disinvited from AI events. She says others in the community told her allegations of misconduct harmed the advancement of AI safety, and one person suggested an agentic option would be to kill herself.
For some of the women who allege abuse within the community, the most devastating part is the disillusionment. Angela Pang, a 28-year-old who got to know rationalists through posts on Quora, remembers the joy she felt when she discovered a community that thought about the world the same way she did. She’d been experimenting with a vegan diet to reduce animal suffering, and she quickly connected with effective altruism’s ideas about optimization. She says she was assaulted by someone in the community who at first acknowledged having done wrong but later denied it. That backpedaling left her feeling doubly violated. “Everyone believed me, but them believing it wasn’t enough,” she says. “You need people who care a lot about abuse.” Pang grew up in a violent household; she says she once witnessed an incident of domestic violence involving her family in the grocery store. Onlookers stared but continued their shopping. This, she says, felt much the same.
The paper clip maximizer, as it’s called, is a potent meme about the pitfalls of maniacal fixation.
Every AI safety researcher knows about the paper clip maximizer. Few seem to grasp the ways this subculture is mimicking that tunnel vision. As AI becomes more powerful, the stakes will only feel higher to those obsessed with their self-assigned quest to keep it under rein. The collateral damage that’s already occurred won’t matter. They’ll be thinking only of their own kind of paper clip: saving the world.
I mean, I don't think it makes sense to force everyone even vaguely nearby in the EA social network to defer to some EA consensus about who it is OK to engage with vs. not. I think it's important that people can somehow signal "I don't want my actions to be taken as an endorsement of the EA community, please don't try to interpret my actions as trying very hard to reflect the EA consensus". The SFF has tried to do this in a lot of its writing and communications, and I think it's pretty important for people to be able to do that somehow.
I also think it's really important for grantmakers to somehow communicate that a grant from them is not a broad endorsement of all aspects of a project. This is actually frequently a huge obstacle to good grantmaking: there are all kinds of grants that the LTFF, OpenPhil, and other grantmakers are averse to making because the grant would somehow inextricably become an endorsement of the whole organization or all individuals involved, and this prevents many good grants from happening.
Historically I have tried to fix this on the LTFF by just publishing my hesitations and concerns with individuals and organizations when I recommended them a grant, but that is a lot of work. It also stresses a lot of people out and gets you a lot of pushback, so I don't think this is currently a solution that people can just adopt tomorrow (though I would love for more people to do this).
I do also think that $50k is really not very much, and that the grant isn't "to Jacy" but to an organization that Jacy is involved in, which I think reduces potential harm by another factor of 2-3 or so. I personally quite disliked this grant, and think it's a bad grant, but I don't feel it's a grant that actually causes much harm. It just seems like a bad use of money that maybe causes a few thousand expected dollars in negative externalities.
There are grants that I would have paid serious money to prevent (like some historical grants to Leverage Research), but this is not one of them, and I think I would update a bit downwards on the judgement of a process that produces grants like this, but not much more.
On the other hand, I think the $200k speculation grant budget to Michael is actually a good idea, though it's definitely a high-variance choice and I could totally be convinced it's a bad one. $200k as a regrantor is just really not very much (I think substantially less power than $50k of direct funding, for example), and as far as I can tell the grants he recommended were extremely unlikely to cause any problems in terms of weird power dynamics or pressure (I don't feel comfortable going into details, since speculation grant info is private, but I do think I can share my overall assessment that these grants were very unlikely to give rise to any undue pressure).
I do think this is a reasonable thing to worry about. I also think Michael is exactly the kind of person who has in the past suggested projects and perspectives that have been extremely valuable to me, and I think giving him some money to highlight more of those is totally a worthwhile investment. If I were doing something like giving him a speculation grant budget, I do think I would probably take some kind of precaution and block grants that look like they might involve weird power dynamics, but it doesn't look like it ever came to this, so it's hard for me to tell what process might have been in place.
I also separately just want to reiterate my own personal epistemic state, which is that I haven't heard of any credible allegations of sexual abuse against Michael. I personally find being around him quite stressful, I know many people who have had bad experiences with him, and he seems very manipulative in a ton of different ways, but I do want to distinguish this from the specific ground truth of sexual abuse and assault. As I said before, I don't currently want him to be part of my community or the Effective Altruism community more broadly.
I might totally be missing something, and if Michael were more closely involved in the community and posed more of a direct risk, I would probably spend the (very considerable) effort investigating this in more detail. I would currently be happy for someone else to investigate this in more detail and share as much as they can, but I also don't currently have the personal capacity, with all my other responsibilities, to investigate this much further (and my guess is the community health team also thinks there is no immediate urgent need, given that he is really not very involved in community things these days).
I do want to avoid contributing to an information cascade that somehow updates on there being tons of confirmed evidence of sexual assault for someone, when at least I haven't seen that evidence. The only evidence I know of is these two tweet threads by a person named Jax (who I don't know), and in those threads they also say a bunch of stuff that seems extremely likely to be false to me, or extremely likely to lack a ton of relevant crucial context, so I don't consider those allegations sufficient to update me very much.
Again, there is totally a chance I am missing information here, and I have my own evidence (unrelated to any sexual abuse or assault stuff) that makes me not want Michael to be involved in the EA and Rationality community more than he currently is, and if other people have additional evidence I would love to see it, since this does seem at least somewhat important.