I think that someone should write a detailed history of the effective altruism movement. The history that currently exists on the forum is pretty limited, and I’m not aware of much other material, so I think there’s room for substantial improvement. An oral history was already suggested in this post.

I tentatively planned to write this post before FTX collapsed, but the case for a written history is probably even stronger now than it was beforehand. I think a comprehensive written history would help…

  1. Develop an EA ethos/identity based on a shared intellectual history and provide a launch pad for future developments (e.g. longtermism and an influx of money). I remember reading about a community member focused on global health who got on board with AI safety after meeting a civil rights attorney who was concerned about it. A demonstration of shared values allowed for that development.
  2. Build trust within the movement. As the community grows, it can no longer rely on everyone knowing everyone else, and needs external tools to keep everyone on the same page. Aesthetics have been suggested as one option, and I think that may be part of the solution, in concert with a written history.
  3. Mitigate existential risk to the EA movement. See EA criticism #6 in Peter Wildeford’s post and this post about ways in which EA could fail. Assuming a written history would help the movement develop an identity and shared trust, it could lower that risk.
  4. Understand the strengths and weaknesses of the movement, and what has historically been done well and what has been done poorly.

There are a few ways this could happen.

  1. Open Phil (which already has a History of Philanthropy focus area) or CEA could actively seek out someone for the role and fund them for the duration of the project. This process would give the writer the credibility needed to get time with important EA people.
  2. A would-be writer could request a grant, perhaps from the EA Infrastructure Fund.
  3. An already-established EA journalist like Kelsey Piper could do it. There would be a high opportunity cost associated with this option, of course, since they’re already doing valuable work. On the other hand, they would already have the credibility and baseline knowledge required to do a great job.

I’d be interested in hearing people’s thoughts on this, or if I missed a resource that already exists.

Comments (16)

A few months ago I received a grant to spend six months researching the history of effective altruism, conducting interviews with early EAs, and sharing my findings on a dedicated website. Unfortunately, the funds for this grant came from the Future Fund, and have been affected by the collapse of FTX. I still intend to carry out this project eventually, but finding alternative funding sources is not a high priority for me, since my current projects are more urgent and perhaps more important.

If you think I should prioritize this project, or have thoughts on how it should be carried out, feel free to get in touch.

You should definitely prioritize it! What about creating an open-source wiki of sorts to crowdsource information?

You could always double-check facts and add citations later on.

You mention opportunity cost, but I think it's worth emphasizing further. To do this well, you'd need somebody who has been around a while (or at least a lot of time and cooperation from people who have). You'd need them to manage different perspectives and opinions about various things that happened. You'd need them to be a very good writer. And you'd need the writer to be someone people trust; my perspective is that "Open Phil hired this person" would probably not be sufficient on its own.

There are people who could do this: Kelsey Piper is one, as you suggest. But these are all pretty unusual characteristics, and the opportunity costs for the sort of person who could do this well just seem really massive. I might be wrong about this, but that's my first thought on reading your post.

I don't know that I'm the kind of person OP is thinking of, but beyond opportunity cost there's also a question of reportorial distance/objectivity. I've thought a lot about whether to do a project like this, and one sticking point is that (a) I identify as an EA, (b) I donate to GiveWell and signed the GWWC pledge, and (c) many of my friends are EAs, so I'm not sure any book I produce would be perceived as sufficiently credible by non-EA readers.

I'd encourage you to consider taking it on. Even if identifying as an EA would reduce the credibility for outsiders, I'm sure whatever you produced would be a wonderful starting point for anyone else tackling it down the line.

People enjoyed reading Winston Churchill's history of the war, and he was hardly a neutral observer! It's pretty clear which side he wanted to win.

See also: Thomas Clarkson's history of abolitionism, Friedrich Engels' history of Marxism.

I’d also say take it on. Someone objective can always rewrite it later, but if we don’t save it now we could lose a lot.

Definitely agree with Chris here!  Worst case, you create useful material for someone else who tackles it down the line; best case, you write the whole thing yourself.

I wonder whether Larissa MacFarquhar would be interested? She wrote about the early EA community in her 2015 book Strangers Drowning (chapters "At Once Rational and Ardent" and "From the Point of View of the Universe") and also wrote a 2011 profile of Derek Parfit.

That would certainly be great if she would. I actually first heard about EA when I read Strangers Drowning in 2016! It's very well written.

A possible middle ground is to make efforts to ensure that important source material is preserved, keeping open the option of doing this project later. That would presumably require significantly fewer resources, and wouldn't impose opportunity costs on "the sort of person who could do [the writing of a book] well."

Great point!  A historian or archivist could take on this role.  Maybe CEA could hire one?  I’d say it fits within their mission “to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.”

I think opportunity cost is well worth mentioning, but I don't know that I think it's as high as you believe it to be.

Choosing someone who has been around a while is optional.  The value of having an experienced community member do it is built-in trust, access, and understanding.  The costs are the writer's time (though that cost is decreasing as more people start writing about EA professionally) and the time of those being interviewed.  I would also note that while there's lots of work for technical people in EA, writers in the community may not have found such great opportunities for impact.

Having a relative outsider take on the project would add objectivity, as Dylan noted.  Objectivity would both improve credibility with outsiders and increase the likelihood of robust criticism being made.  I also think there are just a lot of pretty great writers in the world who might find EA interesting.  Perhaps you just get different benefits from different types of writers.

There's a cost to waiting as well.  The longer you wait, the more likely it is that important parts of the story will be forgotten or deleted.

I expect a project like this is not worth the cost. I imagine doing this well would require dozens of hours of interviews with people who are more senior in the EA movement, and I think many of those people’s time is often quite valuable.

Regarding the pros you mention:

  1. I’m not convinced that building more EA ethos/identity around a shared history is a good thing. I expect this would make it even harder to pivot to new things or treat EA as a question, and it wouldn’t be unifying for many folks (e.g. those who have been thinking about AI safety for a decade, or who don’t buy longtermism). According to me, the bulk of people who call themselves EAs are, like most groups, too slow to update on new arguments and information, and I would expect that having a written and agreed-upon history would not help with this. Then again, my point might be better made if I could reference common historical cases of what I mean lol

  2. I don’t see how this helps build trust.

  3. I don’t see how having a written history makes the movement less likely to die. I also don’t know what it would look like for the EA movement to die, or how bad that would actually be; the EA movement is largely instrumental toward other things I care about: reducing suffering, increasing the chances of good stuff in the universe, and, to a lesser extent, my and my friends’ happiness.

  4. This does seem like a value add to me, though the project I’m imagining only does a medium job at it, given its goal is not “a chronology of mistakes and missteps”. Maybe worth checking out https://www.openphilanthropy.org/research/some-case-studies-in-early-field-growth/

With ideas like this I sometimes ask myself “why hasn’t somebody done this yet?” Some reasons that come to mind: people are too busy doing other things they think are important; it might come across as self-aggrandizing; it’s unclear who’s going to read it, and the ways I expect it to get read are weird and indoctrination-y (“welcome to the club, here’s a book about our history”, as opposed to “oh, you want to do lots of good, here are some ideas that might be useful”); and it doesn’t directly improve the world, while the indirect path to impact is shakier than for other meta things.

I’m not saying this is necessarily a bad idea. But so far I don’t see strong reasons to do this over the many other things Open Phil, CEA, Kelsey Piper, or the would-be interviewees could be doing.

I’ve addressed the point on costs in other commentary, so we may just disagree there!

  1. I think the core idea is that the EA ethos is about constantly asking how we can do the most good and updating based on new information.  So the book would hopefully codify that spirit rather than just talk about how great we’re doing.
  2. I find it easier to trust people whose motivations I understand and who have demonstrated strong character in the past.  History can give a better sense of those two things.  Reading about Julia Wise in Strangers Drowning, for example, did that for me.
  3. Humans often think about things in terms of stories.  If you want someone to care about global poverty, you have a few ways of approaching it.  You could tell them how many people live in extreme poverty and that by donating to GiveDirectly they’ll get way more QALYs per dollar than they would by donating elsewhere.  You could also tell them about your own path to donating, and share a story from the GiveDirectly website about how a participant benefited from the money they received.  In my experience, the latter is the better strategy.  And absolutely, the EA community exists to serve a purpose.  Right now I think it’s reasonably good at doing the things that I care about, so I want it to continue to exist.
  4. Agreed!

I think there could be a particular audience for this book, and it likely wouldn’t be EA newbies.  The project could also take on a lot of different forms, from empirical report to personal history, depending on the writer.  Hopefully the right person sees this and decides to go for it if and when it makes sense!  Regardless, your commentary is appreciated.
 
