Apply the history of effective altruism topic to posts about the origins and development of effective altruism. This article describes the formation of the effective altruism movement and community, some of its precursors, and some major developments from 2010-22.
The Centre for Effective Altruism and other central EA organizations were founded in the 2000s and 2010s, but effective altruism has much earlier roots in various philosophical theories and communities related to doing good and rationality.
Effective altruism has been influenced by older philosophical ideas and traditions.
The academic philosopher Peter Singer wrote "Famine, Affluence, and Morality" in 1972, partly in response to a contemporary famine in Bangladesh. He argues that if one can use one's wealth to reduce another's extreme suffering at small cost to oneself, then one is obligated to do so. In this work, Singer introduces the well-known "drowning child" thought experiment: most of us, if we passed a child drowning in a shallow pond, would consider ourselves obligated to dive in and rescue them, even if we ruined a nice suit in doing so. Similarly, we should feel obligated to sacrifice luxuries to save "drowning children" living far away from us: people dying of poverty, disease, or other preventable ills.[1] Peter Unger's Living High and Letting Die (1996) makes similar arguments.[2] Arguments such as these inspired philosopher Toby Ord to found the effective giving community Giving What We Can, and are often cited in effective altruism outreach.
Philosophy has also influenced effective altruists to care about the long-term future. Nick Bostrom argued in "Astronomical Waste: The Opportunity Cost of Delayed Technological Development" (2003) that delaying technological development might have extremely large (astronomical) opportunity costs, since it hinders humanity from colonizing the stars and creating many billions of happy lives.[3] This idea has been influential on longtermism and on many EAs who prioritize the mitigation of existential and catastrophic risks.
Many who would go on to be interested in EA congregated on Felicifia, an online "forum for utilitarianism". Some highly involved EAs who were previously active on Felicifia include Jacy Reese Anthis, Tom Ash, Sam Bankman-Fried, Ryan Carey, Sasha Cooper, Ruairí Donnelly, Oscar Horta, Will MacAskill, Holly Morgan, Toby Ord, Carl Shulman, Pablo Stafforini, Brian Tomasik, Rob Wiblin, Peter Wildeford, and Boris Yakubchik.[4][5][6]
Since the 1990s, economists working in development economics have used randomized controlled trials to test whether interventions to prevent disease or alleviate poverty actually work (notable figures include Esther Duflo, Abhijit Banerjee, and Michael Kremer). This evidence-based approach to poverty alleviation has influenced effective altruists' work in global health and development, notably GiveDirectly, an unconditional cash transfer organization, and Charity Entrepreneurship (originally Charity Science), an organization that incubates evidence-based charities.[7]
From the mid-2000s, the rationality community arose around the group blogs Overcoming Bias and, later, LessWrong. The rationality community shares with effective altruism an interest in clear, careful, evidence-based reasoning, as well as a concern about severe risks from misaligned AI. The two communities have historically had a large overlap and continue to do so.
In 2007, the charity evaluator GiveWell was founded. Some colleagues working in the finance industry had formed an informal group to research the best charitable giving opportunities, but they found that good data was often lacking. Two members of the group, Holden Karnofsky and Elie Hassenfeld, funded by the other members, set up GiveWell in order to research altruistic interventions full time.[8]
Between 2009 and 2012, the effective altruism community began to crystallize. Many central EA organizations were founded during this period, and the name "effective altruism" was coined in 2011.
In November 2009, Giving What We Can (GWWC) was launched by Toby Ord, William MacAskill, and Bernadette Young. Ord wrote in September 2007: "As an undergraduate, I often argued into the night with my friends about political and ethical matters. I regularly received the retort: 'well if you believe that, why don't you just give all of your money to people starving in Africa'. This was meant to show that my position was absurd, but as time passed and I thought more about ethics, I found the suggestion increasingly sensible: why not indeed?"[9]
He calculated that the Fred Hollows Foundation could cure someone of blindness for around £24. If he worked as a professor but continued to live a modest student lifestyle, he'd have a good life and still be able to cure around 42,000 people of blindness through donations over the course of his life. He set up GWWC to help and inspire others in the same position: a community of people committed to effective giving and to donating substantial proportions of their income. Within its first year, 64 people had joined, collectively pledging $21 million.[10][11]
In February 2011, Will MacAskill and Ben Todd soft-launched 80,000 Hours (then called High Impact Careers), a project offering advice on how to have a positive impact with one's career. At the same time, the GWWC community was growing. In August 2011, MacAskill and others pushed for GWWC and 80,000 Hours to incorporate as a charity under an umbrella organization so that they could take on paid staff. Seventeen community members voted on a name for the umbrella organization, and "Centre for Effective Altruism" (CEA) won. Though CEA wasn't intended to be a public presence, the name "effective altruism" caught on.[12]
In 2011, GiveWell partnered with Good Ventures, the philanthropic foundation of Facebook co-founder Dustin Moskovitz and Cari Tuna. This partnership named itself the Open Philanthropy Project in 2014 and began operating independently in 2017.[13]
In these years, the EA movement continued to grow and come to public prominence, and a number of works that would go on to be influential were written. In 2013, Nick Beckstead published his PhD thesis, "On the Overwhelming Importance of Shaping the Far Future", and in 2014 Nick Bostrom's Superintelligence was published.[14][15] These works influenced many EAs to become more interested in longtermism and AI safety, respectively.
2013 saw the first EA Summit, a seven-day event in the San Francisco Bay Area organized by Leverage Research and attended by staff from the Center for Applied Rationality, the High Impact Network, GiveWell, The Life You Can Save, 80,000 Hours, Giving What We Can, Effective Animal Activism, and the Machine Intelligence Research Institute (MIRI). The keynote speakers were Peter Singer, Peter Thiel, Jaan Tallinn, and Holden Karnofsky.[16]
In 2014, the first EA survey was run by Rethink Charity (then .impact) and Charity Science. In this survey, 813 respondents considered themselves members of the EA movement. Global poverty was the most popular cause area, and the top three charities were GiveWell's top-rated charities for 2013: the Against Malaria Foundation, the Schistosomiasis Control Initiative, and GiveDirectly (with MIRI in fourth place).[17]
Also in 2014, the Good Done Right conference was held in Oxford, focusing on moral philosophy from an EA perspective.[18][19] The speakers were Elizabeth Ashford, Nick Beckstead, Nick Bostrom, Owen Cotton-Barratt, Norman Daniels, Rachel Glennerster, Michelle Hutchinson, Jeremy Lauer, Will MacAskill, Toby Ord, and Derek Parfit.
In 2015, the Local Effective Altruism Network (LEAN) was formed as a project of .impact (later Rethink Charity). It began as a grassroots project with the aim to "help local EA groups come into existence and thrive".[20] This network seeded many local EA groups and maintained early resource collections.
In 2015, three books about EA were published: How to Be Great at Doing Good by Nick Cooney, The Most Good You Can Do by Peter Singer, and Doing Good Better by Will MacAskill.[21][22]
From 2016 to 2022, the EA movement continued to grow. More and more people identified as community members: in 2019, David Moss estimated that there were 2,315 highly engaged EAs and 4,700–10,000 EAs in the community overall.[23] Some notable EA organizations founded during this period were Rethink Priorities (2018) and Charity Entrepreneurship (2019).
The first EA survey, in 2014, asked community members which causes they supported (they could pick multiple answers). The top cause was global poverty, which 71% of surveyed EAs said they supported. Other popular cause areas were meta-charity (supported by 51%) and longtermism and existential risk (54% said they supported at least one of "far future", "AI risk", and "x-risk").[24]
Over the following years, the proportion of community members prioritizing the far future, AI safety, and other existential risks grew slightly. In the 2019 survey, 22% of EAs reported that since joining EA, they had changed their cause prioritization from something else to the long-term future or existential risk reduction.[25][26]
This slight shift in community sentiment was mirrored in EA philanthropy. Before 2014, GiveWell funded causes in global health and development exclusively. From around 2014, the Open Philanthropy Project (a major EA funder) increasingly began to fund interventions aimed at reducing existential risk and improving the long-term future; smaller funding bodies that prioritized the long-term future from an EA perspective also arose (for example, the Survival and Flourishing Fund and the FTX Foundation). In 2022, 34.3% of EA funding went to longtermism and existential risk.[27]
In 2020, Toby Ord wrote The Precipice, a book about existential risk. In 2022, Will MacAskill's What We Owe the Future was published, making the case for longtermism to a wider audience.
Earning to give is the idea that one can have an outsized positive impact by taking a high-paying job in an industry such as finance or tech and donating a large portion of one's income to effective charities. Earning to give was popular early on in EA, but central EA organizations gradually de-emphasized this strategy, starting around 2015. This was partly because billionaire funders such as Dustin Moskovitz and (later) Sam Bankman-Fried had become interested in EA and committed large amounts of funding; funding seemed abundant, whereas people with the skills and inclination to actually solve the world's pressing problems were more scarce.[28][29]
There may also have been more opportunities for EAs to work directly on pressing issues, or on community building and "meta" work, as the EA community grew. It's possible that the shift was related to the fact that more EAs became interested in longtermism and AI safety; compared to global development, these fields had relatively small funding gaps but a large need for talent, since they are less well-known cause areas and since much AI safety work requires deep technical expertise.[30]
In the late 2010s and early 2020s, EA received more media attention than it had previously, both positive and negative. For example, the New Yorker published a positive profile of Will MacAskill in August 2022,[31] whereas in February 2023 Time published an exposé of alleged sexual harassment and abuse within the community.[32]
The EA community faced a significant crisis in late 2022 and 2023 when the FTX cryptocurrency exchange went bankrupt and CEO Sam Bankman-Fried was accused of fraud. Prior to the crisis, Bankman-Fried had been a well-respected member of the EA community, and his FTX Foundation had been a significant source of funding for EA projects, so the crisis caused both practical disruption and emotional distress.