Related: Effective Altruism, Environmentalism, and Climate Change: An Introduction

I could perhaps have listed "effective environmental altruism" as focus area 5. The environmental movement in general is large and well-known, but I'm not aware of many effective altruists who take environmentalism to be the most important cause for them to work on, after closely investigating the above focus areas. In contrast, the groups and people named above tend to have influenced each other, and have considered all these focus areas explicitly. For this reason, I've left "effective environmental altruism" off the list, though perhaps a popular focus on effective environmental altruism could arise in the future.

- Luke Muehlhauser, 2013, Four Focus Areas of Effective Altruism

Multiple EA organizations have very recently begun exploring, or updating their research on, important, neglected, and tractable climate change interventions. I find this promising because effective altruism can build many bridges with the movements surrounding climate activism, as we're all motivated to help the same broad classes of beneficiaries: currently living humans; non-human animals; and future generations (both human and non-human). Climate change advocacy and mitigation seems to have become a cause in its own right for millions of people worldwide precisely because it cuts across all these areas. Beyond the climate itself, I believe effective altruism can extend environmentalists' concern to causes more broadly, including issues such as:

  • poverty alleviation
  • global health initiatives
  • farm animal welfare
  • wild animal welfare
  • global catastrophic and existential risk mitigation

Giving What We Can has a nearly complete profile up on the effects of climate change on human well-being, in two parts: one on the importance and neglectedness of climate change's impact on poverty and global health (GWWC's classic foci), and one on tractable interventions in the space. They've also released an updated report on Cool Earth, which GWWC considers a top charity for reducing carbon emissions. That report explains in detail the cost per life saved of donating to Cool Earth relative to their top-recommended charities, such as the Against Malaria Foundation, and describes the other impacts Cool Earth has on developing communities.

I consider all these reports highly impressive. Originally, I thought I'd have to go digging myself to find these results and report the research for everyone else to read. Giving What We Can's quality work may make that wholly unnecessary. What's in these reports that most of us didn't already know? Giving What We Can compares the potential impact of different types of climate-change interventions, including policy advocacy to reduce emissions, direct action, and geoengineering. The reports also survey the range of plausible and non-negligible impacts of climate change on human well-being through the 21st century, including the extreme tail risk of runaway climate change. I consider them worth reading even if poverty alleviation, global health, and climate change aren't your own top-priority causes.

80,000 Hours has recently published a new profile on why some effective altruists might be best suited to prioritizing mitigation of the extreme tail risks from climate change, and how they could use their careers to do so as either advocates or researchers. It also includes recommendations on some of the most concrete actions one can take toward effectively mitigating the worst impacts of climate change. That's great, because many of us know friends who are committed environmentalists, and these recommendations work just as well, if not better, for them in pursuing a high-impact career. Spread the news! The profile is part of 80,000 Hours' new and ongoing series on the biggest problems in the world and what can be done about them.


The Centre for the Study of Existential Risk, at the University of Cambridge, is hiring a post-doctoral research associate for environmental risk. The research would be in the same vein as what 80,000 Hours recommends researchers pursue (see above). I'd like to highlight this because, outside of research in engineering and climate science, it may be the best current opportunity for an effective altruist to have a direct individual impact on this cause. From the job posting:

This first hire is likely to seed a broader programme in this space for us, in collaboration with a range of partners in Cambridge. Relevant disciplines might include: biology, ecology, conservation, mathematical modelling, planetary science, anthropology, psychology, human geography, decision and policy sciences. Please share the word as widely as possible! As Huw's and my own networks are not primarily in environmental and climate risk, we are very grateful for the help of our colleagues and friends in reaching the right networks. 

Even if neither you nor your close friends are a fit for this job, I recommend sharing it as widely as you can to help get the position filled. As stated, this first hire is likely to precipitate a broader programme, and the ability to influence that gives one individual the potential for huge impact. The deadline for applications is May 11th. Applications can be submitted here.


Students for High-Impact Charity is a recently launched EA project that aims to design a curriculum and provide an introductory education on effective altruism to secondary school students, covering a wide range of causes, including effective environmental and climate-change-related charities. They're currently seeking volunteer students and teachers to help run the course, or one of its individual modules, in schools around the world. Contact information is here, or you can learn more and get involved directly by joining their Facebook group, SHIC Roots.


Another recent effort from the EA community at large is the announcement of an independent research team making a renewed push to evaluate, in greater depth, effective interventions and organizations working on emission reduction and climate change mitigation, as well as environmental pollution and habitat destruction. From Josie Segar, a co-leader of the team:

The aim of the project is to critically evaluate whether environmental issues and organisations might fit well into the remit of Effective Altruism. Environmental issues, including anthropogenic climate change, have been cited by major international organisations such as WHO and the IPCC as important issues with huge potential for severely negatively impacting sentient lives globally. These organisations are beginning to reframe these topics such that they align very well with many of the core issues that EA already tackles, e.g. global health, poverty, animal welfare and existential risk. Therefore, given the scope and scale of these problems, we think it possible that they align themselves with the fundamental values and objectives of EA.

There is also an Effective Environmentalism Facebook group, now boasting 75+ members.

Other EA organizations whose work spans many causes, including the extreme tail risks from climate change, include the Open Philanthropy Project (see in particular their cause report on geoengineering) and the Global Catastrophic Risks Institute.

Environmentalist, effective altruist, both, or neither: if you and/or your friends care deeply about climate change, I encourage you to pursue some of the above avenues. Why? Because this is an opportunity to get in on the ground floor of building climate change mitigation as a cause in its own right within effective altruism. The work a handful of individuals do now could carve out a new niche in not one but multiple social movements, and steer the trajectory of a cause for years to come, as effective altruists have already done in dynamic new disciplines like evidence-based charity evaluation, AI safety, and awareness campaigns for other neglected causes. As Kaj Sotala wrote:

At the moment, the core of effective altruism is formed of smart, driven, and caring people from all around the world. When you become an effective altruist and start participating, you are joining a community of some of the most interesting people on Earth.



Comments (4)



Yes, nice summary.

I liked the Founders Pledge work on this also:

https://founderspledge.com/research/Cause%20Report%20-%20Climate%20Change.pdf

I agree there's an overlap with poverty and catastrophic risk. I used to work on cloud feedbacks and options for slowing global warming, but (seeing the slow progress of governments) I switched to working on a possible consequence of climate change, namely multiple bread basket failure (MBBF) and abrupt global famine, and how to prevent that. So with David Denkenberger I co-founded ALLFED.info - could ALLFED be considered a climate initiative in the new review?

Thanks Evan, that is a really useful summary.

Strongly agreed!

Nice writeup, I learned a worrying amount of stuff. The fact that you were able to find all of this information, yet I needed a writeup before hearing about any of this indicates an inferiority in my ability to find information.

Did you think "I wonder about EA and climate" and then look specifically for that information, or was it rather "I am seeing this new pattern of climate interest; I should make a collection of this information"?

If it's the latter, then I'm very dissatisfied with the fact I was aware of none of these things in advance. I'd love to hear a bit about your reading habits, as I'm clearly not as well informed as you.

Also, none of the GWWC links worked for me. If I remove the 'preview.' part, I get a 404.
