
Part 1 (15 mins.)

Helping in the present or in the future?

A commonly held view within the EA community is that it's incredibly important to start by thinking about what it really means to make a difference before turning to specific ways of doing so. It's hard to do the most good if we haven't tried to get a clearer picture of what doing good means, and as we saw in chapter 3, clarifying our views here can be quite a complex task.

One of the core commitments of effective altruism is to the ethical ideal of impartiality. Although in normal life we may reasonably have special obligations (e.g. to friends and family), in their altruistic efforts aspiring effective altruists strive to avoid privileging some people's interests over others' based on arbitrary factors such as appearance, race, gender, or nationality.

Longtermism posits that we should also avoid privileging the interests of individuals based on when they might live.

In this chapter's exercise we’ll be reflecting on some prompts to help you start considering what you think about this question, i.e. "Do the interests of people who are not alive yet matter as much as the interests of people living today?"

Please read this short description of temporal discounting and then spend a couple of minutes thinking through each prompt, noting down your thoughts - feel free to jot down uncertainties or open questions that seem relevant. We encourage you to note down your thought process, but feel free to simply report your intuitions and gut feelings.

Of course, these thought experiments all assume an unrealistic level of certainty about your options and their outcomes. For the purpose of this exercise, however, we encourage you to accept the premise of the thought experiments instead of trying to find loopholes. The idea is to isolate one particular aspect of a situation (e.g., the timing of our impact) and try to get at our moral intuitions about just that aspect.
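To make the idea of temporal discounting concrete before you answer, here is a minimal sketch (the discount rates and time horizons are illustrative and not part of the exercise) showing how a constant annual discount rate shrinks the weight given to future lives:

```python
# Illustrative only: exponential discounting gives a life t years away
# a weight of 1 / (1 + r)**t relative to a present life, for annual rate r.

def discounted_weight(r: float, years: int) -> float:
    return 1 / (1 + r) ** years

for r in (0.0, 0.01, 0.03):
    print(
        f"rate {r:.0%}: a life in 200 years counts as "
        f"{discounted_weight(r, 200):.4f} of a present life; "
        f"in 2000 years, {discounted_weight(r, 2000):.2e}"
    )
```

On these illustrative numbers, a 3% annual rate would weight a few thousand deaths in 200 years as less than ten present lives, so saving 100 people today would look better; with no discounting, the thousands dominate. The prompts below ask whether any such discounting of future people is defensible.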

  1. Suppose that you could save 100 people today by burying toxic waste that will, in 200 years, leak out and kill thousands. Would you choose to save the 100 now and kill the thousands later? Does it make a difference whether the toxic waste leaks out 200 years from now or 2000?
  2. Imagine you donate enough money to the Against Malaria Foundation (AMF) to save a life. Unfortunately, there’s an administrative error with the currency transfer service you used, and AMF isn’t able to use your money until 5 years after you donated. Public health experts expect malaria rates to remain high over the next 5 years, so AMF expects your donation will be just as impactful in 5 years' time. Many of the lives that AMF saves are of children under 5, and so the life your money saves is of someone who hadn’t been born yet when you donated.

    If you had known this at the time, would you have been less excited about the donation?

Part 2 (30 mins.)

One question (among many) that is relevant to this topic is “when will we develop human-level AI?”. 

It’s obviously not possible to just look this up, or to gather direct data on this question. So we need to gather what data and arguments we have, and make a judgment call. This applies to AI and other existential risks, but also to most questions that we’re interested in - “How many chickens will move to better conditions if we pursue this advocacy campaign?”, “How much do we need to spend on bednets to save a life?”.

These judgements are really important: they could make a big difference to the impact we have. 

Unfortunately, we don’t yet have definitive answers to these questions, but we can aim to become “well-calibrated.” This means that when you say you’re 50% confident, you’re right about 50% of the time, not more, not less; when you say you're 90% confident, you're right about 90% of the time; and so on. 
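As a rough illustration of what "well-calibrated" means in practice, here is a minimal sketch with made-up data (not output from the Calibrate Your Judgment app): group your predictions by stated confidence and compare each group's claimed confidence to its observed accuracy.

```python
# Illustrative sketch: compare stated confidence to observed accuracy per bucket.
from collections import defaultdict

# (stated confidence, whether the prediction turned out to be true) - made-up data
predictions = [
    (0.5, True), (0.5, False), (0.6, True), (0.6, True),
    (0.7, False), (0.8, True), (0.9, True), (0.9, True), (0.9, False),
]

buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    accuracy = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {accuracy:.0%} ({len(outcomes)} predictions)")
```

A well-calibrated forecaster's stated and actual percentages roughly match across buckets; a systematic gap (say, 90% claims that come true only 70% of the time) indicates overconfidence.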

This exercise aims to help you become well-calibrated. The app you’ll use contains thousands of questions - enough for many hours of calibration training - that will measure how accurate your predictions are and chart your improvement over time. Nobody is perfectly calibrated; in fact, most of us are overconfident. But various studies show that this kind of training can quickly improve the accuracy of your predictions.

Of course, most of the time we can’t check the answers to the questions life presents us with, and the predictions we’re trying to make in real life are aimed at complex events. The Calibrate Your Judgment tool helps you practice on simpler situations where the answer is already known, providing you with immediate feedback to help you improve.

Have a go using the Calibrate Your Judgment app for around 30 minutes! 


 



10 Answers

I'm finding the app feedback misleading, and none of the explanations on the About/FAQ page are expanding in Chrome or Opera.

Thanks for flagging! I've sent a bug report to the developers of the app.

Edit: they fixed it

1. Yes, we bury it if the toxic waste will cause an existential catastrophe. If it does not, then the answer is no. The 100 people we save today have a moral responsibility toward the thousands who will suffer in 200 years. Additionally, we have certainty about the 100 lives today, but only an estimate about the thousands in the future, a massive risk that cannot be ignored. Ultimately, when lives are at stake, no choice feels morally comfortable. This is a dilemma I would never wish to face.

 

2. I would have been less excited about the donation if my intention was to save a life in those five years.  However, I would still make the donation because my commitment is to save a life, regardless of when that happens. A life today is just as valuable as a life in 10 or even 100 years.

1. Toxic Waste Problem:

The 100 people living today, or whoever is responsible for this toxic waste, can't make thousands of people in 200 years pay for this mistake. It is wrong to bury the toxic waste and save people now if we are sure that this will cause even more deaths in 200 years, for two reasons:

a) the number of people affected.

b) the lack of decision power and choice that the affected people have.

Logically speaking, it makes no sense to think differently if the leak were to happen in 2000 years and kill thousands of people; however, here I wouldn't be so confident in my choice. To explain why I don't feel confident, I am forced to bend and question the premises of the experiment. I hope that in 2000 years people will be more advanced and have the means to avoid toxic waste poisoning, so admitting that in 2000 years people will die because of toxic waste buried now would mean to me that we aren't so bright and great, and we don't have much potential. This would radically change the way I think about so many other topics.

Saving 100 people now in the hope that, later on, humans will know what to do disregards the dilemma, because it implies that nobody dies (and that's not the case; someone will die, either a hundred or thousands). Saving 100 people now puts the weight of acting on future people's shoulders. If we didn't bury the waste, they wouldn't need to find a solution for it in the first place.

Let's imagine that we take option A and save 100 people today in the hope of finding a way to save thousands in 200 years. Let's imagine that this equals 6-7 generations of people (if a new generation is born every 30 years on average). This means that our grandchildren's grandchildren would be among the people possibly poisoned and killed. Let that sink in, and now let's focus on whether future generations will be able to react fast enough.

When is it time to start coming up with ideas to avoid or survive the leak? Is five years before it happens enough? Two months? How do they know exactly when it will happen? I wouldn't be very confident in their ability to react in time. The second generation will trust that the third generation will come up with a solution, and the third generation will hope the same about the fourth.

Besides, why would they care? The example of their ancestors will deter them from caring enough. Why should generations 2 to 5 pay for the research and countermeasures for a problem that they didn't cause and won't suffer from? We can apply the same logic to 2000 years.

2. Donating to AMF problem:

It would be fine by me. I would trust the experts and hope that inflation really doesn't have a negative effect on the donation's potential, and I would hope that some technology or means needed to fight malaria gets cheaper, so that my donation can do better in 5 years than it would today. I would only be worried if AMF closed down in the meantime!

  1. Yes, we should bury the toxic waste if leaving it uncontained would cause an existential catastrophe. If the risk does not reach that threshold, then the answer is no. The well-being of the 100 people we save today must be weighed against the suffering of thousands in the future. While we have certainty about the immediate impact, our projections for long-term harm remain probabilistic—but the potential scale of suffering makes this risk morally significant. Ethical decision-making in such cases is fraught with uncertainty, yet prioritizing actions that maximize overall well-being remains our guiding principle. This is the kind of trade-off no one would want to make, but moral responsibility compels us to act with the best available evidence.
  2. Similarly, my enthusiasm for a donation might be lower if my primary goal were to save a life specifically within the next five years. However, my commitment is to maximize lives saved, regardless of the time frame. A life saved today holds the same intrinsic value as a life saved decades or even centuries from now. Let us aim to allocate resources where they generate the greatest long-term impact, ensuring that our actions create the most significant positive difference over time.

PART 1

  1. The lives of the 100 people living today aren't worth 10x more than the lives of the thousands living in the future, so I wouldn't bury the waste.

  2. I would have still donated; I don't see much of a difference, and the time when the beneficiaries are alive isn't a morally significant factor.

PART 2: My judgement is terrible, but my confidence is very low, so let's hope they cancel out.

Part I, Case 1: Saving or helping more people is always better than saving a few. So the decision is always in favour of those thousands of people who will exist 200 years in the future, as there is every possibility that the future of humanity will be better if and only if we don't deliberately or ignorantly make it worse. Being a member of the EA community, I have a responsibility to think of those in the future, even though they are not in a position to influence decisions in their favour.

Case 2: If the malaria rate remains high, then that is a good reason to believe that my donation, which cannot be used for 5 years, is of at least the same value it would have now. Moreover, the loss of life or suffering of any child is the same even if they don't exist yet. The ultimate aim of my donation is to reduce suffering and death, irrespective of its time or location.

While I am not a longtermist, I would not choose an action that would directly put the lives of others at risk, even 200 years from now. In the scenario, we are told that the toxic waste shall leak; therefore, it is definite that thousands of lives shall be lost. Compared to the 100 lives that would be lost now, I would not risk that many lives even though they are far in the future. While we have talked about discount functions, it would be immoral to treat human lives in that way.

In the second scenario, where we are asked about 200 years versus 2000 years, temporal discounting comes in at a higher rate. Thinking that far into the future is hard because it would require me to consider other things that might happen, such as existential catastrophes that might wipe out humanity before then. In that case, I would do more evaluation: if I were confident that humanity would be wiped out in that time, then I would save the 100 people in the present. However, this would only be in a case where I am very confident that humanity will be lost by that time, meaning the toxic waste I bury would have no effect on people in that future.

Week 5 exercise.

A. I would save the 100 people now by burying the waste, as there are high chances that technology will be more advanced after a decade and we might be able to save the thousands of people in the future too. I will work to save those future thousands by contributing to research. B. I'd still be excited; even if it's someone who isn't born yet, I'd still be able to save them.

The exercise purposefully asks us to ignore any "loopholes" and focus on the dilemma of either saving 100 people now or saving >1000 in the future. What would you choose if these were the only two choices? What you suggest opens the door to saving everyone; however, the exercise doesn't include this third option.

Zahra Irfan
Well then, it's truly hard to choose. Anyone who thinks rationally would go with the option that offers saving more lives, but I personally think that the choice of saving 100 people now is still better. We should be open to all possibilities. What I'm going to say now might sound foolish, but if we can't find any good solutions by that time, we can always dig that waste out (which isn't possible, I know) 👀

A. I will save the 100 people who need to be saved now, and over the next 200 years I will work out measures or pathways they can follow to minimize casualties when the leak occurs, since the leak is certain to happen. B. I would still be excited because, irrespective of the timing and the issues that delayed the transfer, the bottom line for me is that the money was used for the same purpose. As such, I will not be upset about when it was used.
