
Non-longtermists, what would you like to be called? It's a category that is going to get used, so we might as well have a name for it. And now is the time to select one you like.



21 Answers

Stop saying "longtermist". It doesn't map well onto people's understanding.

Say catastrophic risk, animal welfare, and global health.

I love this. Just talk about the specific area you are into, rather than using generalisations. I would never use words like "neartermist" or "longtermist" outside EA circles anyway, as they lack any real meaning at all.

Evidence-based Effective Altruists

The founding premise of EA was that you need to weigh evidence. This label implies that the longtermists have abandoned the founding premise of the movement.

pseudonym
But many non-longtermists also care about future people. That hasn't seemed to stop longtermists from using a term that implies non-longtermists (it used to be called short-termism!?) don't care about the future.
Dawn Drescher
Yeah, one could say that I’m a longtermist (though the term doesn’t fit well), and one key thing that caused that was gradual disillusionment with evidence-based anything over the course of a few years – because of the low-quality standard metrics of many fields, the low external validity of RCTs, the difficulty with running controlled experiments on anything that matters, complex cluelessness, the allure of highly leveraged foundational and policy interventions, etc. EA for me is about doing the most good. RCTs and such were just a tool that seemed promising to me at the time.
ludwigbald
I think this is broadly a correct take. Longtermists care about expected value; classic EA, by contrast, is about following the evidence.

I reject the idea that this needs a name. Bundling everything that is not longtermism into one category doesn't make much sense.

It's just not a good category. Like using "nant" as a word to describe everything that isn't a plant.

I prefer the term "non-longtermism" / "non-longtermist" if you must use a term for this concept.

Non-longtermists

Fine, but it sounds a bit like a pirate.

Global Health and Wellbeing'ers = Glohwelbs :)

Doesn't capture all neartermists, but for me, person-affecting EA

Nathan Young
Maybe suggest it as an option.
Neil Natarajan
I confess I'm not entirely sure how you got there after reading the linked post. Not that I disagree (I'm personally fine with being called a neartermist; I think it sounds good, but I'm open to pretty much anything). "It seems to me a generally bad practice to take the positive part of the phrase a movement or philosophy uses to describe itself, and then negate that to describe people outside the movement" seems to imply that we shouldn't be "not longtermist"s.
david_reinstein
I see what you mean. I guess my point is that "neartermist" sounds like a coherent ideology in opposition to longtermism. "Not longtermist" is not a banner to march behind or a team; it's just a factual description (in lower case).

How well does this represent your views to people unfamiliar with it as a term in population ethics?

It might sound as if you're an EA only concerned about affecting persons (as in humans, or animals with personhood).

freedomandutility
Very badly, probably, but I was assuming that most EAs will be familiar with the term.

On the other hand, this would exclude people whose main issue with longtermism is epistemic in nature. But maybe it’s too hard to come up with an acceptable catch-all term.

Keep-it-realists? (Sorry for the non-serious comment.)

I'm not upvoting, but I laughed.

I don't feel like rejecting longtermism necessarily implies being a welfarist?

Centurions/Centurists

Only aim to impact the next hundred years, roughly one lifetime, which is already optimistic; limit it to no more than 2-3 lifetimes. Maybe due to cluelessness, or a "put on your own oxygen mask before helping others" kind of thing.

Bonus points on this? Coincidentally, Centurions is also an '80s animated series about fighting a singularity brought about by a human. It shows that Centurions also care about AI, since it's within their lifetime or the next.

Longdistancers (emphasizing neutrality with respect to spatial distance from beneficiaries, versus temporal distance for longtermism)

Hyperbolic discounters

OK, this one made me laugh.

This one actually made me laugh out loud. 

I'm not a neartermist myself, but I suggest the term "interpretable altruist". Interpretability is really important to how many people in this group carry out effective altruism, and it's important to me as well.

Global Wellbeing-ers

Thanks for these comments; they're bangers.

Haha! I want to upvote because funny, but that would be unhelpful. xD

8 Comments

The difficulty is in the name "longtermist".  It asserts ownership over concern for the future.

People who disagree with the ideas that carry this banner are also concerned for the future.

A general issue I see with the answers here is that they assume opposition to longtermism must be philosophical. The case for actually doing anything different on longtermist grounds relies on a long chain of quasi-empirical speculation, and it seems perfectly coherent to me to object to some induction along the way and come down on the side of (say) global health or economic development while still believing in something like aggregative utilitarianism.

So I feel like a term would need to be more general and/or more focused on actions. "Pragmatist" comes to mind, though it would need some distinction from the existing philosophical school. "Altruistic pragmatist"? Maybe "pragmatermist" if you don't mind neologisms (and if it doesn't turn out to etymologically imply something like "end of facts").

I think another part of the problem is that, for the same reasons, "longtermism" has substantial mission creep/motte-and-bailey-itis. If I say I'm not a longtermist in EA circles, supporters will probably hit me with an argument for a totalising population ethic. But if I say I am one, it feels like I'm supporting a bunch of academic research projects about which I might be quite sceptical. So maybe "longtermism" is the concept that should be under the microscope, rather than its negation.

This was discussed before. See here.

Yes, though I couldn't see many suggested names or much broad agreement there.

I like evidence-based EA, but I'd also like to see some suggestions based on "feedback cycles." I think the key thing that "neartermists" have that longtermists lack is informative feedback cycles.

Do current person-affecting ethicists become longtermists if we achieve negligible senescence? Will virtue ethicists too, if we can predict how their virtue will develop over time? Do development economists become longtermists if we develop Foundation-style psychohistory? We don't have a singular term for "not a virtue ethicist" other than "non-virtue ethicist", and there's no commonality among non-longtermists other than being the out-group to longtermists.

Neartermist = someone who explicitly sets a high effective discount rate (whether due to uncertainty or a pure rate of time preference). The term should not include non-consequentialists or people whose person-affecting views result in low concern for future generations.
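To make the "effective discount rate" idea concrete (a minimal worked example of my own, with illustrative numbers, not part of the original comment): under exponential discounting at annual rate $r$, a benefit of value $V$ arriving $t$ years from now has present value

$$\mathrm{PV} = \frac{V}{(1+r)^{t}}, \qquad \text{e.g. } \frac{1}{(1.05)^{100}} \approx 0.0076,$$

so at $r = 5\%$ a benefit a century away counts for under 1% of the same benefit today, which is why even a modestly high effective discount rate wipes out most longtermist expected value. The hyperbolic form joked about above, $\mathrm{PV} = V/(1+kt)$, is far gentler on the distant future: with $k = 0.05$, the same benefit still counts for about $1/6$ of its present value.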
