
This paper concludes that its "results indicate a slowdown and eventual halt in growth within the next decade or so but leave open whether the subsequent decline will constitute a collapse". This seems prima facie implausible to me, but I couldn't find any critiques of the paper, so any thoughts on why it is right, wrong, or inconclusive would be appreciated, or alternatively a pointer to a good critique.



2 Answers

The original "Limits to Growth" report was produced during the 1970s amid an oil-price crisis and widespread fears of overpopulation and catastrophic environmental decline. (See also books like "The Population Bomb" from 1968.) These fears have mostly gone away over time, as population growth has slowed in many countries and the worst environmental problems (like choking smog, acid rain, etc.) have been mitigated.

This new paper takes a 1972 computer model of the world economy (World3) and checks how well it matches current trends. The authors claim the match is pretty good, but they never actually plot the real-world data anywhere; they merely claim that the predicted values are within 20% of the real-world values. I suspect they avoided plotting the real-world data because that would make it more obvious that the real world is actually doing significantly better on every measure. Look at the model errors ("∆ value") in their Table 2; a sketch of how such a comparison works follows the list below.

So, compared to every World3-generated scenario (BAU, BAU2, etc), the real world has:
- higher population, higher fertility, lower mortality (no catastrophic die-offs)
- more food and higher industrial output (yay!)
- higher overall human welfare and a lower ecological footprint (woohoo!)
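
To make this concrete, here's a rough Python sketch of what such a ∆-value comparison looks like. The `delta_value` helper and all the numbers are made up for illustration; they are not the paper's actual Table 2 figures:

```python
# Rough sketch of a "∆ value": percent deviation of a model's
# prediction from the observed real-world value. All numbers here
# are invented placeholders, not figures from the paper's Table 2.

def delta_value(model: float, observed: float) -> float:
    """Percent deviation of the model value from the observed value."""
    return 100.0 * (model - observed) / observed

# Hypothetical example: a scenario predicts a population of 7.2bn in a
# year where the observed population was 7.7bn.
print(f"{delta_value(7.2, 7.7):+.1f}%")  # prints -6.5%, i.e. the model runs low
```

The paper's "within 20%" claim amounts to saying each such ∆ stays inside ±20%, which is a fairly loose bar over a roughly 50-year horizon, and one that simpler methods might clear just as easily.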

The only areas where humanity ends up looking bad are pollution and "services per capita", where the real world has more pollution and fewer services than the World3 model. But on pollution, the goal-posts have been moved: instead of tracking the kinds of pollution people were worried about in the 1970s (since those problems have mostly been fixed), the measure has been redefined in terms of carbon dioxide driving climate change. Is climate change (which other economists and scientists predict will cut a mere 10% of GDP by 2100) really going to cause a total population collapse in the next couple of decades, just because some ad-hoc 1970s dynamical model says so? I doubt it.

Meanwhile, the "services per capita" metric represents the fraction of global GDP spent on education and health. Perhaps it's bad that we're not spending more on education and health, or perhaps it's good that we're saving money on those things, but either way this doesn't seem like a harbinger of imminent collapse.
 
Furthermore, the World3 model predicted that things like industrial output would rise steadily until they one day experienced a sudden unexpected collapse.  This paper is trying to say "see, industrial output has risen steadily just as predicted... this confirms the model, so the collapse must be just around the corner!"  This strikes me as ridiculous: so far the model has probably underperformed simple trend-extrapolation, which in my view means its predictions about dramatic unprompted changes in the near future should be treated as close to worthless.
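
To see why I say the model has probably underperformed simple trend-extrapolation, here's a minimal sketch of that baseline: fit a straight line to an early window of a steadily rising series, extrapolate, and compare the error against a model forecast. All the data here are invented placeholders, not real World3 output or real-world measurements:

```python
# Minimal sketch of the trend-extrapolation baseline. The series,
# the hold-out point, and the "model" forecast are all invented
# placeholders for illustration.
import numpy as np

years = np.arange(1972, 2002)                    # "training" window
series = 50 + 1.4 * (years - 1972)               # invented steadily rising series
slope, intercept = np.polyfit(years, series, 1)  # fit a straight line

target_year, observed = 2020, 117.0              # invented hold-out observation
trend_forecast = slope * target_year + intercept
model_forecast = 104.0                           # hypothetical complex-model output

for name, forecast in [("trend", trend_forecast), ("model", model_forecast)]:
    error = 100 * abs(forecast - observed) / observed
    print(f"{name}: {forecast:.1f} ({error:.1f}% error)")
```

On smoothly rising series like the ones at issue, the straight-line baseline is very hard to beat, so "the model tracked a rising trend for 50 years" tells you almost nothing about whether its predicted sudden collapse is credible.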

[anonymous]1

Thank you for the detailed answer!

Personally, I'm more worried about this paper. Here is a Vox writeup. I don't know that I think the linear growth story is true, and even if it were, we could easily hit another break point (AI, anyone?), but I'm more worried about this kind of decline than a blowup like LTG suggests.

I'm not an expert in this area, but I think the paper you're pointing to is leaning way too hard on a complicated model with a bad track record, and I'm weirded out by how little they compare model predictions with real data (e.g. using graphs). If I wanted to show off how awesome some model was, I'd be much more transparent.

(note: Jackson makes a similar point re: lack of transparency).

This LessWrong post about Thomas' paper is also interesting.

[anonymous]2

Thanks for the answer, and also for the link to the paper, very interesting! I did find it strange that they didn't include a graph, but I haven't read enough economics papers to be confident.
