A critical failure mode in many discussions of technological risk is the assumption that maintaining the status quo for technology would mean maintaining the status quo for society. Lewis Anslow suggests that this "propensity to treat technological stagnation as safer than technological acceleration" is a fallacy. I agree that it is an important failure of reasoning among some EAs, and want to call it out clearly.

One obvious example of this, flagged by Anslow, is the anti-nuclear movement. It was not an explicitly pro-coal position, but because pressure for economic growth continued, the result of delaying nuclear technology wasn't less power usage; it was more coal. To the extent that the movement succeeded narrowly, it damaged the environment.

The risk from artificial intelligence systems today is arguably already significant, but stopping future progress won't reduce the impact of extant innovations. Stopping where we are today would still leave continued problems with mass disinformation assisted by generative AI, and we'll see continued progress towards automating huge parts of modern work even without more capable systems, as the systems which already exist are deployed in new ways. It also seems vanishingly unlikely that the pressures on middle-class jobs, artists, and writers would decrease even if we rolled back the last 5 years of progress in AI - but we wouldn't have the accompanying productivity gains, which could be used to pay for UBI or other programs.

All of that said, the perils of stasis aren't necessarily greater than those of progress. This is not a question of safety versus risk; it is a risk-risk tradeoff. And the balance of the tradeoff is debatable - it is entirely possible to agree that there are significant risks to technological stasis, and even that AI would solve problems, and still debate which strategy to promote - whether it is safer to accelerate through the time of perils, to promote differential technological development, or to shut it all down.
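To make the risk-risk framing concrete, here is a minimal illustrative sketch. The expected_harm helper and every number in it are hypothetical placeholders chosen purely for exposition; they are not estimates from this post or from any source.

```python
# Illustrative only: a toy expected-harm comparison for the risk-risk framing.
# All probabilities and harm values are hypothetical placeholders.

def expected_harm(p_catastrophe: float, harm_catastrophe: float,
                  ongoing_harm: float) -> float:
    """Expected harm of a strategy: chance of catastrophe times its harm,
    plus the harms that accrue either way (e.g. from already-deployed tech)."""
    return p_catastrophe * harm_catastrophe + ongoing_harm

# Two strategies, with made-up numbers:
stasis = expected_harm(p_catastrophe=0.02, harm_catastrophe=100, ongoing_harm=10)
acceleration = expected_harm(p_catastrophe=0.05, harm_catastrophe=100, ongoing_harm=4)

print(f"Stasis:       {stasis:.1f}")
print(f"Acceleration: {acceleration:.1f}")
# The point is not which number comes out lower here, but that both strategies
# carry nonzero expected harm; "stasis" is not a zero-risk baseline.
```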

Comments



Specifically addressing your AI art point: in this case, you risk this fallacy being used to prop up technologies that solve problems they themselves created. Which, I suspect, is part of the popular backlash against AI art in the first place. These justifications continue to be used by fossil fuel companies developing ‘biofuels’, ‘sustainable aviation fuel’, etc., and it’s not possible to falsify the claim that some future iteration of the current harmful technology might exist; meanwhile the companies continue to pollute, often at greater and greater scales. There is a big difference between these companies developing sustainable fuels on the side and redirecting 100% of their resources to that development. I suspect you might feel the same way about AI safety vs. general AI development.

Maybe we can amend the framing to exclude this somehow, because I really like the rest of this (the nuclear energy example felt particularly salient). To differentiate your examples: nuclear power was intended to replace an existing harmful energy source, but AI art doesn’t replace… harmful manual artists? So I would perhaps frame the fallacy as occurring only when a promising new technology has potential harms today, but has some long tail of probabilities that could make it less harmful (rather than better) than the technology it replaces.

This is a really interesting take, and I agree with many elements. There is one element I want to explore more, and one I'd like to contest. 

Firstly, I find a lot of the acceleration vs deceleration debate to be mostly theoretical and academic - not unlike debating whether it is better to have tides or to stop them and have a still ocean. At the end of the day (four times a day in most places, if we're being pedantic), the tide is still going to do its thing. It's the same with technical progress. Could you make it harder to innovate and improve technology? Yes. But realistically speaking, imposing a pause or a freeze of the status quo in anything approaching an effective manner is just not possible. It's the same issue I had with signing an open letter declaring a freeze. You can get everyone in the nation to sign an open letter saying "Don't commit crimes", but that isn't going to solve the crime problem. But that's a bit of a tangent and I don't want to hijack your post or your comments with unrelated debate.

Secondly, I think the nuclear and AI debates are quite poor comparisons. Much of this is anecdotal, having worked in both industries in a regulatory role. Firstly, the very high levels of anti-nuclear campaigning and risk aversion have resulted in nuclear energy being a very heavily (and effectively) regulated industry. If it were not for the amount of anti-nuclear sentiment, I don't think we'd have that level of security today. I think that's partly what makes it so safe. I agree when you discuss the risk tradeoffs between coal and nuclear that it's not as clear-cut as may be imagined, but I don't think it supports the core argument very well. Also, nuclear energy and AI are very different industries in which to undertake risk reduction - mostly because of the levers of control you have through licensing, resources, and capital. However, this may be because the aforementioned lobbying resulted in very burdensome regulation, and perhaps AI will be similarly easy to regulate in future.

It's also very possible that I'm misinterpreting your point, so please do let me know if that's the case.

Ultimately I agree with your core point that this is a fallacy seen in much AI Safety reasoning, and that even stopping now would be shutting the stable door after the horse has bolted, but I think there is a middle ground where a slower speed of improvement and stronger safeguards are a good way to lessen risk. I actually think nuclear energy is a good example of this, rather than a poor one.

First, to your second point, I agree that they aren't comparable, so I don't want to respond to your discussion. I was not, in this specific post, arguing that anything about safety in the two domains is comparable. The claim, which you agree with in your final paragraph, is that there is an underlying fallacy which is present in both places.

However, returning to your first, tangential point, the claim that the acceleration versus deceleration debate is theoretical and academic seems hard to support. Domains where everyone is dedicated to minimizing regulation and going full speed ahead are vastly different from those where people agree that significant care is needed, and where there is significant regulation and public debate. You seem to explicitly admit exactly this when you say that nuclear power is very different from AI because of the "very high levels of anti-nuclear campaigning and risk aversion" - that is, public pressure against nuclear seems to have stopped the metaphorical tide. So I'm confused about your beliefs here.

No worries, there was always a chance I was misinterpreting the claim in that section. Happy for us to skip that.

For my second section, I was talking more about stasis in the fuller sense, i.e. a pause on innovation in certain areas. Some are asking for full stasis for a period of time in the name of safety, others for a slow-down. I agree that safe stasis is a fallacy for the reasons I outlined, and agree with most of your points - particularly that everything is a risk-risk tradeoff. I'm not entirely sold on the plausibility of slowdowns or pauses from a logistical deployment perspective, which is where I think I got bogged down in the weeds in my response there.

 
