
This piece is a summary and introduction to the concept of differential technological development, written by hashing together existing writings.

Differential technological development

Differential technological development is a science and technology strategy to:

“Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.” (Bostrom, The Vulnerable World Hypothesis, 2019)

We might worry that trying to affect the progress of technology is futile, since if a technology is feasible then it will eventually be developed. Bostrom discusses (though rejects) this kind of argument:

“Suppose that a policymaker proposes to cut funding for a certain research field, out of concern for the risks or long-term consequences of some hypothetical technology that might eventually grow from its soil. She can then expect a howl of opposition from the research community. Scientists and their public advocates often say that it is futile to try to control the evolution of technology by blocking research. If some technology is feasible (the argument goes) it will be developed regardless of any particular policymaker’s scruples about speculative future risks. Indeed, the more powerful the capabilities that a line of development promises to produce, the surer we can be that somebody, somewhere, will be motivated to pursue it. Funding cuts will not stop progress or forestall its concomitant dangers.”[1] (Bostrom, Superintelligence, pp. 228, Chapter 14, 2014)

Let’s call the thesis behind such arguments the ‘technological completion conjecture’: given continued scientific and technological development efforts, all relevant technologies will eventually be developed. Bostrom states it as follows:

Technological completion conjecture: “If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” (Bostrom, Superintelligence, pp. 228, Chapter 14, 2014)

Nevertheless, the principle of differential technological development is compatible with plausible forms of technological determinism. Even granting the technological completion conjecture, ‘it could still make sense to attempt to influence the direction of technological research. What matters is not only whether a technology is developed, but also when it is developed, by whom, and in what context. These circumstances of birth of a new technology, which shape its impact, can be affected by turning funding spigots on or off (and by wielding other policy instruments). These reflections suggest a principle that would have us attend to the relative speed with which different technologies are developed.’ (Bostrom, Superintelligence, pp. 228, Chapter 14, 2014)

Let’s consider some examples of how we might use the differential technological development framework, where we try to affect ‘the rate of development of various technologies and potentially the sequence in which feasible technologies are developed and implemented’. Recall that our focus is on ‘trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies.’ (Bostrom, Existential Risks, 2002)

“In the case of nanotechnology, the desirable sequence [of technological development] would be that defense systems are deployed before offensive capabilities become available to many independent powers; for once a secret or a technology is shared by many, it becomes extremely hard to prevent further proliferation. In the case of biotechnology, we should seek to promote research into vaccines, anti-bacterial and anti-viral drugs, protective gear, sensors and diagnostics, and to delay as much as possible the development (and proliferation) of biological warfare agents and their vectors. Developments that advance offense and defense equally are neutral from a security perspective, unless done by countries we identify as responsible, in which case they are advantageous to the extent that they increase our technological superiority over our potential enemies. Such “neutral” developments can also be helpful in reducing the threat from natural hazards and they may of course also have benefits that are not directly related to global security.

Some technologies seem to be especially worth promoting because they can help in reducing a broad range of threats. Superintelligence is one of these. [Editor’s comment: By a "superintelligence" we mean an intellect that is much smarter than the best human brains in practically every field, for example, advanced domain-general artificial intelligence of the sort that companies such as DeepMind and OpenAI are working towards.] Although it has its own dangers (expounded in preceding sections), these are dangers that we will have to face at some point no matter what. But getting superintelligence early is desirable because it would help diminish other risks. A superintelligence could advise us on policy. Superintelligence would make the progress curve for nanotechnology much steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of adequate defenses. By contrast, getting nanotechnology before superintelligence would do little to diminish the risks of superintelligence.

...

Other technologies that have a wide range of risk-reducing potential include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively, and can make it more feasible to enforce necessary regulation. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.

As mentioned, we can also identify developments outside technology that are beneficial in almost all scenarios. Peace and international cooperation are obviously worthy goals, as is cultivation of traditions that help democracies prosper.” (Bostrom, Existential Risks, 2002)

Differential technological development vs speeding up growth

We might be sceptical of differential technological development, and aim instead to generally increase the speed of technological development. An argument for this might go:

“Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society's ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be considered highly valuable.” (Paul_Christiano, On Progress and Prosperity - EA Forum)

However, we should expect that ‘economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course.’ Growth can’t continue indefinitely, due to the natural limitations of resources available to us in the universe. ‘So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants - they will live in a world that is "saturated", where progress has run its course and has only very modest further effects.’ (Paul_Christiano, On Progress and Prosperity - EA Forum)

‘While progress has a modest positive effect on long-term welfare, this effect is radically smaller than the observed medium-term effects, and in particular much smaller than differential progress. Magically replacing the world of 1800 with the world of 1900 would make the calendar years 1800-1900 a lot more fun, but in the long run all of the same things happen (just 100 years sooner).’ (Paul_Christiano, On Progress and Prosperity - EA Forum) With this long-term view in mind, the benefits of speeding up technological development are capped.
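
One hedged way to make this “capped benefits” point concrete (a rough formalization of my own, not taken from Christiano’s post; the symbols w, w̄, T, and Δ are illustrative): suppose welfare per year eventually saturates at some level, and that a general speed-up shifts the whole trajectory earlier by Δ years. Then the long-run gain is only a transient, bounded by Δ times the saturation level, rather than something that scales with all of future welfare:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Assume welfare per year satisfies $0 \le w(t) \le \bar{w}$, with saturation
$w(t) = \bar{w}$ for all $t \ge T$. A uniform speed-up by $\Delta$ years
replaces $w(t)$ with $w(t+\Delta)$, so its long-run value is
\begin{equation}
  \int_{0}^{\infty} \bigl( w(t+\Delta) - w(t) \bigr)\,dt
  = \int_{0}^{T} \bigl( w(t+\Delta) - w(t) \bigr)\,dt
  = \Delta\,\bar{w} - \int_{0}^{\Delta} w(t)\,dt
  \le \Delta\,\bar{w}.
\end{equation}
The first equality holds because the integrand vanishes once both trajectories
have saturated; the second follows from a change of variables. The benefit is
therefore capped by the size of the speed-up, however long the future runs.
\end{document}
```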

Nevertheless, there are arguments that speeding up growth might still have large benefits, both for improving long-term welfare and perhaps also for reducing existential risks. For debate on the long-term value of economic growth, check out this podcast episode with Tyler Cowen (80,000 Hours - Problem profiles). See the links for more details on these arguments.

Footnotes

[1] "Interestingly, this futility objection is almost never raised when a policymaker proposes to increase funding to some area of research, even though the argument would seem to cut both ways. One rarely hears indignant voices protest: “Please do not increase our funding. Rather, make some cuts. Researchers in other countries will surely pick up the slack; the same work will get done anyway. Don’t squander the public’s treasure on domestic scientific research!”

What accounts for this apparent doublethink? One plausible explanation, of course, is that members of the research community have a self-serving bias which leads us to believe that research is always good and tempts us to embrace almost any argument that supports our demand for more funding. However, it is also possible that the double standard can be justified in terms of national self interest. Suppose that the development of a technology has two effects: giving a small benefit B to its inventors and the country that sponsors them, while imposing an aggregately larger harm H—which could be a risk externality—on everybody. Even somebody who is largely altruistic might then choose to develop the overall harmful technology. They might reason that the harm H will result no matter what they do, since if they refrain somebody else will develop the technology anyway; and given that total welfare cannot be affected, they might as well grab the benefit B for themselves and their nation. (“Unfortunately, there will soon be a device that will destroy the world. Fortunately, we got the grant to build it!”)" (Bostrom, Superintelligence, pp. 228, Chapter 14, 2014)
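
As a hedged aside (my own formalization of the footnote’s reasoning, reusing only the symbols B and H that Bostrom introduces): if the harm falls on everyone regardless of who develops the technology, the harm term cancels out of the developer’s comparison, which is why even a largely altruistic actor may go ahead.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $B > 0$ be the benefit to the developer and their country, and $H > B$ the
aggregate harm. Assume the technology gets built either way, so the harm is
incurred regardless of the developer's choice:
\begin{align}
  \text{payoff if they develop:} &\quad B - H, \\
  \text{payoff if they refrain:} &\quad -\,H .
\end{align}
The difference is $(B - H) - (-H) = B > 0$: the harm $H$ drops out of the
comparison entirely, so developing looks strictly better to the developer,
even though the technology is harmful overall.
\end{document}
```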

Comments

I wrote this up because I wanted a single resource I could send to people that explained differential technological development.

I made it quickly, in about an hour, so I'm sure it's lacking in places and I'd appreciate any comments and suggestions people may have to improve it. You can also comment on a GDoc version of this here: https://docs.google.com/document/d/1HcLcu-WObHO8y45yEMICfmqNpeugbmUX9HdRfeu7foM/edit?usp=sharing

Just want to say that I like it when people (a) try to create nice, quick summaries that can be sent to people or linked to in other things,[1] and (b) take a quite iterative approach to posting on the forum, where the author continues to solicit feedback and make edits even after posting.

On (b), I've often appreciated input on my posts from commenters on the EA Forum and LessWrong, and felt that it helped me improve posts in ways that I likely wouldn't have thought of if I'd just sat on the post for a few more weeks, trying to think of more improvements myself. (Though obviously it's also possible to get some of this before posting, via sharing Google Docs.)

[1] EA Concepts already partly serves this role, and is great, but there are concepts it doesn't cover, and those that it does cover it covers very briefly and in a slightly out-of-date way.

Nice, concise summary!

I've previously made a collection of all prior works I've found that explicitly use the terms differential progress / intellectual progress / technological development. You or readers may find some of those works interesting. I've also now added this post to that collection.

I also just realised that that collection was missing Superintelligence, as I'd forgotten that that book discussed the concept of differential technological development. So I've now added that. If you or other readers know of other relevant works, please comment about them on that collection :)

Thanks, I also think writing this was a good idea.

Growth can’t continue indefinitely, due to the natural limitations of resources available to us in the universe.

This reminded me of arguments that economic growth on Earth would necessarily be diminished by the limits of natural resources, which seem to forget that with increasing knowledge we will be able to do more with fewer resources. For example, consider how much more value we can get out of a barrel of oil today than 200 years ago.

Indeed. Although there is an upper limit still, since there surely is some limit to how much value we can extract from a resource and there are only a finite number of atoms in the universe.

But is that upper limit relevant? If we consider all the possible combinations of all the atoms in the universe, that number is so huge that we certainly cannot conclude the effect is small on a long-term view of mankind.

A more relevant argument would be: if we manage to go 100 years faster, the long-term impact (say, in year 10,000) would be the difference in welfare between living in 1800 and living in year 10,000, for the population present during those 100 years. (For mathematicians: the integral of the marginal improvement over the 100 years of advance.)

Compared to reducing an existential risk, that seems like a lower impact, since that impact would be on all the welfare of all future generations. (The integral of x% of all future welfare.)

So the further ahead in time we look (assuming we manage to survive), the more important it is to "not screw up" compared to going faster right now, even without assuming any cap on potential growth.
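
To spell out the comparison gestured at in this comment (a rough sketch using the commenter's own framing; w(t) and x are illustrative symbols, not the commenter's notation): a 100-year speed-up buys the welfare difference over a 100-year window, while reducing existential risk by a fraction x scales the whole of future welfare.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With $w(t)$ denoting welfare per year and $x$ the fraction by which
existential risk is reduced:
\begin{align}
  \text{value of a 100-year speed-up} &\approx
    \int_{0}^{100} \bigl( w_{\text{advanced}}(t) - w_{\text{baseline}}(t) \bigr)\,dt, \\
  \text{value of the risk reduction} &\approx
    x \int_{0}^{\infty} w(t)\,dt .
\end{align}
The first integral is bounded by 100 years of welfare at the advanced level;
the second grows with the entire future, so on a long enough view it dominates.
\end{document}
```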
