This problem profile was written for 80,000 Hours. It's based largely on Thoughts on nanotechnology strategy research as an EA cause area and Open Philanthropy's cause report on atomically precise manufacturing, and is written for an audience broadly unfamiliar with EA.

Summary

Both the risks and benefits of advances in atomically precise manufacturing seem like they might be significant, and there is currently little effort to shape the trajectory of this technology. However, there is also relatively little investment going into developing atomically precise manufacturing, which reduces the urgency of the issue.

Our overall view

Potentially top depending on fit

Working on this problem could be among the best ways of improving the long-term future, but there are fewer high-impact opportunities to work on this issue than on our top priority problems.

Why could risks from atomically precise manufacturing be a pressing problem?

Atomically precise manufacturing is a form of particularly advanced nanotechnology. With atomically precise manufacturing we could build products out of individual atoms and molecules, allowing us to create a very wide range of products with essentially no flaws. Effectively, this would be like having perfect 3D printers that can produce anything.

Atomically precise manufacturing might be feasible, and there’s incentive to develop it

Molecular machines – tiny mechanical machines on the scale of just a few individual molecules – exist in nature, which means they’re definitely possible to create. In fact, we’ve actually produced a wide variety of simple artificial molecular machines. With time, we may be able to design and produce machines as small and complex as biological organisms.

This could be useful for a number of reasons:

  • Cheap energy — we may be able to produce even more efficient batteries and solar cells. The use of semiconductors in these devices means that they’re already one of the things we make to the highest degree of precision – it’s possible that even more precision could improve these devices further.
  • Carbon capture and storage — we may be able to produce cheap, high-performance nanoscale devices to remove CO2 from the atmosphere.
  • Medicine — we may be able to produce devices on the scale of human cells for targeting medical issues.
  • Cheap manufacturing — we may be able to take apart most things (even e.g. trash) and use the individual atoms to produce anything else, meaning that everything we manufacture could theoretically be made in one location using only extremely cheap inputs.

While we’re confident it’s possible to create the technology required for simple atomically precise manufacturing, it’s not clear that we could create technology advanced enough to do everything in the above list. But if we can do any of the above, it seems like there will be significant incentive to develop and produce this technology.

There are risks associated with atomically precise manufacturing

There appear to be substantial (and perhaps even existential) risks associated with developing atomically precise manufacturing:

  • Widespread access to atomically precise manufacturing could lead to widespread ability to unilaterally produce things as destructive as nuclear weapons or catastrophic pandemics.
  • States with this technology may have an incentive to produce new kinds of weapons with atomically precise manufacturing (even if most people don’t have access to this technology).
  • Manufacturing that can disassemble and produce products from extremely cheap inputs could lead to a much faster design and prototyping cycle for weapons.
  • We may be able to produce more powerful computers, which could increase risks from transformative AI.

That said, there are also reasons to think that atomically precise manufacturing could decrease existential risks. For example:

  • Medical improvements could increase pandemic resilience.
  • Atomically precise manufacturing might be better at producing defensive or strategically useful weapons that pose less risk to humanity as a whole, reducing risks from things like nuclear weapons.

Overall, we’re not sure whether developing atomically precise manufacturing would be good or bad.

When can we expect to develop this technology?

It’s really hard to say.

Ben Snodin, a researcher at Rethink Priorities, guessed there’s something like a 4–5% probability of advanced nanotechnology[1] arriving by 2040 (though he emphasises the guess is unstable).

A process for automating the advancement of science and technology may significantly shorten how long it takes us to develop atomically precise manufacturing. However, such a process would be transformative in its own right, so it’s not clear to us whether any work done on the development of atomically precise manufacturing now would be relevant if the technology is developed as a result of automated research.

We might be able to reduce the risks – though we’re pretty unsure how

We’re not currently sure what the best things to do to reduce this risk might be — we think more research in this area would be valuable.

But it does seem plausible that there are things that could be done, particularly through developing better strategies for managing the development of nanotechnology. These include:

  • Identifying and accelerating any particular areas of technical research that could make atomically precise manufacturing more likely to be beneficial overall.
  • Researching policy recommendations around atomically precise manufacturing.

Work in this area is extremely neglected

Snodin estimates that there is around one full-time equivalent person working on nanotechnology strategy as of 2022. We think that at least 2–3 people should be working full time on this issue.

What are the major arguments against this problem being pressing?

There are several possible reasons to think this problem isn’t a high priority to work on:

  • It’s not clear that atomically precise manufacturing is completely feasible. Snodin guesses that there’s a 20–70% chance that advanced nanotechnology is possible in principle to build.
  • It might be too early in the technology’s development for work on this now to have any concrete impact on atomically precise manufacturing’s ultimate trajectory.
  • It seems very likely that there will be warning signs about the development of atomically precise manufacturing (for example, commercial research entities forming around the subject). This means we may be able to mitigate the risks in the future without doing any work on the subject now.
  • There could be harms to engaging in work around atomically precise manufacturing. For example, if the technology would truly be harmful overall, then speeding up its development through raising interest in the topic could cause harm. Also, ill-considered initial work in reducing risks from atomically precise manufacturing may put other people off working on the topic in the future. (For more, see our article on avoiding accidentally doing harm.)
  • Other problems might be more pressing overall. For example, we think there are more concrete ways to reduce the risks from catastrophic pandemics, and we think that we’re more likely to face an existential threat from nuclear weapons or transformative artificial intelligence.

What can you do to help?

At the moment, given the huge uncertainties involved, we think research into the best approaches for mitigating risks from atomically precise manufacturing could be extremely valuable.

What are the key organisations in this space?

Who might be a good fit for this work?

Some qualities indicating a high level of personal fit for this work include:

  • A background in a relevant academic area (including chemistry, physics, biology, and/or materials science).
  • The judgement needed to avoid accidental harm.
  • The willingness and ability to work on a difficult, unexplored, and speculative problem. This involves figuring out for yourself what is important and working on it without much guidance, so being entrepreneurial, independent, and patient will be very helpful.

Learn more about atomically precise manufacturing

This problem profile is primarily based on:

  1. ^

    Snodin defines advanced nanotechnology as:

    any highly advanced technology involving nanoscale machinery that allows us to finely image and control processes at the nanoscale, with manufacturing capabilities roughly matching, or exceeding, those of consequential APM.

    This is technology that is similar in nature and similarly impactful to atomically precise manufacturing.

Comments

Ofer

Hi there!

There could be harms to engaging in work around atomically precise manufacturing. For example, if the technology would truly be harmful overall, then speeding up its development through raising interest in the topic could cause harm.

I agree. Was there a meta effort to evaluate whether the potential harms from publishing such an article ("written for an audience broadly unfamiliar with EA") outweigh the potential benefits?

Yes, there was!

Widespread access to atomically precise manufacturing could lead to widespread ability to unilaterally produce things as destructive as nuclear weapons.

See below an excerpt from the atomically precise manufacturing (APM) section of Michael Aird's shallow review of tech developments that could increase risks from nuclear weapons (link):

The concern related to nuclear risk is that APM could potentially make proliferation and/or huge nuclear arsenal sizes much more likely. Specifically:

  • APM could make it much harder to monitor/control who has nuclear weapons, including even non-state groups or perhaps individuals.
  • APM could make it much more likely that non-state groups or individuals would not only attain one or a few but rather many nuclear weapons.
  • APM could make it so that nuclear-armed states can more cheaply, easily, and quickly build hundreds, thousands, tens of thousands, or even millions of nuclear weapons.
    • The enhanced ease and speed of doing this could undermine arms race stability.
      • This might also mean that even just the perception of APM being on the horizon could undermine strategic stability, even before APM has arrived.
    • And nuclear conflicts involving huge numbers of warheads are much more likely to cause an existential catastrophe, or otherwise increase existential risk, than nuclear conflicts involving the tens, hundreds, or thousands of nuclear weapons each nuclear-armed state currently has.
  • A counterpoint is that at least some nuclear-armed states already have the physical capacity to make many more nuclear warheads than they do (given enough time to build up nuclear manufacturing infrastructure), but having many more warheads doesn’t appear to be what these states are aiming for. This is some evidence that nuclear weapons becoming easier to produce might not result in huge arsenal sizes. But:
      • This is only some evidence.
      • And some nuclear-armed states (e.g., North Korea, Pakistan, and maybe India) may be developing nuclear weapons about as fast as they reasonably can, given their economies. As such, perhaps having access to APM would be likely to substantially affect at least their arsenal sizes.

(Note that we haven’t spent much time thinking about these points, nor seen much prior discussion of them, so this should all be taken as tentative and speculative.)

Nice to see this! I remember being surprised a few years back that nobody in EA besides Drexler was talking about APM, so it's nice to see a formal public writeup clarifying what's going on with it. I'm leery of infohazards here, but conditional on it being reasonable to publish such an article at all, this seems like a solid version of that article. 

Re: key organizations, a few thoughts:

  • FHI seems like another natural place, since Drexler's there and (I assume) they're pretty open to hosting other people working on APM. 
  • I would be curious if Ben Snodin has a take on whether Rethink Priorities is a particularly good place to work, relative to e.g. being an independent researcher or working at FHI. RP's General Longtermism team could host work on APM risk, and proximity to Ben is useful, but AFAIK Ben is not currently doing APM-related work and doesn't particularly plan to.