
That link is to a 2015 “shallow investigation” by the Open Philanthropy Project. Their summary reads:

Background: Atomically precise manufacturing is a proposed technology for assembling macroscopic objects defined by data files by using very small parts to build the objects with atomic precision using earth-abundant materials. There is little consensus about its feasibility, how to develop it, or when, if ever, it might be developed. This page focuses primarily on potential risks from atomically precise manufacturing. We may separately examine its potential benefits and development pathways in more detail in the future.

What is the problem? If created, atomically precise manufacturing would likely radically lower costs and expand capabilities in computing, materials, medicine, and other areas. However, it would likely also make it substantially easier to develop new weapons and quickly and inexpensively produce them at scale with an extremely small manufacturing base. In addition, some argue that it would help make it possible to create tiny self-replicating machines that could consume the Earth’s resources in a scenario known as “grey goo,” but such machines would have to be designed deliberately and we are highly uncertain of whether it would be possible to make them.

What are possible interventions? A philanthropist could seek to influence research and development directions or support policy research. Potential goals could include achieving consensus regarding the feasibility of atomically precise manufacturing, identifying promising development strategies, and/or mitigating risks from possible military applications. We are highly uncertain about how to weigh the possible risks and benefits from accelerating progress toward APM and about the effectiveness of policy research in the absence of greater consensus regarding the feasibility of the technology.

Who else is working on it? A few small non-profit organizations have explicitly focused on research, development, and policy analysis related to atomically precise manufacturing. Atomically precise manufacturing receives little explicit attention in academia, but potential enabling technologies such as DNA nanotechnology and scanning probe microscopy are active fields of research.

A key passage I’d highlight is:

Unless APM is developed in a secret “Manhattan Project”—and there is disagreement about how plausible that is—the people we spoke with believe it would be extremely unlikely for an observer closely watching the field to be surprised by a sudden increase in potentially dangerous APM capabilities.

That said, their list of “Questions for further investigation” includes:

How confident can we be that there will be substantial lead time between early signs that APM is feasible and the deployment of APM?

Also on this topic, 80,000 Hours write:

Both the risks and benefits of advances in this technology seem like they might be significant, and there is currently little effort to shape its trajectory. However, there is also relatively little investment going into making atomic-scale manufacturing work right now, which reduces the urgency of the issue.

Why I’m posting this here

The handful of public, quantitative existential risk estimates that exist suggest APM - or perhaps nanotechnology more broadly - may be one of the largest sources of existential risk. (Of course, it's hard to say what conclusions to draw from these estimates, for many reasons.)

It also seems like APM/nanotechnology was among the most prominently discussed existential risks until sometime around 2010, but that it’s been discussed less since then. And I think there has been relatively little public discussion of why that shift in focus occurred. (See also.)

So I think it’d be good if it was a bit easier for people to: 

  • see indications of why that apparent shift in focus may have happened
  • see arguments that can help them come to their own views regarding APM and its risk (including on whether this area may warrant a bit more attention on the margin, as 80,000 Hours tentatively suggests it might)

To that end, I’ve made this link post, as well as the tag Atomically Precise Manufacturing.

Feel free to use this comment section for general discussion of whether EAs should pay more attention to APM, why or why not, and what in particular we should consider doing about APM (if anything).

Comments

I raised a similar question on the Effective Altruism Facebook group last year.

Notable responses included a comment from Howie Lempel, which reiterated the point in the Open Phil article that it seemed unlikely someone closely watching the field would fail to notice a sudden increase in capabilities.

Rob Wiblin also commented to make clear that 80,000 Hours doesn't necessarily endorse the view that nanotech/APM poses as high a risk as that survey suggests.

It looks like FHI now wants to start looking into nanotechnology/APM more and build more capacity in that area. They're hiring researchers in a bunch of areas, one of which is:

Nanotechnology: analysing roadmaps to atomically precise manufacturing and related technologies, including possible intersections with advances in artificial intelligence, and potential impacts and strategic implications of progress in these areas.

That's interesting. As far as I can tell, Eric Drexler was basically the person who kicked off interest in and concern about this tech from the 1980s onwards.* His publications on the topic have accrued tens of thousands of citations. But Drexler's work at FHI now focuses on AI.

(I came to this year-old post because some of the early transhumanist / proto-EA content (e.g. Bostrom and Kurzweil) seems to mention nanotech very prominently, sometimes preceding discussion of superintelligent AI, and I wanted to see if any aspiring EAs were still talking about it.)


*General impression from some of the transhumanist stuff I've been reading. The Wikipedia page on nanotechnology says:

The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts and implications. The emergence of nanotechnology as a field in the 1980s occurred through convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework for nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the prospects of atomic control of matter.

One somewhat tangential thing you might find interesting is how prominent nanotech seems to be in many of the "Late 2021 MIRI Conversations". None of the mentions there seem to suggest that anyone should try to study or influence nanotech itself; rather, they suggest nanotech could be a key tool used by agentic AI systems.
