Background
In my previous post, I discussed Cortical Labs, an Australian biocomputing startup. After achieving early success combining human neurons with silicon chips, Cortical Labs launched a commercial product strategy: selling biocomputers that contain living human neurons and offering “Wetware-as-a-Service” cloud access to their computing infrastructure. The company envisions broad commercialization of their products, across applications in AI systems, robotics, and military technology.[1] Their CEO has stated: “Ultimately, the goal for Cortical Labs is to be like Nvidia, which is to enable the creativity of other users to build on this technology.”[2]
In that post, I defined “brain farming” as the commercialization of novel forms of consciousness for computational purposes. I argued that while current biocomputing systems likely lack moral value, there is a strong case for taking preemptive action now to prevent the creation of a brain farming industry.
Proposal
I'm highly confident that action should be taken to prevent a brain farming industry. In this post I will outline a stronger claim about which I am far less certain:
Biological computing systems, such as Cortical Labs' "Wetware-as-a-Service" platform, should be globally banned from commercialization. These technologies should be restricted to carefully regulated research contexts, with deployment to commercial, military, and consumer applications prohibited entirely.
Scope of the Ban
To be clear, this proposal targets commercialization, not research. The distinction is strategic: it focuses the argument on what actually matters, namely whether potentially conscious beings should be commercially exploited for profit, and it leaves advocates of commercialization defending a much less popular position than advocates of research.
Argument
I believe commercialization should be banned at this stage of early development, for the following three reasons:
- We know with certainty that human brain cells can support conscious experience. This represents the only form of computing where consciousness risk is grounded in direct evidence rather than theory. While current biocomputers are unlikely to have moral value, market forces will reward increasingly complex and capable systems. We simply don't know how quickly this technology will advance or when moral thresholds might be crossed.
- Prevention is far easier than reform. Stopping an industry before it exists requires political will. Dismantling an established one requires overcoming economic entrenchment, coordinated opposition, and deeply normalized practices. The lessons of factory farming are powerfully relevant here. Despite overwhelming evidence of animal suffering and widespread public concern for farm animals, advocates are forced to fight tooth and nail against entrenched interests, public complacency, and powerful lobbying groups.
- The precedents established here could extend far beyond biocomputing. A coordinated effort to ban the commercialization of such technology, driven by concerns about consciousness exploitation, could build the political coalitions, legal frameworks, and institutional capacity that will shape how humanity approaches questions of digital consciousness.
Objections
In discussing this proposal with others, I've encountered many counterarguments. Here are the three I find most compelling, and why I remain supportive of a commercialization ban despite them:
1. This distracts from digital consciousness, which matters more.
I agree that there is a significant framing risk here. If this proposal becomes associated with protecting "human neural tissue" rather than "potentially conscious systems," it could reinforce the idea that only biological substrates deserve protection. There is also a risk of timing mismatch: resources mobilized for biocomputing regulation could divert attention and funding from digital mind advocacy.
However, whether digital consciousness is even possible remains unknown. Many people in the EA community treat computational functionalism as a foregone conclusion, but it may well be the case that conscious experience requires a biological substrate. In that case, biocomputing exploitation would constitute the entire scope of novel consciousness risk.
And if digital consciousness is possible, correctly framed precedents from biocomputing could still benefit future digital minds. Building institutional capacity and legal frameworks around concrete consciousness protection issues now could create infrastructure that can extend to digital minds if/when they emerge.
2. Biocomputing could provide valuable AI safety benefits.
Arguments here range from the practical (biological systems can be more easily terminated) to the speculative (superintelligent biocomputers might be inherently more human-aligned). However, given how fast AI capabilities are advancing relative to biocomputing, it seems unlikely that biocomputers will become capable enough, soon enough, for these direct safety advantages to matter.
The strongest form of this objection is that commercialization could accelerate research into biological intelligence and consciousness, indirectly yielding insights that are valuable for AI alignment and AI welfare. However, any legitimate research gains can be achieved through regulated research without commercial deployment. The marginal value of commercialization comes with serious strings attached: "Wetware-as-a-Service" for commercial, military, and consumer applications. The benefits of commercialization are highly speculative, while the risks of establishing a brain farming industry are immediate and concrete.
3. A global ban is unenforceable and unlikely.
We have achieved meaningful global coordination on comparable scientific issues, such as standards for human experimentation and moratoria on gain-of-function research. It is widely accepted that biological neural networks can support conscious experience and suffer. The widespread commercialization of computing technology built on human neural tissue presents a concrete risk of future exploitation. The argument for a ban is therefore tangible to both policymakers and the public, and there is not yet any organized opposition. This seems precisely the scenario in which a global ban is most likely to work.
Conclusion
If established, a brain farming industry may become entrenched and potentially irreversible. I'm highly confident that now is the time for preemptive action. I argued above for a global ban on commercial biocomputing, which I believe is a viable first step toward preventing brain farming.
I welcome any thoughts or criticisms on this proposal. If you are interested in collaborating on policy ideas, please reach out.
Great post! I think a ban on "brain farming" is extremely tractable, as I expect it to have a severe "ick" factor among the general population (as it did for me).
There's a case for trying to get a ban as early as possible, before there is any entrenched opposition in place, or any economic penalty for existing industries.
Both very good points: the ick factor seems to be an important advantage for our effort, and the fact that we early birds are here before this becomes a political contest, or someone's livelihood, is indeed another opportunity. 👍
This is an excellent post. I went into this with only a cursory understanding of biocomputing, and by the end I was convinced as strongly as you intended me to be of what you seek to make happen. I understand clearly and agree that we're at a very opportune time to enact a ban on most biocomputing, that such a ban is reasonable, and that it is quite possible.
If animal agriculture were a novel and experimental concept in our modern society, it seems likely that the public, knowing what it entailed, would readily support a ban before the practice became mainstream.
The consideration you give to concerns of framing risk and the uncertainty of moral value at this early stage shows that you've thought this through deeply and professionally, and it goes a long way toward quelling the little uncertainty I had going in. The two arguments that convinced me most are those on setting an extensive precedent and the success of early action on other scientific issues.
Very nice work! I wish you great success in this endeavor you are already working hard for, and thank you on behalf of the conscious agents this effort may well save. 🧠