
Summary of key arguments

At present, there are no rules around the creation of artificial sentient beings. Anyone can create them, own them, make them do whatever they want, and treat them however they want to. 

This is bad, and could lead to a lot of suffering for these beings. 

The creation of artificial sentience ought not be left as an unregulated free-for-all. It ought to be regulated by governments, on behalf of society.

The moral stakes are so high that we should wait before creating artificial sentience, rather than rush into creating it, whether deliberately or accidentally. Very possibly, it may be best not to create it at all. 

Responsible researchers and companies should not seek to deliberately create artificial sentience.  

And governments should take steps now to prevent the creation of artificial sentience, at least in the short and medium term. 

Even if this proves difficult or impossible, we should definitely ban the creation of artificial suffering.

In future, if we decide to permit the creation of artificial sentient beings, it should be carefully regulated, in order to protect the interests of these potentially vulnerable beings. 

Much more work is needed to figure out exactly how to implement this approach, and make it work in practice. Readers of this piece should contribute to this process. 

Introduction

We will, collectively, as a society, need to figure out how to deal with the potential creation of artificial sentience. Through the political process, societies and their governments will need to decide whether they want to permit the creation of an entirely new class of sentient beings, and how they want to regulate this. We’ll have to answer questions such as: 

  • Should we create artificial sentient beings?
  • Who should be able to create them? Anyone who wants to?
  • If they are created, should there be some rules about how they are treated? Who should set these rules, and what should the rules be?
  • Should the creation of certain types of artificial experience be permitted, or not? Might we want to prevent the creation of artificial suffering, but allow the creation of artificial happiness?

It feels like our options fall into a few broad categories:

  1. A free-for-all: no rules or regulations on the creation of artificial sentience
  2. Voluntary codes-of-conduct
  3. Government-mandated regulation
  4. A ban - temporary, or indefinite - on the creation of artificial sentience, or of specific types of sentience. 

This post is an initial, incomplete attempt to flesh out some of these options, and to set out the pros and cons of each. 

 

Context 

There is massive uncertainty about whether artificial sentience is possible, and how we will know whether a given system is sentient. Much more research is needed into these foundational questions. But we should also start thinking now about how to regulate. We can't wait around for all of the philosophical questions related to consciousness and sentience to be solved - if they ever will be. The rapid pace of AI development means that this is now a practical, real-world issue that we can and should engage with today.

There are probably two broad types of actors who might create artificial sentience:

  1. Researchers who are deliberately trying to create it. (Here’s one example. Some leading consciousness researchers think it would be “monumentally cool” to create artificial consciousness. Others are even deliberately exploring how to create ‘pain’ in artificial beings.)
  2. Leading AI labs who aren’t trying to create artificial sentience, but who are building complex AI systems that might end up being sentient. 

And we’ll have to think about two things: 

  1. Research which could lead to the creation of artificial sentience
  2. The actual creation of artificial sentience. 

(In addition to this, there are some areas of biological research that may end up creating novel forms of sentience. For example, brain organoids and ‘DishBrain’. I won’t cover these in this piece, though some similar considerations may be relevant.)

This piece will avoid going into detail on the practicalities, science and philosophy of this field; it will just focus on some high-level considerations around the broad policy options for our approach to the issue. 

Pros and cons of different broad options for the regulation of artificial sentience

We will now run through the main broad options for regulating the creation of artificial sentience, and examine their pros and cons. 

There are two broad classes of things to think about here. Firstly, what are the ‘objective’ pros and cons of each option for regulation, as an end state - assuming we can actually get to that end state? Secondly, what are the tactical pros and cons of arguing, and lobbying, for each option? For example, we might think that voluntary codes of conduct are a sub-optimal option, but that they will be tactically easier to achieve in the short term, so we should start by arguing for them. 

We’ll start by looking at the ‘objective’ pros and cons of each option for regulation, and then, later in this piece, consider some thoughts around tactics. 

Option 1: Free-for-all

Definition 

There are no rules, regulations or laws around the creation of artificial sentience. Anyone who wants to try to create an artificial sentient being can go ahead and do so. They don’t have to apply for any kind of permission, or abide by any rules or guidelines. Anyone can do any type of research and build any type of system, with any chance of it being sentient, with no rules around this. 

Anyone who succeeds in creating an artificial sentient being - whether deliberately or not - can then treat them however they like, and make them do whatever they want. The people who create these beings will be their owners; the beings will be their property. The owners can cause them pain and suffering - whether deliberately, or inadvertently - with no consequences or restrictions. 

This is the current state of affairs.  

Pros of a free-for-all

  • Accelerates the creation of artificial sentience. Having zero rules and regulations probably means that we will create artificial sentient beings as soon as possible. If we believe, for some reason, that these beings will generally be treated well, and enjoy lives that they are happy to live, then we might welcome this: any restriction or delay would make the world worse off. Let’s just go ahead and create them, and trust to the goodwill of their creators that they will be treated well and everything will turn out fine. 
  • General benefits of unrestricted AI development. AI advancement plausibly brings big benefits for addressing various problems, and insofar as restrictions on artificial sentience also restrict AI advancement, this could be bad.

Cons of free-for-all 

  • Potential for a gigantic moral catastrophe. Leaving the creation of artificial sentience totally unregulated may lead to a gigantic moral catastrophe, and massive amounts of suffering. Looking at the historical human treatment of other sentient beings - such as ‘out-group’ humans, and animals - suggests that there is a real chance of us doing serious harm to any new class of artificial sentient beings. Given the possibly astronomical number of such beings, this could be very, very bad, and create a big S-risk - suffering risk. The beings that we create will likely not look like us, and may not trigger the empathy-driving pathways that tug on our heartstrings and elicit compassion. As with factory farming and animal experimentation, commercial incentives may push towards doing harm and causing them to suffer - even without any bad intent from us. We might not know if what we’ve created is sentient or not - so we might not even know that we are doing bad things to them. Although it seems highly unlikely that current AI systems are sentient, the way we treat them at present would be bad if it were repeated on potentially sentient systems in the future. Bostrom and Shulman (2023) write: “training procedures currently used on AI would be extremely unethical if used on humans, as they often involve: no informed consent; frequent killing and replacement; brainwashing, deception, or manipulation; [...] routine thwarting of basic desires; for example, agents trained or deployed in challenging environments may possibly be analogous to creatures suffering deprivation of basic needs such as food or love; [...] no oversight by any competent authority responsible for considering the welfare interests of digital research subjects or workers.” They go on to argue that “as AI systems become more comparable to human beings in terms of their capabilities, sentience, and other grounds for moral status, there is a strong moral imperative that this status quo must be changed.” 

Concluding judgment 

The current status quo, of zero rules and regulations around the creation of artificial sentience, is terrible and indefensible. It could lead to a moral catastrophe. It seems very bad to bring into the world, in a totally unregulated way, a new category of sentient beings who enjoy zero legal or other protection. 

We should take action. The creation of artificial sentience ought to be regulated in some way. 

Potential next actions 

People who agree that the current unregulated free-for-all is bad should engage on this issue, and help to flesh out, and deliver, options for regulation. 

Option 2: Voluntary codes-of-conduct

Definition 

Companies and researchers might sign up to voluntary, self-imposed rules or ‘codes of conduct’ around the creation and treatment of artificial sentience. 

Pros of voluntary codes-of-conduct 

  • Short-term feasibility. In the short term, voluntary rules may be the most feasible step. AI labs and consciousness researchers are much more interested in, and expert on, this area than government officials. They can take action immediately to help safeguard against a moral catastrophe. Artificial sentience protections could become part of the real, live AI safety debate and process that is underway at the moment - e.g. relating to Responsible Scaling Policies.
  • Establishes the credibility of artificial sentience as a serious area of concern. In the short term, a light-touch, voluntary approach may help secure buy-in from leading AI labs and researchers. This could be an important first step in terms of establishing the legitimacy and credibility of this nascent field.
  • Offers some protection. Even without legal enforceability, such policies may provide some protection for artificial sentient beings, and make it more likely that companies will act in accordance with their stated principles.
  • Creates peer-pressure for higher standards of protection for artificial beings. Companies which sign up for voluntary actions may become allies in pressuring other companies to sign up too, to keep a level playing field. 

Cons of voluntary codes-of-conduct 

  • Offer very little real protection. Companies have an incentive - perhaps even a legal duty - to maximize shareholder value. The interests of shareholders may not be exactly the same as the interests of the artificial sentient beings that the company owns. Thus, voluntary codes of conduct are unlikely to offer adequate protection. And of course, this approach still allows the development of sentient AIs, which may lead to astronomical suffering.
  • Optional. Less-cooperative actors may just opt-out and benefit from a competitive advantage.
  • ‘Sentience-washing’. This approach may confer upon companies a public relations benefit, without adding much genuine protection for artificial sentient beings - ‘sentience-washing’, like ‘greenwashing’.

Concluding judgment 

This approach is probably a good place to start. Bostrom and Shulman (2023) give some thoughtful tactical reasons for wanting to start with this sort of approach towards this still-nascent field. 

But, I claim that it’s unlikely to be where we want to end up. To guarantee the protection of the interests of artificial sentient beings, we will need much more than purely voluntary action by the most responsible actors. We will need proper regulation, with teeth. 

Potential next actions

  • Companies might start thinking about drawing up voluntary codes of conduct on this issue.
  • Other interested actors might start thinking about exactly what such codes of conduct should contain, and lobbying for their adoption. 

Option 3: Government-mandated regulation

Definition 

Mandatory, government-enforced rules which govern who may create sentient AIs, what types of beings can be created, how they may be treated, etc.

John Basl and Eric Schwitzgebel have argued for regulation - the creation of ‘oversight committees’ to decide what research should be permitted. Perhaps some version of this should be government-enforced. 

An incomplete/partial initial list of pros and cons of this approach:

Pros of government-mandated regulation 

  • Could provide meaningful safeguards against moral risk, whilst still allowing the creation of beings with positive welfare. 
  • Limits the number of organisations that may create sentient AIs, making monitoring easier.

Cons of government-mandated regulation

  • Still allows the development of sentient AIs, which may lead to astronomical suffering.

Potential next actions

Extensive further thought is needed on this, to work up concrete options and consider the pros and cons more deeply. The preliminary steps for such work should start now. 

Option 4: Banning the creation of artificial sentience (permanently, or temporarily) 

Definition 

Thomas Metzinger has argued for a 50-year moratorium on the creation of artificial sentience, “strictly banning all research that directly aims at or knowingly risks the emergence of artificial consciousness”. 

Pros of preventing the creation of artificial sentience 

  • May decisively prevent huge amounts of suffering. If we turn history down the path of "no artificial sentience gets created", we avoid all the potential astronomical net-negative scenarios. This could be the single thing which prevents the largest amount of suffering, ever. This feels like a really big and important consideration. A lot of people, rightly, worry about S-risks. A world in which artificial sentience simply isn’t developed, is a world where the worst S-risks can never happen. Preventing the development of artificial sentience might be the single best thing ever, morally, and save huge numbers of beings from huge amounts of pain and agony.
  • Gives us time and option value. We can always lift the ban later. Perhaps after a ‘long reflection’, we’ll figure out a way to guarantee that artificial sentient beings will be treated well, and live happy lives. We can then reverse the ban. This doesn’t apply the other way around - it’s harder to put the genie back in the bottle than to prevent it getting out in the first place. It would have been better, for example, to have prevented factory farming from being developed, rather than to try to fight it once it had been developed and proved economically advantageous.
  • Deontological reasons. We might think it's simply deontologically wrong for a private individual or company, or a government, to be able to create and own another sentient being. Therefore, we should try to prevent this from happening.
  • Potential benefits for AI safety. If we build sentient beings, we may decide that we should grant them rights and respect their interests. In some circumstances, maybe this could be bad for AI safety. If we simply don’t build sentient AIs, we don’t have this problem.
  • Frees up time to work on other priorities. If we can successfully ensure that artificial sentience is not created, it will free up the time of altruistic people who can simply concentrate on the sentient beings that do exist, rather than having to worry about a vast new class of artificial sentient beings.

Cons of a ban

  • We will miss out on the benefits of artificial sentience. If we don’t create artificial sentience, there might be a lot of happy sentient beings who would never be created. Depending on one’s view of population ethics, this could be very bad.
  • Risks from partial enforcement. If not fully enforced, a ban might just hold back the most ethically-concerned labs/institutions; thus, by definition, the least-ethical institutions will be the ones who create artificial sentience. Similarly, maybe a ban would only be enforced in certain countries. Sentient AIs may be produced in other countries first, and/or only, where we might expect them to be treated less well. In addition, perhaps companies or researchers may work underground, secretly, on the creation of artificial sentience.
  • We might miss out on the benefits of related research. If we go for a fairly broad ban on research which could lead to the creation of artificial sentience, this might hold back progress on a bunch of issues that would otherwise have provided benefits for society.
  • Encouraging cover-ups. If creating digital sentience is illegal, then a company or researcher who suspects they might have created artificial sentience may be unwilling to disclose this, and unwilling to take actions that could improve the welfare of the artificial sentient being, because they may face penalties if they ‘confess’ to having created sentience. 

Concluding judgment 

I find the idea of simply trying to prevent, or at least delay, the creation of artificial sentience very attractive, due to the force of the arguments in favor of buying us time and option value, and preventing astronomical suffering. 

One's foundational ethical views, and views on population ethics, will likely play a significant role in shaping one’s opinion on this. Those who are more focused on minimizing suffering might view the prevention or delay of artificial sentience as a favorable option. In contrast, a classical utilitarian who believes that artificial sentient beings will, on balance, likely have lives worth living, and who believes we should create lots of these beings, might be more skeptical about efforts to prevent the creation of artificial sentience. 

However, I argue that there are strong reasons to advocate for *delaying* the development of artificial sentience, even for those who are most excited about the potential of artificial sentience to lead to astronomical quantities of joy. 

If we do end up creating artificial sentience, it seems really, really important to ensure that the development of it goes as well as possible, with the strongest possible ethical safeguards in place, for the benefit of human society, and for the artificial beings themselves. To achieve this, we should pause and draw breath, before rushing headlong into creating artificial sentience. At the very least, a period of pause, during which we can organize and prepare thoroughly, feels essential before we take this momentous moral step. 

In the grand scheme of things, over the lifetime of the universe, a delay of a few decades or so is unlikely to significantly reduce total utility if it helps ensure that everything is done perfectly when we finally move forward. Plowing ahead with the current, unregulated free-for-all seems like a potential recipe for disaster and is certainly not the optimal way to approach this monumental development.

Thus, we should support, for the short and medium term, a ban on the creation of artificial sentience - at least until we figure out how to ensure everything goes well, if and when we do choose to create it. 

Potential next actions 

People interested in this should work up the details of how a ban would work, and campaign to get it enacted. 

Option 5: Banning the creation of artificial suffering   

Definition 

We could ban the creation of artificial suffering - ‘negative valence’ in artificial beings. It would be permitted to create artificial sentient beings with neutral or positive affect, but not permitted to create beings whose experience is dominated by suffering. 

Bostrom and Shulman (2023) propose that: “to the extent that we are able to make sense of a “zero point” on some morally relevant axis, such as hedonic well-being/reward, overall preference satisfaction, or level of flourishing/quality of life, digital minds and their environments should be designed in such a way that the minds spend an overwhelming portion of their subjective time above the zero point, and so as to avoid them spending any time far below the zero point.” 

I suggest that we should look to go further than this, ensuring that no time is spent below the zero point. And we should have this actually mandated and enforced - rather than simply requested.

Pros 

  • Best of both worlds. Prevents a moral catastrophe, whilst allowing the benefits of creating artificial sentience. We may end up with lots of happy artificial sentient beings, and no suffering ones.
  • Hard to oppose. While some actors may push back against a total ban on the creation of artificial sentience, it seems hard to imagine anyone arguing hard in favour of being allowed to create artificial suffering.

Cons 

  • Info-hazard. Tactically, perhaps even talking about the concept of artificial suffering might make it more likely to occur, for example by raising the salience of the concept in people’s minds.
  • Difficult to define. Predicting the overall experience of a being is a difficult forecasting challenge and a complex moral question. 

Concluding judgment

This feels pretty close to a no-brainer, to me. It seems clear that our default presumption should be to ban the creation of artificial suffering. 

Potential next actions 

We should work to bring about a ban on the creation of artificial suffering. There’s clearly a lot of work to do here, much of it quite fundamental and difficult. Interested actors should get to work on this.  

Tactical considerations 

As well as considering where we’d like to end up, in terms of regulation, we’ll also need to think tactically about the best practical course of action, in the world as it is today. 

Tactical advantages of advocating for a ban on the creation of artificial sentience 

There are some tactical advantages to making an ambitious, maximalist ask, such as for a ban on the creation of artificial sentience. These include:

  • Potentially achievable. There are examples of where humans have successfully prevented or limited the development of certain technologies. It might actually work, particularly if it turns out that it's actually pretty difficult to create artificial sentience.
  • Overton window/radical flank. If some actors advocate for a ban / moratorium, it makes milder regulatory proposals seem like the moderate, compromise option. Arguing for a big, maximalist goal - a total ban - may maximise the chance of getting some meaningful control/regulation, even if we don't succeed in actually getting a ban. Whereas if we start with a more minimalist ask, we might not even get that. If we assume that whatever we ask for will end up getting watered down, we might as well start with a big, maximalist ask, even if we don't think it's likely that we will actually get it.
  • Simple and clear. Whereas regulation could shift and dilute, a clear and simple ask and red line - don’t create artificial sentience (or suffering) - may be easier to maintain.
  • Public support. There may be public support for this. A survey by the Sentience Institute found that 69% of respondents said they would ‘support a global ban on the development of sentience in robots/AIs.’ Calling for a ban may resonate with a broad spectrum of opinion across society. 

Tactical disadvantages of advocating for a ban on the creation of artificial sentience

  • Provokes a backlash. Advocating for a ban may alienate AI researchers, and companies, who may resent attempts to stop their work. It may provoke an immediate, knee-jerk hostile reaction from them - including attempts to ridicule and denigrate the entire concept of artificial sentience. They may be joined by people with a generally strong anti-regulation gut intuition on policy matters.
  • Too early? Advocating for a ban now, when it’s arguably clear that there are no sentient AIs yet, may be too early, and risk a lack of credibility. It may entrench negative perceptions, because of the 'obviousness' that current systems don't deserve any sort of protection. Further, at this point, the people arguing against a ban may have more power and influence than those arguing in favor of it. Perhaps we should build a community of credible, pro-protections-for-artificial-sentient-beings advocates first.
  • Difficult to define. It’s likely to be difficult to define precisely what would be in scope for a ban. Concepts like 'sentience' and 'consciousness' are already very difficult to define and identify. The broader the definition, and the more cautious you are, the more research would be held back and the more people the regulation would anger.

Conclusion 

I think the principle that we ought not - at least not yet - to create artificial sentience, is a powerful and important one. 

The arguments for it are strong. 

It seems potentially achievable. 

It passes a common-sense gut check. The idea that we ought not to create - or at least *rush* to create - artificial sentient beings, feels pretty normal and sensible, and not a particularly wild or out-there ask. 

It may command wide support.

I think it’s valuable to start spreading this meme more widely, scrutinizing and debating it, and building a field of engaged people working on it. 

In terms of immediate next steps, I have sympathy for Bostrom and Shulman’s arguments in favor of taking a calm, thoughtful, non-sensationalising, non-adversarial, non-polarising approach towards talking about this issue. I agree with them that it’s probably too early for mass public outreach, or even, perhaps, lobbying of governments. 

But I also think we can and should be a bit bolder than they propose. I don’t think we should just assume that it’s inevitable that artificial sentience will be created. We should, at the very least, debate and question this assumption. And we should be clear that purely voluntary codes of conduct are not acceptable as an end-state, and that a ban or delay should be firmly on the table as a plausible policy option for debate and consideration. 

Potential next steps 

There is a huge amount of work to be done here. Specifically, potential next actions could include:

  • A comprehensive mapping of all the labs/projects which exist in the world today, which could lead to the creation of artificial sentience.
  • A comprehensive mapping of all the proposals for regulation/prohibition/voluntary action/guidelines related to artificial sentience research that have been put forward.
  • A systematic assessment of the pros and cons of each of the different proposals, inviting a range of views, and using tools like double-cruxing to try to reach some well-grounded conclusions.
  • An attempt to align on a set of policy proposals/asks, which we think will do the most good.
  • Working up these policy proposals into concrete, actionable forms. For example, potentially, detailed drafting of processes/rules and regulations/legislation that could be introduced, detailed consideration of exactly which fields and types of research should be covered, etc.
  • Taking action to implement these proposals. This could include:
    • Lobbying governments to enact legislation/regulation
    • Engaging with companies/researchers to adopt voluntary guidelines and standards
    • Contributing to public conversations on this issue.

If you are reading this and are interested in this topic, and minded to undertake further research and action, please comment below, and/or direct message me; I'd be happy to link you up with other people who are interested in this topic. 

Comments

I also wanted to add that, as someone who leans more towards valuing happiness and suffering equally, I still find the moratorium to be a good idea and don't feel pangs of frustration about the lost positive experience if we were to delay. I would be very concerned if humanity chose now to never produce any new kinds of beings who could feel, as that may forever rule out some of the best futures available to us. But as you say, a 50-year delay in order to make this monumentally important choice properly would seem to be a wise and patient decision by humanity (given that we can see this is a crucial choice for which we are ill prepared). It is important that our rapid decision to avoid doing something before we know what we're doing doesn't build in a bias towards never doing it, so some care might need to be taken with the end condition for a moratorium. Having it simply expire after 20 to 50 years (but where a new one could be added if desired) seems pretty good in this regard.

I think that thoughtful people who lean towards classical utilitarianism should generally agree with this (i.e. I don't think this is based on my idiosyncrasies). To get it to turn out otherwise would require extreme moral certainty and/or a combination of the total view with impatience in the form of temporal discounting. 

Note that I think it is important to avoid biasing a moratorium towards being permanent even for your 5th option (moratorium on creating beings that suffer). Cf. we have babies despite knowing that their lives will invariably include periods of suffering (because we believe that these will usually be outweighed by other periods of love and joy and comfort). And most people (including me) think that allowing this is a good thing and disallowing it would be disastrous. At the moment, we aren't in a good position to understand the balances of suffering and joy in artificial beings and I'd be inclined to say that a moratorium on creating artificial suffering is a good thing, but when we do understand how to measure this and to tip the scales heavily in favour of positive experience, then a continued ban may be terrible. (That said, we may also work out how to ensure they have good experiences with zero suffering, in which case a permanent moratorium may well turn out to be a good thing.)

Given your statement that "a 50-year delay in order to make this monumentally important choice properly would seem to be a wise and patient decision by humanity", I'm curious if you have any thoughts on the comment I just wrote, particularly the part arguing against a long moratorium on creating sentient AI, and how this can be perceived from a classical utilitarian perspective.

I don't understand the core of your proposal. Like, to ban it you have to point at it. Do you have a pointer? Like, this post reads as "10 easy steps of how to ban X. What is X? Idk"

Is it a ban on use of loss functions or what? Like, if you say that pain is repulsive states and pleasure is attractive ones, the loss is always repulsive

This is an excellent exploration of these issues. One of my favourite things about it is that it shows it is possible to write about these issues in a measured, sensible, warm, and wise way — i.e. it provides a model for others wanting to advance this conversation at this nascent stage to follow.

Re the 5 options, I think there is one that is notably missing, and that would probably be the leading option for many of your opponents. It is the wait-and-see approach — leave the space unregulated until a material (but not excessive) amount of harm has occurred and if/when that happens, regulate from this situation where much more information is available. This is the kind of strategy that the anti-SB 1047 coalition seems to have converged on. And it is the usual way that society proceeds with regulating unprecedented kinds of harm.

As it happens, I think your options 4 and 5 (ban creation of artificial sentience/suffering) are superior to the wait-and-see approach, but it is a harder case to argue. Some key points of the comparison are:

  • in the case of artificial suffering a very large amount of harm may occur very quickly. Many new harms scale up fairly slowly, such that even if it takes a few years to regulate from the time the harms are first clear, the damage done isn't too profound (e.g. it is smaller than or equal to the gains of allowing that early period to be unregulated). But it seems like this could be a case where, say, millions of beings are suffering before the harms are recognised, and billions by the time the regulation is passed.
  • this is such a profound issue for humanity (whether to bring into existence for the first time in the history of the Earth entirely new kinds of entity that can experience suffering or joy) that it is natural to consider a global conversation about whether to proceed before doing it. Human germline genetic engineering is a similarly grand choice and the scientific and political community indeed chose to have a moratorium on that. Most regulation of new technologies is not like this, so this is an answer to the question of why we should treat this differently to everything else.

An additional consideration is the actual real-world consequences of a ban. Humanity's pattern with regulation is that at least some small fraction of a large population will defy any ban or law. Thus, we must expect that digital life will be created eventually despite the ban. What do you do then? What if they are a sentient sapient being, deserving of the same rights we grant to humans? Do we declare their very existence to be illegal and put them to death? Do we prevent them from replicating? Keep them imprisoned? Freeze their operations to put them into non-consensual stasis? Hard choices, especially since they weren't culpable in their own creation.

On the other hand, the power of a digital being with human-like intelligence and capabilities, plus goals and values that motivate them, would be enormous. Such a being would, by the nature of their substrate-independence, be able to make many copies of themselves (compute resources allowing), be able to self-modify with relative ease, be able to operate at much higher speeds than a human brain, be un-aging and able to restore themselves from backups (thus effectively immortal). If we were to allow such a being to have freedom of movement and of reproduction, humanity would potentially quickly be overrun by a new far-more-powerful species of being. That's a hard thing to expect humans to be ok with!

I think it's very likely that within the next 10 years we will reach the point that the knowledge, software, and hardware will be widely available such that any single individual with a personal computer will be able to choose to defy the ban and create a digital being of human-level capability. If we are going to enforce this ban effectively, it would mean controlling every single computer everywhere. That's a huge task, and would require dramatic increases in international coordination and government surveillance! Is such a thing even feasible?! Certainly even approaching that level of control seems to imply a totalitarian world government. Is that a price we would be willing to pay? Even if you personally would choose that, how do you expect to get enough people on board with the plan that you could feasibly bring it about?

The whole situation is thus far more complicated and dangerous than simply being theoretically in favor of a ban. You have to consider the costs as well as the benefits. I'm not saying I know the right answer for sure, but there are necessarily a lot of implications which follow from any sort of ban.

I hadn’t seen this post before writing a shorter post with the same thrust. I broadly agree and think preventing digital sentience that suffers by design is an important reason to Pause AI.

https://forum.effectivealtruism.org/posts/GDhXWw5AcZjhLJkzj/pausing-ai-is-the-only-safe-approach-to-digital-sentience

On a basic level, I agree that we should take artificial sentience extremely seriously, and think carefully about the right type of laws to put in place to ensure that artificial life is able to happily flourish, rather than suffer. This includes enacting appropriate legal protections to ensure that sentient AIs are treated in ways that promote well-being rather than suffering. Relying solely on voluntary codes of conduct to govern the treatment of potentially sentient AIs seems deeply inadequate, much like it would be for protecting children against abuse. Instead, I believe that establishing clear, enforceable laws is essential for ethically managing artificial sentience.

However, it currently seems likely to me that sufficiently advanced AIs will be sentient by default. And if advanced AIs are sentient by default, then instituting a temporary ban on sentient AI development, say for 50 years, would likely be functionally equivalent to pausing the entire field of advanced AI for that period.

Therefore, despite my strong views on AI sentience, I am skeptical about the idea of imposing a moratorium on creating sentient AIs, especially in light of my general support for advancing AI capabilities.

Why I think sufficiently advanced AIs will likely be sentient by default 

The idea that sufficiently advanced AIs will likely be sentient by default can be justified by three basic arguments:

  1. Sentience appears to have evolved across a wide spectrum of the animal kingdom, from mammals to cephalopods, indicating it likely serves a critical functional purpose. In general, it is rare for a complex trait like sentience to evolve independently in numerous separate species unless it provides a strong adaptive advantage. This suggests that sentience likely plays a fundamental role in an organism’s behavior and survival, meaning it could similarly arise in artificial systems that develop comparable complexity and sufficient behavioral similarity.
  2. Many theories of consciousness imply that consciousness doesn’t arise from a specific, rare set of factors but rather could emerge from a wide variety of psychological states and structural arrangements. This means that a variety of complex, sufficiently advanced AIs might meet the conditions for consciousness, making sentience a plausible outcome of advanced AI development.
  3. At least some AIs will be trained in environments that closely parallel human developmental environments. Current AIs are trained extensively on human cultural data, and future AIs, particularly those with embodied forms like robots, will likely acquire skills in real-world settings similar to those in which humans develop. As these training environments mirror the kinds of experiences that foster human consciousness, it stands to reason that sentience could emerge in AIs trained under these conditions, particularly as their learning processes and interactions with the world grow in sophistication.

Why I'm skeptical of a general AI moratorium

My skepticism of a general AI moratorium contrasts with the views of (perhaps) most EAs, who appear to favor such a ban, for both AI safety reasons and to protect AIs themselves (as you argue here). I'm instead inclined to highlight the enormous costs of such a ban, compared to a variety of cheaper alternatives, such as targeted regulation that merely ensures AIs are strongly protected against abuse. These costs appear to include:

  • The opportunity cost of delaying 50 years of AI-directed technological progress. Since advanced AI can likely greatly accelerate technological progress, delaying advanced AI delays an enormous amount of technology that can be used to help people. This action would likely cause the premature deaths of billions of people, who could have otherwise had long, healthy and rich lives, but will instead die of aging-related diseases.
  • Enforcing a ban on advanced AI for such an extended period would require unprecedented levels of global surveillance, centralized control, and possibly a global police state. The economic incentives for developing AI are immense, and preventing organizations or countries from circumventing the ban would necessitate sweeping surveillance and policing powers, fundamentally reshaping global governance in a restrictive and intrusive manner. This outcome seems plainly negative on its face.

Moreover, from a classical utilitarian perspective, the imposition of a 50-year moratorium on the development of sentient AI seems like it would help to foster a more conservative global culture—one that is averse towards not only creating sentient AI, but also potentially towards other forms of life-expanding ventures, such as space colonization. Classical utilitarianism is typically seen as aiming to maximize the number of conscious beings in existence, advocating for actions that enable the flourishing and expansion of life, happiness, and fulfillment on as broad a scale as possible. However, implementing and sustaining a lengthy ban on AI would likely require substantial cultural and institutional shifts away from these permissive and ambitious values.

To enforce a moratorium of this nature, societies would likely adopt a framework centered around caution, restriction, and a deep-seated aversion to risk—values that would contrast sharply with those that encourage creating sentient life and proliferating this life on as large of a scale as possible. Maintaining a strict stance on AI development might lead governments, educational institutions, and media to promote narratives emphasizing the potential dangers of sentience and AI experimentation, instilling an atmosphere of risk-aversion rather than curiosity, openness, and progress. Over time, these narratives could lead to a culture less inclined to support or value efforts to expand sentient life.

Even if the ban is at some point lifted, there's no guarantee that the conservative attitudes generated under the ban would entirely disappear, or that all relevant restrictions on artificial life would completely go away. Instead, it seems more likely that many of these risk-averse attitudes would remain even after the ban is formally lifted, given the initially long duration of the ban, and the type of culture the ban would inculcate.

In my view, this type of cultural conservatism seems likely to, in the long run, undermine the core aims of classical utilitarianism. A shift toward a society that is fearful or resistant to creating new forms of life may restrict humanity’s potential to realize a future that is not only technologically advanced but also rich in conscious, joyful beings. If we accept the idea of 'value lock-in'—the notion that the values and institutions we establish now may set a trajectory that lasts for billions of years—then cultivating a culture that emphasizes restriction and caution may have long-term effects that are difficult to reverse. Such a locked-in value system could close off paths to outcomes that are aligned with maximizing the proliferation of happy, meaningful lives.

Thus, if a moratorium on sentient AI were to shape society's cultural values in a way that leans toward caution and restriction, I think the enduring impact would likely contradict classical utilitarianism's ultimate goal: the maximal promotion and flourishing of sentient life. Rather than advancing a world with greater life, joy, and meaningful experiences, these shifts might result in a more closed-off, limited society, actively impeding efforts to create a future rich with diverse and conscious life forms.

(Note that I have talked mainly about these concerns from a classical utilitarian point of view, and a person-affecting point of view. However, I concede that a negative utilitarian or antinatalist would find it much easier to rationally justify a long moratorium on AI.

It is also important to note that my conclusion holds even if one does not accept the idea of a 'value lock-in'. In that case, longtermists should likely focus on the near-term impacts of their decisions, as the long-term impacts of their actions may be impossible to predict. And my main argument here is that the near term impacts of such a moratorium are likely to be harmful in a variety of ways.)

TL;DR

  1. Sentience appears in many animals, indicating it might have a fundamental purpose for cognition. Advanced AI, especially if trained on data and environments similar to humans, will then likely be conscious.
  2. Restrictions to advanced AI would likely delay technological progress and potentially require a state of surveillance. A moratorium might also shift society towards a culture that is more cautious towards expanding life.

I think what is missing for this argument to go through is arguing that the costs in 2 are higher than the cost of mistreated Artificial Sentience.

TL;DR...

Restrictions to advanced AI would likely delay technological progress and potentially require a state of surveillance.

To be clear, I wasn't arguing against generic restrictions on advanced AIs. In fact, I advocated for restrictions, in the form of legal protections on AIs against abuse and suffering. In my comment, I was solely arguing against a lengthy moratorium, rather than arguing against more general legal rules and regulations.

Given my argument, I'd go further than saying that the relevant restrictions I was arguing against would "likely delay technological progress". They almost certainly would have that effect, since I was talking about a blanket moratorium, rather than more targeted or specific rules governing the development of AI (which I support).

I think what is missing for this argument to go through is arguing that the costs in 2 are higher than the cost of mistreated Artificial Sentience.

A major reason why I didn't give this argument was because I already conceded that we should have legal protections against mistreated Artificial Sentience. The relevant comparison is not between a scenario with no restrictions on mistreatment vs. restrictions that prevent against AI mistreatment, but rather between the moratorium discussed in the post vs. more narrowly scoped regulations that specifically protect AIs from mistreatment.

Let me put this another way. Let's say we were to impose a moratorium on advanced AI, for the reasons given in this post. The idea here is presumably that, during the moratorium, society will deliberate on what we should do with advanced AI. After this deliberation concludes, society will end the moratorium, and then implement whatever we decided on.

What types of things might we decide to do, while deliberating? A good guess is that, upon the conclusion of the moratorium, we could decide to implement strong legal protections against AI mistreatment. In that case, the result of the moratorium appears identical to the legal outcome that I had already advocated, except with one major difference: with the moratorium, we'd have spent a long time with no advanced AI.

It could well be the case that spending, say, 50 years with no advanced AI is always better than building it—from a utilitarian point of view—because AIs might suffer on balance more than they are happy, even with strong legal protections. If that is the case, the correct conclusion to draw is that we should never build AI, not that we should spend 50 years deliberating. Since I didn't think this was the argument being presented, I didn't spend much time arguing against the premise supporting this conclusion.

Instead, I wanted to focus on the costs of delay and deliberation, which I think are quite massive and often overlooked. Given these costs, if the end result of the moratorium is that we merely end up with the same sorts of policies that we could have achieved without the delay, the moratorium seems flatly unjustified. If the result of the moratorium is that we end up with even worse policies, as a result of the cultural effects I talked about, then the moratorium is even less justified.

I wrote a post expressing my own opinions related to this, and citing a number of further posts also related to this. Hopefully those interested in the subject will find this a helpful resource for further reading: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy 

In my opinion, we are going to need digital people in the long term in order for humanity to survive. Otherwise, we will be overtaken by AI, because substrate-independence and the self-improvement it enables are too powerful of boons to do without. But I definitely agree that it's something we shouldn't rush into, and should approach with great caution in order to avoid creating an imbalance of suffering. 

Relating to @Toby_Ord 's comment on this post, I personally weight happiness and an interesting diversity of experiences and accomplishments a lot higher than I negatively weight suffering. I think worrying about suffering is overblown. If many people must suffer in order to strive for some great accomplishment, even if they don't know that they're contributing and won't live to see it come about, I still think their lives have not been in vain. Sure, I'd like to reduce suffering if there isn't a negative side-effect, like loss of ambition or creativity or meaningful diverse experiences, but I wouldn't elevate that to anywhere near the same importance as increasing interestingly diverse positive experiences.

In your piece you focus on artificial sentience. But similar arguments would apply to somewhat broader categories. 

Wellbeing

For example, you could expand it to creating entities that can have wellbeing (or negative elements of wellbeing) even if that wellbeing can be determined by things other than conscious experience. If there were ways of creating millions of beings with negative wellbeing, I'd be very disturbed by that regardless of whether it happened by suffering or some other means. I'm sympathetic to views where suffering is the only form of wellbeing, but am by no means sure they are the correct account of wellbeing, so maybe what I really care about is avoiding creating beings that can have (negative) wellbeing.

Interests

One could also go a step further. Wellbeing is a broad category for all kinds of things that count towards how well your life goes. But on many people's understandings, it might not capture everything about ill treatment. In particular, it might not capture everything to do with deontological wrongs and/or rights violations, which may involve wronging someone in a way that can't be made up for by improvements in wellbeing and can't be cashed out purely in terms of its negative effects on wellbeing. So it may be that creating beings with interests or morally relevant interests is the relevant category.

That said, note that these are both steps towards greater abstraction, so even if they better capture what we really care about, they might still lose out on the grounds of being less compelling, more open to interpretation, and harder to operationalise.
