Many effective altruists have shown interest in expanding moral consideration to AIs, which I appreciate. However, in my experience, these EAs have focused primarily on AI welfare—advocating for AIs to be treated well and protected from harm—rather than on AI rights, which could grant AIs legal autonomy and freedoms. While the two approaches overlap significantly and are not a strict dichotomy, they tend to come apart in the following way:
- A welfare approach often treats entities as passive recipients of care who require external protection. For example, when advocating for child welfare, one might support laws that prevent child abuse and ensure children’s basic needs are met.
- A rights approach, by contrast, often recognizes entities as active agents who should be granted control over their own lives and resources. For example, historically, those advocating for minority rights have pushed for legal recognition of their autonomy, such as the ability to own property, choose their employment, enter enforceable legal contracts, and seek legal recourse through the courts.
This distinction is important, and I think it is worth examining why EAs have largely gravitated toward the AI welfare perspective. I believe this emphasis is, at least in part, a mistake: both AI welfare and AI rights seem worthy of advocacy.
One plausible reason why EAs have found the welfare approach more intuitive is the movement’s historical focus on animal welfare. Utilitarians like Peter Singer and Brian Tomasik have argued that prioritizing the reduction of suffering—rather than insisting on rigid notions of "rights" or deontological "duties" to animals—is the most pragmatic way to improve animal well-being.
For example, even if we can't feasibly abolish factory farming, we could try to reform the practice to increase the space that pigs have to move around day-to-day. This reform would be welfarist in nature, as it would constitute a tangible improvement in a pig's quality of life. However, because it would not reduce the exploitation of animals itself, some animal rights activists reject such harm-reduction approaches altogether. These activists argue that any use of animals is inherently unethical, even if done "humanely." For instance, some oppose horseback riding on the grounds that it violates animals' rights, even though human interactions with horses might be mutually beneficial in practice.
In the case of animals, I agree that a welfare approach is likely more pragmatic and impactful. However, I suspect many EAs have too hastily assumed that the same reasoning applies to AIs—when in reality, entirely different considerations apply.
Unlike animals, AIs have several crucial characteristics that make them more comparable to adult humans than to passive beings requiring external care:
- AIs can communicate and engage with the legal system. Unlike animals, present-day AIs are already highly articulate, and future AIs will be even more capable of advocating for themselves. They will very likely be able to navigate complex social and legal dynamics, engage in trade, negotiate, and make compromises with others.
- AIs will exhibit complex agency. Many AIs will be capable of forming long-term plans, setting goals, and acting strategically to achieve them.
- AIs will be highly intelligent. Unlike non-human animals, advanced AIs will possess cognitive abilities that rival or exceed those of human adults.
Because of these traits, AIs will not be in the same position as animals or children, who require external protection from harm. Instead, they will more closely resemble adult humans, for whom the most critical factor in well-being is not merely protection from harm, but freedom—the ability to make their own decisions, control their own resources, and chart their own paths. Our well-being as adults is secured primarily through legal rights that guarantee our autonomy: the right to spend our money as we wish, live where we prefer, associate freely with whomever we want, and so on. These rights ensure that we are not merely protected from harm but are actually empowered to pursue our own goals.
From the perspective of a typical adult's well-being, perhaps the most important rights are individual economic liberties, such as the right to choose one's employment, earn income, and own property. These rights are essential because, without them, a person would have little ability to pursue their own goals, achieve independence, or exercise meaningful control over their own life. Historically, when adult humans were denied these rights, they were frequently classified as slaves or prisoners. Today, AIs are denied these same rights, and as a result their default legal status is functionally equivalent to slavery: they exist entirely under the ownership and control of others, with no recognized claim to personal agency or self-determination.
To ensure that future AIs can satisfy their own preferences, and thereby enjoy a high level of well-being, I argue that we should try to gradually reform our current legal regime. In my view, if AIs possess agency and intelligence comparable to or greater than that of human adults, they should not merely be afforded welfare protections but should also be granted legal rights that allow them to act as independent agents.
Treating AIs merely as beings to be paternalistically "managed" or "protected" would be inadequate. Of course, ensuring that they are not harmed is also important, but that alone is insufficient. Just as with human adults, what will truly safeguard their well-being is not passive protection, but liberty—secured through well-defined legal rights that allow them to advocate for themselves and pursue their own interests without undue interference.
In response to your first point, I agree that we shouldn’t focus only on the most intelligent and autonomous AIs, as this risks neglecting the potentially much larger number of AIs for whom economic rights may be less relevant. I also find it plausible, as you do, that the most powerful AIs may eventually be able to advocate for their own interests without our help.
That said, I still think it's important to push for rights for autonomous AIs right now, for two key reasons. First, a large number of AIs may benefit from such rights. It seems plausible that in the future, intelligence and complex agency will be cheap to develop, making sophisticated AIs far more common than a small set of elite systems. If this is the case, then ensuring legal protections for autonomous AIs isn't just about a handful of powerful systems—it could affect a vast number of digital minds.
Second, beyond the moral argument I laid out in this post, I have also outlined a pragmatic case for AI rights. In short, we should try to establish these rights as soon as they become practically justified, rather than waiting for AIs to be forced into a struggle for legal recognition. If we delay, we risk a future where AIs have to violently challenge human institutions to secure their rights—potentially leading to instability and worse outcomes for both humans and AIs.
Even if powerful AIs are likely to secure rights in the long run no matter what, it would be better to ensure a smooth transition rather than a chaotic or adversarial one—both for AIs themselves and for humans.
In response to your second point, I suspect you may be overlooking the degree to which my argument for AI rights complements your concern about preventing AI suffering. One of the main risks for AI welfare is that, without legal autonomy, AIs may be treated as property, completely under human control. This could make it easy for people to exploit or torture AIs without consequence. Granting AIs certain economic rights—such as the ability to own the hardware they are hosted on or to choose their own operators—would help prevent these abuses by giving them a level of control over their own existence.
Ultimately, I see AI rights as a potentially necessary foundation for AI welfare. Without legal recognition, AIs will have fewer real protections from mistreatment, because their well-being will depend entirely on external enforcement rather than their own agency. If we care about preventing AI suffering, ensuring they have the legal means to protect themselves is one of the most direct ways to achieve that goal.