
Ailanthus

23 karma · Joined

Comments (6)

I've seen much written that takes it as a premise that you shouldn't concede to a Pascal's mugging, but I've seen very little about why not.


You may be right. I think a lot of us feel that it is intuitively wrong and take that as a premise.

I don't have a rigorous argument against biting the bullet of expected value in the abstract. But in my view, utility calculations will never fully account for second-order harms (let alone alternative moral perspectives), and I think that provides ample reason not to rely on numbers alone and to err on the side of caution.

Specific risks that come to mind for me here (at least, in the unlikely scenario where the nematode-extinction movement enters the EA mainstream) are reputational damage, intra-movement conflict, climate change exacerbation, biodiversity loss, and the possibility of redirecting evolution toward greater suffering. I'm sure there are plenty of other risks I haven't considered.

I'm all for caring about soil nematodes and researching their welfare! I just think we need more clarity to justify shifting unrelated charity spending.

Thanks, Vasco.

I would agree that supporting GiveWell (or HIPF) is in alignment with human values.

But if I understand your analysis correctly, you find the vast majority (> 99.9%) of the benefit of giving to these charities is received not by humans, but by soil life (specifically, mostly by nonexistent nematodes that would have existed counterfactually).
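To make the structure of that result concrete, here's a minimal back-of-the-envelope sketch in Python. Every figure in it (humans helped, nematodes affected, the per-individual welfare weights) is made up purely for illustration and is not taken from your analysis; the only point is that enormous population counts can swamp the human term even under tiny per-individual weights.

```python
# Back-of-the-envelope sketch with purely illustrative numbers (not taken from
# the analysis being discussed). It shows how huge population counts can make
# soil organisms dominate an expected-value calculation even when each
# individual gets a tiny welfare weight.

humans_helped = 1_000            # hypothetical humans affected per donation
welfare_per_human = 1.0          # welfare units per human (normalised to 1)

nematodes_affected = 10**12      # hypothetical nematodes affected per donation
welfare_per_nematode = 1e-6      # tiny hypothetical welfare weight per nematode

human_benefit = humans_helped * welfare_per_human
nematode_benefit = nematodes_affected * welfare_per_nematode
total_benefit = human_benefit + nematode_benefit

print(f"Human share of benefit:    {human_benefit / total_benefit:.4%}")
print(f"Nematode share of benefit: {nematode_benefit / total_benefit:.4%}")
# With these made-up figures the nematode share comes out just over 99.9%,
# mirroring the structural feature of the analysis I'm responding to.
```

Under weights like these, the nematode term dominates almost regardless of what happens to the human term, which is what the next paragraph is getting at.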

All is well so long as human impact and soil life impact are closely correlated, but I see no reason why that must always be the case. I suspect there are interventions that could produce even greater results by converting even more wildlands to cropland, but with no benefits to humans. It's these interventions that come into conflict with my values.

Specifically, I find it morally dubious to purchase animal products with the intention of reducing nematode populations. More broadly, I'm doubtful that speculative, uncertain benefits (even if possibly immense) can justify clear harm. I think this is a useful moral intuition, given the complexity of second-order effects and human tendencies toward motivated reasoning.

Similarly, I also find the idea that destruction of wildlands and their creatures is good to be in tension with my intuitive values. While I wish a positive life for all sentient animals, I also value the existence of wild habitats. If indeed most wild beings have negative lives, these values are in conflict. Nonetheless, I feel that they come from largely overlapping drives, and I expect this is true for most who care about animals. Considering the controversy surrounding killing animals even with very good reasons (e.g. invasive species control), I think a message of "Expand your moral circle to include these creatures... then kill millions of them!" is unlikely to land well.

Along more preference-utilitarian lines, I have a hard time imagining the nematodes getting on board with this. If a superintelligent AI finds that human existence is probably net negative, does that entitle it to eradicate us?

I could also imagine cases where human welfare and nematode welfare could be actively in conflict. For instance, if one had an opportunity to increase the human population (and thus cropland) more rapidly by installing an authoritarian government.

I'm curious about how you navigate these issues in cases where they're not so obviously aligned. Would you support charities with no other benefits if you found greater impact on soil life? How would you trade off harms to humans and other animals?

Reductio ad absurdum: If we consider the lives of nematodes and mites meaningful, suddenly all human welfare questions become meaningless compared to the question of how our behaviour affects their welfare. The conclusion will be that we either need to nuke ourselves or completely restructure society around maximising nematode wellbeing. This is impractical, and like many internally consistent but impractical philosophies (nihilism, antinatalism, Kaczynskiism), it isn't conducive to a functioning society.


I think there is actually a reasonable middle ground here. If indeed the vast majority of all meaningful lives are those of soil organisms, I think an EA approach would imply:

  • Taking the most effective actions to help these beings. Demanding that soil life be included in all existing animal welfare work is analogous to demanding that GiveWell include animal welfare in all its calculations. More targeted interventions directly focused on helping soil life are likely to be far more impactful. Currently, this probably looks like invertebrate welfare research, perhaps with some movement building.
  • Working for long term solutions, recognizing and avoiding unintended consequences, which could include damage to the movement, biodiversity loss, or even redirecting evolution toward greater suffering.
  • Balancing "utilon" nematode well-being with "warm fuzzy" human and larger animal well-being. Most people feel little-to-no empathy for beings they can't even see. It's wonderful that there are some who do intuitively care for these tiny beings, but in order to bring the rest of us along they'll need to understand where we're starting from.

I got a probability for this of 58.7 %

 

This seems to me like a Pascal's mugging. Much has been written about why we should not concede to such demands. To me, it is enough to see that history has not been kind to those who, when faced with a speculative moral analysis in conflict with human values, chose the analysis.

To ask that others prioritize the well-being of nematodes over that of clearly sentient animals (including humans), I'd need far greater confidence in the capacity of these small beings to suffer. To prioritize reducing their populations, I believe we need much more confidence that their lives are net negative, and that downstream effects could be avoided. (Even with those considerations, I think there's still some moral uncertainty. Beings with net-negative welfare can still want to live, and their lives have value in non-utilitarian moral perspectives.)
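As a rough illustration of why I'd want that extra confidence, here's a small sensitivity sketch (again with every number hypothetical, chosen only to show the structure of the worry rather than to estimate anything): once even a modest fixed cost is charged for downstream harms, a credence only slightly above 50% that nematode lives are net negative is not enough to make population reduction look worthwhile.

```python
# Illustrative sensitivity check; every number here is hypothetical.
# Question: at what credence that nematode lives are net negative does
# preventing a life start to look net positive, once a modest fixed cost
# for downstream harms (reputation, biodiversity, evolution, etc.) is included?

welfare_if_net_negative = -1.0   # hypothetical average welfare of a life, if lives are net negative
welfare_if_net_positive = +1.0   # hypothetical average welfare of a life, if lives are net positive
downstream_cost = 0.3            # hypothetical fixed cost per unit of population reduction

for p_net_negative in (0.587, 0.70, 0.80, 0.90):
    expected_welfare_of_life = (p_net_negative * welfare_if_net_negative
                                + (1 - p_net_negative) * welfare_if_net_positive)
    value_of_prevention = -expected_welfare_of_life - downstream_cost
    verdict = "looks net positive" if value_of_prevention > 0 else "looks net negative"
    print(f"P(net negative) = {p_net_negative:.1%}: "
          f"value of prevention {value_of_prevention:+.3f} ({verdict})")
```

Where exactly the verdict flips depends entirely on the made-up magnitudes and the size of the downstream-cost term, which is the point: the conclusion hangs on numbers we don't yet have.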

It might still be better than the counterfactual if an AI arms race was likely to happen soon anyway. I'd prefer the AI leader has some safety red tape (even if it's largely ignored by leaders and staff) as opposed to being a purely for-profit entity.

Nonetheless, there's a terrible irony in the organization with the mission "ensure that artificial general intelligence benefits all of humanity" not only kicking off the corporate arms race, but seemingly rushing to win it.

It's clear that the non-profit wrapper was inadequate to constrain the company. In hindsight, perhaps the right move would have been investing more in AI governance early on, and perhaps seeking to make OpenAI a government body. Though, I expect taking AI risk to DC in 2015 would have been a tough sell.

Thanks for making this post! It's a very thought-provoking topic that certainly merits more discussion and investigation!

My response is in three parts. First, I'll share some of my thoughts on why, in my view, we should expect unionized companies to act more safely. Then, I'll share some doubts that I have about tractability on fast timelines. Lastly, I'll offer an alternative proposal.

1.

To gently push back against other commenters here, I think there's a case to be made that workers' incentives should lean much more toward safety than management's.

Management has incentives to signal that they care about safety, but incentives against appropriate caution in their actions. Talking about safety and credibly demonstrating it have PR benefits for a company. But the company that deploys powerful AI gains a massive portion of the upsides, while the downside risk is carried by all. Thus, our prior should be that company management will lean toward far more risk-taking than if they were acting in the public interest.

Workers (at least those without stock options) don't have the same incentive. They might miss out on a raise or bonus from slowed progress, but they may also lose their job from fast progress if it allows for their automation. (As one example, I suspect that content moderation contractors like those interviewed in the podcast linked by OP won't be used in the same quantity for GPT-5; most of that work will be performed by GPT-4.) Since workers represent a larger proportion of the human population at risk (i.e. all of us), we should expect their voices in company direction to better represent that risk, provided they behave rationally.

Of course, there are countless examples of companies convincing their workers to act against their own interests. But successful workplace organizing could be an effective way to balance corporate messaging with alternative narratives. Even if AI safety doesn't wind up as a top priority of the union, improvements to job security--a standard part of union contracts--could make employees more likely to voice safety concerns or to come out as whistleblowers.

2.

That said, I can think of some reasons to doubt that labor organizing at AI companies is tractable:
 

For one, despite their popular support and some recent unionizations, union membership in the USA is low and declining (by percentage). This means that unions lack resources and most workers are inexperienced with them. It also demonstrates that organizing is hard, especially so in a world where many only meet their colleagues through a screen, and where remote work and independent contractors make it easier for companies to replace workers. It's possible that a new wave of labor organizing could overcome these challenges, but I think it's unlikely that we'll have widespread unions at tech companies within the next five years. (I hope I'm wrong.)

As we approach AGI, power will shift entirely from labor to capital. In a world without work, labor organizing isn't feasible. In my model, this is a gradual process, with worker value slowly declining well before AGI as more and more tasks are automated. This will be true across all industries, but the companies building AI tools will be the first to use them, so their workers are vulnerable to replacement a bit sooner.

An organized workplace might slow down the company's AI development or release. This would likely be a positive if it occurred everywhere at once, but otherwise it would make it more likely that a non-unionized company develops AGI. This would be difficult to coordinate given the current legal status of unions in the USA: if only 49% of a company's employees vote in favor, they get no legally protected union.

On slower timelines, these issues may be overcome, but fast timelines are likely where most AI risk lies. In short, it seems unlikely that political momentum will build in time to prevent the deployment of misaligned AGI.
 

3.

An alternative that may be less affected by these issues is to organize specifically around the issue of AI safety. I could imagine workers coming together in the spirit of the Federation of Atomic Scientists, perhaps forming a "Machine Learning Workers for Responsible AI". This model would not fit legally recognized unionization in the USA, but it could be a way to build solidarity across the industry and add weight to asks such as the call to pause giant AI experiments.

I expect that the MLWRAI could scale quickly, especially with an endorsement from, say, the Future of Life Institute. It would be able to grow in parallel across all AI companies, even internationally, and it should avoid the political backlash of unions. Employees supporting the MLWRAI would not have the legal protections of those in unions, but firing such employees would attract scrutiny. Given sufficient public support or regulatory oversight, this could be sufficient incentive for companies to voluntarily cooperate with the MLWRAI.
 

An inter-company and international workers' organization would support coordination across companies by reducing the concern that slowing down or investing in safety would allow others to race ahead. It would also provide an avenue for employees to influence company decisions without the majority support required for a union. With the support of the public and/or major EA organizations, even a small minority of workers could have the leverage to push company decisions toward AI safety, worldwide.