Labor unions are associations of workers that negotiate with employers. A union for AI workers, such as data scientists and hardware and software engineers, could organise labor to counterbalance the influence of shareholders or political masters.

Importantly, unions could play a unique, direct role in redirecting or slowing down the rapid development of AI technology across multiple companies when there are concerns about safety and race dynamics. With difficult-to-replace expertise, they could do so independently of employers' wishes.

Unions often negotiate with multiple companies simultaneously, including in industries where competition is fierce. By uniting workers across AI labs, unions could exert significant collective bargaining power to demand a pause or slower, more cautious development of AI systems with a strong emphasis on safety.

If union demands are not met, unions can, and historically have, organised workplace slowdowns or work stoppages as a protest or negotiation tactic. If workers across various AI companies and countries organise together, they can coordinate slowdowns or strikes that affect multiple companies simultaneously.

If the AI safety community seeded or nurtured an AI workers union, it could also help embed a longtermist culture of safety. Unions have a proven track record of prioritising and achieving safety in various fields more effectively than employers alone. They often foster a culture of safety that encourages workers to be proactive in identifying and addressing safety concerns. Unions also often provide protection and support for employees who report safety violations or concerns. This encourages workers to come forward without fear of retaliation, ensuring that safety issues are addressed promptly.

With roots in the AI safety community, an AI workers union could advocate for AI safety in government and corporate policies and regulations with greater independence from profit-motives.

Some practical considerations and open questions:

Google tells me there are already some unions for data scientists and software engineers. However, their relevance relative to the scale of the challenge is negligible. That is not to say an AI workers union is not feasible: support for unions in the United States has risen from 65% before the pandemic to 71% in 2022, the highest level since 1965. Whether that support extends to the tech industry I cannot say.

If some countries unionise AI workers more readily than others, what will the geopolitical consequences be? More harmful than good? Will restrictions on the activities of unions in different countries affect the efficacy of union organising for AI safety?

Since AI workers are relatively well remunerated (what Marxists would call petty bourgeois), they may lack the class consciousness to unionise. On the other hand, these workers are well placed to contribute funding that would let a union scale and punch above its weight in members. Could a critical mass of AI workers be recruited to bargain collectively and effectively?

Defining the union's scope can enhance its influence and bargaining power, but requires careful planning. The occupations incorporated might include data scientists, machine learning engineers and hardware experts. But other workers are also involved in AI-related work, such as ethicists at universities, sailors shipping semiconductors by sea, or policy professionals at AI labs. Should they be incorporated?

Should a union of AI workers be its own entity, entities or part of a multipurpose union like the Industrial Workers of the World (IWW)? Should AI safety activists nurture existing data science or software unions or start their own initiatives? Do AI workers share common concerns that are distinct from those of workers in other industries?

I don't know, but these questions are possible directions those reading may want to explore and comment on.

Comments

> A union for AI workers such as data scientists, hardware and software engineers could organise labor to counterbalance the influence of shareholders or political masters.

It's not obvious to me that AI workers would want a more cautious approach than AI shareholders, AI bosses, and so on. Whether or not this would be the case seems to me to be the main crux behind whether this would be net positive or net harmful.

Even if they were slightly more cautious than management, if they were less cautious than policymakers it could still be net negative due to unions' lobbying abilities.

Granted, in principle you could also have a situation where they're less cautious than management but more cautious than policymakers and it winds up being net positive, though I think that situation is pretty unlikely. Agree the consideration you raised is worth paying attention to.

I had explicitly considered this crux while drafting, and whether to state it. If stated, it could be framed as an empirical question of whether there is greater support for caution, or receptiveness to change, from the workers or from management.

I did not, because I now think the question is not whether AI workers are more cautious than AI shareholders, but whether AI firms where unionised AI workers negotiate with AI shareholders would be more cautious. To that question, I think the answer is yes.

Edit: to summarise, the question is not whether unions (in isolation) would be more cautious, but whether a system of management (and policymakers) bargaining with a union would be more cautious - and yes, it probably would.

I've thought about this before and talked to a couple of people in labs about it. I'm pretty uncertain whether it would actually be positive. It seems possible that most ML researchers and engineers want AI development to go as quickly as, or faster than, leadership does, whether because they're excited about working on cutting-edge technologies, want to change the world, or for equity reasons. I remember articles about people leaving Google for companies like OpenAI because they thought Google was too slow and cautious and had lost its "move fast and break things" ethos.

As you say, there are examples of individuals who have left firms because they feel their company is too cautious. Conversely, there are individuals who have left for companies that prioritise AI safety.

If we zoom out and take the outside view, it is common for workers who form a union to take action to slow down or stop their work, or to improve safety. I do not know of an example of a union that has instead prioritised acceleration.

That's a good point. Although: 1) if people leave a company to go to one that prioritizes AI safety, then there are fewer workers at the other companies who feel as strongly, so a union is less likely to improve safety there. 2) It's common for workers to take action to improve safety conditions for themselves, and much less common for them to act on issues that don't directly affect their work, such as air pollution or carbon pollution. 3) If safety-inclined people become tagged as wanting to generally slow down the company, then hiring teams will likely start filtering out many of the most safety-minded people.

Thanks for writing this; I've thought about this before, it seems like an under-explored (or under-exploited?) idea. 

Another point: even if ML engineers, software devs etc either could not be persuaded to unionize, or would accelerate AI development if they could, maybe other labour unions could still exert pressure. E.g., workers in the compute or hardware supply chain; HR, cleaners, ops, and other non-technical staff who work at AI companies? Perhaps strong labour unions in sectors that are NOT obviously related to AI could be powerful here, e.g. by consumer boycotts (e.g., what if education union members committed to not spending money on AI products unless and until the companies producing them complied with certain safety measures?)

Some recent polls suggest that the idea of slowing down AI is already popular among US citizens (72% want to slow it down). My loose impressions are also that (i) most union members and organizers are on the political left (ii) many on the left are already sceptical about AI, for reasons related to (un)employment, plagiarism (i.e. critics of art AI's use of existing art), capitalism (tech too controlled by powerful interests), algorithmic bias. So this might not be an impossible sell, if AI safety advocates communicate about it in the right way.

To your first para - yes, I wonder how unionised the countries and sectors at bottlenecks in the compute supply chain are - the Netherlands, Japan and Taiwan. I don't know enough about the efficacy of boycotts to comment on the union-led boycotts idea.

I've raised this in response to another comment, but I want to address here the concern that workers who join a union would organise to accelerate the development of AI. I think that is very unlikely - the history of unions is a strong tradition of safety and of slowing down or stopping work. I do not know of an example of a union that has instead prioritised acceleration, though there are probably some, and it would get grey as you move into the workers' self-management space.

Yeah I don't have a strong opinion about whether they would accelerate it - I was just saying, even if some workers would support acceleration, other workers could work to slow it down.

One reason that developers might oppose slowing down AI is that it would put them out of work, wouldn't it? (Or threaten to). So if someone is not convinced that AI poses a big risk, or thinks that pausing isn't the best way to address the risk, then lobbying to slow down AI development would be a big cost for no obvious benefit. 

Something feels off about this article. It does not really discuss what the AI workers might want or believe, or how to convince them that slowing down AI would delay or avoid the extinction of humanity.

Are you assuming a world where the risk of extinction from AGI is widely accepted among AI workers? (In this case, why are they still working on the thing that potentially kills everyone?) If the workers do not believe in (large) risks of extinction from AI, how do you want to recruit them into your union? This seems hard if you want to be honest about the main goal of the union?

I don't think this is predicated on those assumptions.

My assumptions are:

  • AI workers who join a union are more likely to care about safety than AI workers who do not join a union. That is because the history of unions suggests that unions promote a culture of safety.

  • Unionised AI workers will be more organised in influencing their workplace than non-unionised AI workers. That is because of their ability to co-ordinate collectively.

Therefore:

  • Unionisation of AI workers would encourage a culture of safety.

Furthermore, these unions could be in a position to implement AI safety policies.
