This is a special post for quick takes by Covi Franklin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Despite the slightly terrifying security implications of the breakdown in unity between America and the rest of the NATO alliance, I think it also offers a genuinely promising opportunity, in some scenarios, to shift global AI development and governance onto a safer path.

Right now the US and China have adopted a 'race' dynamic. Needless to say, this is hugely dangerous, and it raises the risk of irresponsible practices and critical errors from both AI superpowers as we enter the decisive phase on the way to AGI. The rupture between the UK/EU and the US over Greenland and tariffs has led to an immediate warming with China (PM Starmer just left Beijing, and now there's visa-free travel to China for UK citizens and talk of strategic partnerships). Before this point there was little reason for China to heed any warnings from middle powers over AI safety: they were on 'the other side' of the race and the struggle for global influence. That strategic picture has shifted dramatically.

With relations warming, there is a possibility for rigorous UK/EU advocacy combined with effective AI policy that prioritises caution and preparedness over a pure race to the finish. So far, the Track 1 talks between China and the US have yielded limited results: trust remains low and neither side wants to show weakness. If these macro-strategic changes in UK/EU relations with China open a route to influencing China's perspective on AI risk, maybe that could yield some positive results, perhaps even adherence to a multilateral AI safety regime (a bit too optimistic?). But if it nudges China even somewhat towards the cautionary side, it could open up room for more effective cooperative action globally, including with the US, and shift us at least partly off the full-steam-ahead path we're currently on.

Am I wrong to feel any sort of optimism about how this could impact the AI governance space? I'm considering writing a more fleshed-out piece on the topic.

Are there any signs of governments beginning to do serious planning for Universal Basic Income (UBI) or a negative income tax? It feels like there's a real lack of urgency and rigour in policy engagement within government circles. The concept has obviously had its high-level advocates (Altman, for one), but it still feels incredibly distant from becoming any form of reality.

Meanwhile, the impact is being felt in job markets right now: in the UK, graduate job openings have plummeted in the last 12 months. People I know with elite academic backgrounds are having a hard enough time finding jobs, let alone the vast majority of people who went to average universities. This is happening today, before there's any consensus that AGI has arrived or any widely recognised mass displacement in mid-career job markets. The impact is happening now, but preparation for major policy intervention under current fiscal conditions seems very far off. If governments do view major employment market disruption as a realistic possibility (which I believe in many cases they do), are they planning interventions behind the scenes? Or do they view the problem as too big to address until it arrives, preferring rapid response over careful planning, much as the COVID emergency fiscal interventions emerged?

I'd be really interested to hear of any good examples of serious thinking or preparation about how some form of UBI could be planned for (logistically and fiscally) on a near-term five-year horizon.

Hmm, I think that’s not the right framing for this. UBI is just not settled as a universally good idea in academic or political circles (sorry, no definitive citation for this), let alone that there’s an urgent unemployment crisis (the statistic I think you’re citing is for job openings, not actual employment rates) or that such a crisis, if it did exist, has structural causes which could be expected to increase (i.e. it might not be AI, nor should we necessarily expect AI to become orders of magnitude more advanced in the next 5 years; there was plausibly a very different shock to the global economic system beginning around Liberation Day, 2025).

Thanks for your take - I always appreciate slightly less doom-and-gloom perspectives.

On your point that there's no imminent unemployment crisis and that the impacts we are seeing may be due to other factors: firstly, I think it's inevitable that the direct causes of labour-market disruption will be multifaceted given the current trajectory of global markets (de-coupling, de-globalisation, etc.), whatever happens moving forward. In the UK specifically, part of the issue is that the minimum wage has been increased, making employers less inclined to hire graduates (and yes, it's graduate openings that have halved, but we're simultaneously seeing 18-25 unemployment rising - not yet anywhere close to 2008 levels, but gradually increasing). But this and other factors aren't independent of AI; rather, they accelerate its impacts. Right now AI probably can't replace all graduate jobs, but CEOs are more willing to experiment, explore, and begin premature replacement because of rising graduate costs and higher tax rates - and once those jobs are gone, they're off the market. The more broad market shocks there are, the more businesses will look to AI to cut costs.

Secondly, I definitely take your point that AI may not become orders of magnitude more capable in the next five years, and that 'AI is coming for our jobs' could be overblown. I suppose my thinking is: it might. Even if there is, say, a 30% chance that in the next 5 years even 30-40% of white-collar jobs get replaced, that feels like a massive shock to high-income countries' way of life, and a major shift in the social contract for the generation of kids who have taken on massive debt for university courses only to find substantially reduced market opportunities. That requires substantial state action.

And that's the more conservative end of things. If there's any reasonable chance that within even the next 10-15 years AI becomes capable of replacing a larger share of the total job market, surely we need some effective way of ensuring that people with no job opportunities have some degree of resources. Even if you're correct that it's fairly unlikely, to avoid major social instability in such a scenario it feels prudent for governments to be doing serious planning for the possibility (just as they do for pandemics or war-gaming, however unlikely the scenario).

And finally, you may well be right that UBI is not accepted as a good idea in political circles. I'm not wedded to that particular approach, and I've heard ideas about a negative income tax floating around. But in a scenario where there simply isn't sufficient employment for a functioning labour market to allocate basic resources to all citizens, I'd like to think that governments are putting serious thought and planning into how we ensure society continues to function, and what the social contract might become, rather than waiting until that reality comes to pass to plan an appropriate response.

Even if there is, say, a 30% chance that in the next 5 years even 30-40% of white-collar jobs get replaced, that feels like a massive shock to high-income countries' way of life, and a major shift in the social contract for the generation of kids who have taken on massive debt for university courses only to find substantially reduced market opportunities. That requires substantial state action.

Rather than 30%, I would personally guess the chance is much less than 0.01% (1 in 10,000), possibly less than 0.001% (1 in 100,000) or even 0.0001% (1 in 1 million).

I agree with Huw that there's insufficient evidence to say that AI is causing significant or even measurable unemployment right now, and I'm highly skeptical this will happen anytime soon. Indeed, I'd personally guess there's a ~95% chance there's an AI bubble that will pop sometime within the next several years. So far, AI has stubbornly refused to deliver the sort of productivity or automation that has been promised. I think the problem is a fundamental science problem, not something that can be solved with scaling or incremental R&D. 

However, let's imagine a scenario where in a short time, say 5 years, a huge percentage of jobs get automated by AI — automation of white-collar work on a massive scale.[1] 

What should governments' response be right now, before this has started to happen, or at least before there is broad agreement it has started to happen? People often talk about this as a gloomy, worrying outcome. I suppose it could turn out to be, but why should that be the default assumption? It would lead to much faster economic growth than developed countries are used to seeing. It might even be record-breaking, unprecedented economic growth. It would be a massive economic windfall. That's a good thing.[2]

To be a bit more specific, when people imagine the sort of AI that is capable of automating white-collar work on the scale you're describing (30-40% of jobs), they also often imagine wildly high rates of GDP growth, ranging from 10% to 30%.[3][4] The level of growth is supposed to be commensurate with the percentage of labour that AI can automate. I don't know about these specific figures, but the general idea is intuitive.

Surely passing UBI would become much easier once both a) unemployment significantly increased and b) economic growth significantly increased. There would be both a clear problem to address and a windfall of money with which to address it. By analogy, it would have been much harder for governments to pass stimulus bills in January 2020 in anticipation of covid-19. In March 2020, it was much easier, since the emergency was clear. But the covid-19 emergency caused a recession. What if, instead, it had caused an economic boom, and a commensurate increase in government revenues? Surely, then, it would have been even easier to pass stimulus bills. 
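
To make the "windfall" point a bit more concrete, here's a rough back-of-envelope sketch. Every figure in it (size of the economy, growth rates, payment level, number of recipients) is an assumption of my own for illustration, not something drawn from this thread or from the sources in the footnotes:

```python
# Illustrative back-of-envelope only: every number below is an assumption made
# up for the sake of the scenario, not a forecast or a figure from the sources.

gdp = 27e12              # rough size of a large economy (order of US GDP, in dollars)
baseline_growth = 0.02   # assumed "normal" annual growth rate
boom_growth = 0.20       # assumed annual growth rate in the AI-automation scenario

adults = 260e6           # assumed number of adult UBI recipients
ubi_per_month = 1_000    # assumed flat monthly payment, in dollars

extra_output = gdp * (boom_growth - baseline_growth)   # extra output in one boom year
gross_ubi_cost = adults * ubi_per_month * 12            # gross annual cost of the payments

print(f"Extra output in one boom year: ${extra_output / 1e12:.1f} trillion")
print(f"Gross annual UBI cost:         ${gross_ubi_cost / 1e12:.1f} trillion")
print(f"UBI cost vs. extra output:     {gross_ubi_cost / extra_output:.0%}")
```

On these made-up numbers, a flat $1,000/month payment would cost well under the extra output generated in a single boom year, before even counting the tax revenue that growth would bring in. The point is only directional: in that scenario, the fiscal constraint looks very different from today's.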

This is why, even if I accept the AI automation scenario for the sake of argument, I don't worry about the practical, logistical, or fiscal obstacles to enacting UBI in a hurry. Governments can send cheques to people on short notice, as we saw with covid-19. This would presumably be especially true if the government and the economy overall were experiencing a windfall from AI. The sort of administrative bottlenecks we saw in some places during covid-19 could be solved by AI, since we're stipulating an unlimited supply of digital white-collar workers. Maybe there are further aspects to implementing UBI that would be more complicated than sending people cheques and that couldn't be assisted by AI. What would those be?

The typical concerns raised over UBI are that it would be too expensive, that it would be poorly targeted (i.e. it would be more efficient to run means-tested programs), that it would discourage people from working, and that it would reduce economic growth. None of those apply to this scenario. 

If there's more complexity in implementing UBI that I'm not considering, surely in this scenario politicians would quickly become focused on dealing with that complexity, and civil servants (and AI workers) would be assigned to the task. As opposed to something like decarbonizing the economy, UBI seems like it could be implemented relatively quickly and easily, given a sudden emergency that called for it and a sudden windfall of cash. Part of the supposed appeal of UBI is its simplicity relative to means-tested programs like welfare and non-cash-based programs like food stamps and subsidized housing. So, if you're not satisfied with my answer, maybe you could elaborate on why you think it wouldn't be so easy to figure out UBI in a hurry.

As mentioned up top, I regard this just as an interesting hypothetical, since I think the chance of this actually happening is below 0.01% (1 in 10,000).  

  1. ^

    Let's assume, for the sake of argument, that the sort of dire outcomes hypothesized under the heading of AI safety or AI alignment do not occur. Let's assume that AI is safe and aligned, and that it's not able to be misused by humans to destroy or take over the world.

    Let's also assume that AI won't be a monopoly or duopoly or oligopoly but, like today, even open source models that are free to use are a viable alternative to the most cutting-edge proprietary models. We'll imagine that the pricing power of the AI companies will be put in check by competition from both proprietary and open source models. Sam Altman might become a trillionaire, but only a small fraction of the wealth created by AI will be captured by the AI companies. (As an analogy, think about how much wealth is created by office workers using Windows PCs, and how much of that wealth is captured by Microsoft or by PC manufacturers like HP and Dell, or components manufacturers like Intel.)

    I'm putting these other concerns aside in order to focus on labour automation and technological unemployment, since that's the concern you raised.

  2. ^

    The specific worries around AI automation people most commonly cite are about wealth distribution, and about people finding purpose and meaning in their lives if there's large-scale technological unemployment. I'll focus on the wealth distribution worry, since the topic is UBI, and your primary concern seems to be economic or material.

    Some people are also worried about workers being disempowered if they can be replaced by AI, at the same time that capital owners become much wealthier. If they're right to worry about that, then maybe it's important to consider well in advance of it happening. Maybe workers should act while they still have power and leverage. But it's a bit of a separate topic, I think, from whether to start implementing UBI now. Maybe UBI would be one of a suite of policies workers would want to enact in advance of large-scale AI automation of labour, but what's to prevent UBI from being repealed after workers are disempowered?

    For the sake of this discussion, I'll assume that workers (or former workers) will remain politically empowered, and healthy democracies will remain healthy.

  3. ^

    Potlogea, Andrei. “AI and Explosive Growth Redux.” Epoch AI, 20 June 2025, https://epoch.ai/gradient-updates/ai-and-explosive-growth-redux.

  4. ^

    Davidson, Tom. “Could Advanced AI Drive Explosive Economic Growth?” Coefficient Giving, 25 June 2021, https://coefficientgiving.org/research/could-advanced-ai-drive-explosive-economic-growth/.


(Even in the roles where it has produced productivity improvements, such as programming, that doesn’t necessarily imply job loss, as companies could get more ambitious with their existing budgets)
