Tax Geek

Comments
Thank you! Yes, totally fair point. I am not trained in development economics so was very uncertain about this post, and expected there to be large differences between countries that I wouldn't pick up. It's disappointing to hear that the development econ mainstream has not been engaging with this topic.  

I had in mind the lower-income countries (mostly in Africa) when writing most of this. Your point about how, without TAI, these countries might be able to develop export industries and climb the development ladder is an interesting one. I had thought of that briefly, but was unsure how likely that was to happen, given we haven't seen any African country do it yet (to my knowledge). But perhaps it's just something that takes time and can only really happen after the middle-income countries become rich. 

AI agrees with you on cars and yachts, but says the majority of TVs, gaming rigs, and bikes consumed in HICs are made in LMICs. 

Fair enough. I think most of these are made in Asia and I do expect Asia (particularly China) to fare better than most other LMICs or developing countries. 

I should note that "transfers" is not limited to unemployment benefits. For OECD governments, the biggest class of transfer by far is currently public pensions. 

There are all sorts of good reasons why the elderly should be happy with lower public pensions (elderly poverty rates tend to be lower than those for children or working-age adults, and life expectancy has increased far more than retirement ages have). But that still doesn't happen, for political economy reasons. Perhaps that will change with TAI - the elderly tend to own more capital, so they should see massive returns in general. Maybe they'll be happy with <10x pension increases even as wages increase 10x. I just wouldn't take that for granted.

Agree 100% that governments would need to tap into capital gains somehow, or capital more broadly. I also like that capital dividend fund idea - thanks for sharing.

Thanks for the post! I share many of the concerns you raise, particularly your conclusion that the benefits of AI will not be distributed equitably through natural market mechanisms.

There will still exist a sizable gap between the development of these systems and their diffusion into the broader economy, but this gap will be on the order of years, not decades. 

I am curious about why you think this. And by "the broader economy" are you talking about the global economy or only the US? I don't have any firm views on speed of diffusion but I find decades plausible, at least when it comes to the global economy. Especially if diffusion involves widespread deployment of robotics. 

Thanks. I'm a bit sceptical of that 10x estimate and will have a closer look at that paper.

However, even assuming wages for non-automatable roles go up ~10x before full automation, that won't help governments if their costs rise more than 10x. In developed countries, government costs mostly consist of social protection transfers and wages themselves. In the case where wages rise 10x, transfers could rise more than 10x if (1) transfers are linked to wages (which they often are); and/or (2) the share of people receiving transfers rises (because unemployment rises).
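As a rough sketch of that arithmetic (all numbers below are hypothetical and purely illustrative, not calibrated to any country):

```python
# Toy model of a wage-indexed transfer bill (all numbers hypothetical).
TRANSFER_RATE = 0.6  # assume benefits pay 60% of the average wage

def transfer_bill(wage_level: float, share_on_transfers: float) -> float:
    """Total transfer spending when benefits are indexed to wages."""
    return wage_level * TRANSFER_RATE * share_on_transfers

baseline = transfer_bill(100.0, 0.05)   # today: wages = 100, 5% on benefits
scenario = transfer_bill(1000.0, 0.20)  # TAI: wages 10x, unemployment pushes share to 20%

print(scenario / baseline)  # 40.0 - transfers rise 40x while wages rose only 10x
```

If both channels operate at once, the transfer bill can easily outpace the wage base it is funded from.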

It is possible that transfers could be de-linked from wages somewhat, but political economy can make that difficult and, to the extent that people's welfare depends on the parts of the economy that are not rapidly growing (e.g. healthcare, housing, childcare), that could have negative welfare impacts. 

So I'm not saying governments are doomed - as I point out, TAI should be creating value and the challenge is ultimately one of distribution. But governments still have to worry about revenue, because it's not the size of GDP that matters so much as the composition of government income and spending. 

Thanks for the feedback and you raise some interesting points.

the demand for manual labor will increase dramatically, because the people making a lot of money from AI will want bigger houses, more roads, and more physical goods in general

I'm not sure about this. I think we have different intuitions on how broad the class of "people making a lot of money from AI" is versus how broad the class of knowledge workers losing out from TAI is. My sense is that the former will be smaller than the latter, based on data I've seen showing how little most people in developed countries have saved for retirement in the form of financial assets like shares (though I haven't researched this closely, and it likely depends on the country).

I guess this will also depend on the specific capabilities of TAI and how much labour it will displace. 

I appreciate this challenge and it would be good to see some modelling on it (I might have a quick go at some stage). 

Some of the production of goods will be done in the HICs (perhaps increasing remittances), but I think a lot will be done in LMICs, which would be a boon for those countries.

Interesting point. Among the goods you mention that people getting rich from AI will want are bigger houses and roads. These are not globally traded so, to the extent TAI increases demand for construction work, I would expect that demand to increase in HICs rather than LMICs.

To the extent that people getting rich from AI want to increase their consumption, I expect much of that will be channelled through higher demand for in-person services - e.g. personal chef, gardener, masseuse, and (as you point out) construction.

Some increased consumption might also be channelled through physical goods, but physical goods are already incredibly cheap relative to incomes in HICs. The main bottlenecks to consumption are likely to be physical space (your house needs to be big enough for your toys) and leisure time. I expect TAI could remove the leisure time bottleneck for existing capital holders, which could boost demand for leisure complements. But there is still only so much time in the day, and many higher-tech leisure goods like TVs, gaming rigs, bikes, cars, and yachts don't rely much on LMIC labour anyway. (Some things, like clothing and tents, may be exceptions.)

I do agree with you that some LMICs could see a boon - but I don't expect this effect to be widespread across LMICs in general. 

Also, in this scenario, government revenue would be large because they would be taxing the high wages of manual workers, and taxing massive corporate/capital gains.

I disagree with this, because most governments tax labour more heavily than capital. So if all the income currently earned by labour just shifts to capital, then all else equal, tax revenues will decline. (At the same time, government spending will likely have to increase to pay for increased unemployment. I go into this more here.)
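To illustrate with made-up numbers (hypothetical rates and income shares, not calibrated to any real tax system):

```python
# Toy illustration of a labour-to-capital income shift (hypothetical numbers).
LABOUR_TAX, CAPITAL_TAX = 0.35, 0.20  # assume labour is taxed more heavily

def revenue(labour_income: float, capital_income: float) -> float:
    """Tax revenue for a given split of a fixed total income of 100."""
    return labour_income * LABOUR_TAX + capital_income * CAPITAL_TAX

print(revenue(70.0, 30.0))  # 30.5 - today's split: 70% labour, 30% capital
print(revenue(0.0, 100.0))  # 20.0 - all income shifts to capital: revenue falls ~1/3
```

Total income is the same in both cases; only its composition changes, and revenue still falls.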

If governments are able to shift their tax bases towards capital, we might see tax revenues remain stable.[1] But there are major challenges to taxing capital: capital is mobile and more easily hidden; there are serious valuation issues when you try to tax unrealised gains[2]; and capital taxes tend to be distortionary because it is very hard to account properly for inflation. Plus there are a bunch of political challenges. All these challenges are why tax bases rely more heavily on labour to begin with.

  1. I find it highly unlikely that tax revenues would increase in the near term.

  2. Most countries tax realised gains instead, but this leads to undertaxation: taxpayers can choose when to realise their gains, which is usually only when they need to fund personal consumption. Letting taxpayers choose the timing of realisation also creates tax avoidance opportunities and distortionary lock-in effects, which is part of why capital is almost always taxed at flat rates rather than on a progressive schedule.

Thanks for this post. As someone who has only recently started exploring the field of AI safety, much of this resonates with my initial impressions. I would be interested to hear the counterpoints from those who have Disagree-voted on this post. 

Do you think the "very particular worldview" you describe is found equally among those working on technical AI safety and AI governance/policy? My impression is that policy inherently requires thinking through concrete pathways of how AGI would lead to actual harm as well as greater engagement with people outside of AI safety. 

I have also noticed a split between the "superintelligence will kill us all" worldview (which you seem to be describing) and "regardless of whether superintelligence kills us all, AGI/TAI will be very disruptive and we need to manage those risks" (which seemed to be more along the lines of the Will MacAskill post you linked to - especially as he talks about directing people to causes other than technical safety or safety governance). Both of these worldviews seem prominent in EA. I haven't gotten the impression that the superintelligence worldview is stronger, but perhaps I just haven't gotten deep enough into AI safety circles yet. 

On the "AI as normal technology" perspective - I don't think it involves a strong belief that AI won't change the world much. The authors restate their thesis in a later post:

There is a long causal chain between AI capability increases and societal impact. Benefits and risks are realized when AI is deployed, not when it is developed. This gives us (individuals, organizations, institutions, policymakers) many points of leverage for shaping those impacts. So we don’t have to fret as much about the speed of capability development; our efforts should focus more on the deployment stage both from the perspective of realizing AI’s benefits and responding to risks. All this is not just true of today’s AI, but even in the face of hypothetical developments such as self-improvement in AI capabilities. Many of the limits to the power of AI systems are (and should be) external to those systems, so that they cannot be overcome simply by having AI go off and improve its own technical design.

The idea of focusing more on the deployment stage seems pretty consistent with Will MacAskill's latest forum post about making the transition to a post-AGI society go well. There are other aspects of the "AI as normal technology" worldview that I expect will conflict more with Forethought's, but I'm not sure that conflict would necessarily be frustrating and unproductive - as you say, it might depend on the person's characteristics like openness and willingness to update, etc.

I'm considering writing a post on how I think governments are likely to respond to the tax, spending, and inequality pressures that transformative AI will bring. 

Others have already pointed out that if TAI displaces a lot of workers, this will reduce tax revenue (as most countries' tax bases rely heavily on labour income) while increasing government spending (to pay for benefits to support people out of work). I've also read some articles suggesting we'll need a high degree of global coordination in order to make sure AI's benefits are widely distributed. 

I agree global coordination on tax would be the first-best solution, but I also think it is highly unlikely to happen. However:

  • I think there are things individual countries can (and hopefully will) do to mitigate national inequality;
  • I'm not sure TAI will worsen global inequality (I think it probably will, but not by as much as I initially thought); and
  • I don't think governments are going to go broke everywhere (I don't think this is actually possible). We will likely see significant economic disruptions and maybe a few defaults, but the fiscal situation may not be as bad as some people seem to think.

I'm not sure exactly where I'll end up with it, but my hope is to outline a few realistic pathways that will help people (including myself) decide where best to focus their efforts.  

Please let me know if you'd be willing to read drafts or act as a sounding board — I would very much appreciate the help. 

Thank you very much for this post. I agree that privacy seems to be a massive blind spot for EA. I think it is a common blind spot for well-meaning people/groups, because they find it hard to imagine how adversarial actors might actually act.

As a counterbalance to the "convenience" benefits offered by tech cloud platforms, I want to point out that there are also people who are concerned about data privacy who may hesitate to participate in EA activities/workshops/bootcamps or apply to EA grants/orgs if it would require handing over copious amounts of personal information or even having a LinkedIn profile (given LinkedIn's terrible reputation with privacy). I am one such person. 

I know there are always trade-offs between convenience and security, and that LinkedIn is widely used in recruitment spaces. But I would like to see more considered discussion of these trade-offs within EA. Currently the default just seems to rely on a high degree of trust, but even well-meaning people can inadvertently leak other people's information (e.g. by feeding example profiles into an AI to help them write their own profile). My initial impression is that there is much more that could be done to protect privacy without significant convenience costs.

For example, I attended EAGx Virtual conferences in 2023 and 2024. The Google Sheets with over 1,000 attendees' personal data (which are maintained by CEA) were shared with all attendees, and I still have access to them in 2025. I don't see much benefit in these documents existing at all, let alone remaining publicly accessible years after the conference has ended. This likely also breaches data privacy laws in many jurisdictions, including the GDPR, which has a principle of data minimisation and requires that data be kept no longer than necessary.

At the very least, the retention policies for widely shared documents should be seriously considered. 
