This post is a response to AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years from January 2023, by @basil.halperin, @J. Zachary Mazlish and @tmychow. In contrast to what they argue, I believe that interest rates are not a reliable instrument for assessing market beliefs about AI timelines - at least not for the transformative AI described in that post.
The reason for this is that savvy investors cannot expect to get rich by betting on short AI timelines. Such a bet simply ties up your capital until it loses its value (either because you're dead, or because you're so rich that it hardly matters). Therefore, a savvy investor with short timelines will simply increase their own consumption. No individual can increase their own consumption enough to affect global capital markets - rather, something like tens of millions of people would need to increase their consumption for interest rates to be affected. Interest rates can therefore remain low even if TAI is near, unless all of these people get worried enough about AI to change their savings rate.
Recapping the argument from AGI and the EMH
My understanding of the argument in the former post goes like this:
- AI is defined as transformative if it either causes an existential risk, or explosive growth (which, presumably, would be broad-based enough that the typical investor would expect to partake in it)
- If we knew that transformative AI was near, people would not need to save as much as they do today - since they would expect to either be dead or very, very rich in the near future
- If people save less, capital supply goes down, and interest rates go up. Therefore, if we knew transformative AI was near, interest rates should be high.
- Even if we allow for uncertainty around the timing of transformative AI, a significant probability of near-term transformative AI should increase interest rates, since the equilibrium condition is that the expected utility of consumption today and in the future should be equal (reflecting the full distribution of outcomes).
- Since interest rates aren't high, if you assume market efficiency, this is evidence against near-term transformative AI.
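To make the equilibrium logic above concrete, here is a toy numerical version - my own simplification, not the authors' exact model. If a saved dollar only pays off in the no-TAI branch of the future (because money is worthless post-TAI), then for saving to remain attractive, the interest rate has to rise to compensate for the probability of that branch not occurring:

```python
# Toy sketch (my own numbers and simplification): an investor is
# indifferent between consuming a dollar today and saving it only if the
# expected value of the saved dollar matches the baseline alternative.
# If TAI arrives within the period with probability p_tai and makes
# savings worthless, the required rate satisfies
# (1 - p_tai) * (1 + r) = 1 + baseline_rate.

def required_rate(baseline_rate: float, p_tai: float) -> float:
    """Interest rate needed to keep saving attractive when savings
    pay off only in the no-TAI branch (probability 1 - p_tai)."""
    return (1 + baseline_rate) / (1 - p_tai) - 1

for p in (0.0, 0.1, 0.3, 0.5):
    print(f"p(TAI) = {p:.0%}: required rate = {required_rate(0.02, p):.1%}")
```

Even a modest probability of near-term TAI implies dramatically higher rates under this logic - which is exactly why low observed rates are taken as evidence against short timelines.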
My high-level response
I will start by granting the definition of transformative AI as either an existential risk or a driver of explosive (and broadly shared) growth. This means that I accept the premise that the marginal value of additional money post-TAI is much, much lower than the marginal value of money today, for any relevant investor. EDIT: I don’t think this is a realistic assumption, but I chose to run with it in this case to show that even when I accept the assumptions of the former post, the conclusions don’t follow. In reality I think many investors with short timelines believe in other scenarios than this, and in some of those scenarios, they still have some marginal utility of additional savings. If that is true, it supports my conclusion and it is one further nail in the coffin for the argument in the original post.
Second, I consider the dynamics of how prices in markets can change over time. This is something which the original post glosses over a bit, which is forgivable, since the main social value of markets is that they can work like a black box information aggregation mechanism where you don't need to think too carefully about the gears. However, in this case, this is a crucial reason why their argument seems to fail.
Let's consider two possible ways the price of an asset can change. At one extreme, some information becomes available to all players in the market, they uniformly update their assessment of the asset's value, and they adjust their positions accordingly. At the other, a single investor gains private information indicating the asset is mispriced, and they take a big, directional bet based on that information, unilaterally moving the price. These two situations are ends of a spectrum, and in most cases, price changes will fall somewhere in between.
My argument is that this matters in the special context of interest rates. After all, interest rates reflect the aggregate capital supply in the world. Let's assume that there are two ways to move the prices of capital:
- Many people decide to reduce their retirement savings and rather consume more in the present, instead of investing the money.
- Savvy investors spot a mispricing of capital, and make directional bets (i.e., investments) that capital should be valued more highly (i.e., that interest rates should go up). While it is possible to debate whether such bets are available in the market at scale, I will simply assume that such an asset exists.
Presumably, it's the second of these options that is of interest in a discussion about AI timelines. Admittedly, the first option would happen if the typical consumer believed that extinction or explosive growth was near, so that mechanism is a plausible link between interest rates and AI timelines - but it is not an interesting link, since it would require very many people to believe in near-term TAI with high confidence. Global balance sheets are valued at upwards of $500 trillion and annual savings amount to between a quarter and a third of global GDP, so even the wealthiest investors cannot meaningfully increase their utility by raising their consumption to the point where it makes a dent in global capital supply. Interest rates would therefore not be affected through this mechanism until something like tens of millions of people adjusted their savings rates.
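A quick back-of-envelope illustrates the scale mismatch. The capital and GDP figures are from the paragraph above; the individual investor's numbers are hypothetical ones I chose for illustration:

```python
# Figures from the post: global balance sheets ~$500tn, global GDP
# ~$100tn, annual savings ~25-33% of GDP. Illustrative (hypothetical)
# individual: a very wealthy investor who diverts an extra $10bn per
# year from savings into consumption.
annual_savings = 0.25 * 100e12     # low end: ~$25tn saved per year
extra_consumption = 10e9           # hypothetical billionaire splurge

share = extra_consumption / annual_savings
print(f"{share:.2%} of annual global savings")  # → 0.04%
```

Even an implausibly large individual consumption shift is a rounding error relative to global capital flows, which is why this channel needs tens of millions of participants to register in interest rates.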
Therefore, the question is "would a savvy investor, informed about impending TAI, make a directional bet about interest rates, and if so, would that be sufficient to move interest rates?". I believe the answer is that the incentives of the savvy investor preclude them from taking these directional bets. The short reason is this:
- The reason for an investor to make a bet, is that they believe they will profit later
- However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyway)
- Therefore, there is no way for them to win by betting on near-term TAI
The second part of the question is therefore of little relevance.
Why savvy investors won't bet on near-term TAI, even if they believe in it
In response to my short argument above about investor incentives, you might respond as follows:
- Investors aren't betting on imminent TAI directly. They are betting on rising interest rates
- Interest rates can rise before we get TAI. Therefore, there is a period where the savvy investor can enjoy the profits of their bet, before the money is made worthless
- Therefore, an investor who is sufficiently certain of imminent TAI should still take the bet
I believe that this is almost correct. My objection is with the second bullet point, "interest rates can rise before we get TAI". This is possible, but we no longer have a reason to believe that it will happen - unless very many people decide to reduce their savings rates. By then, this is no longer a bet on short AI timelines, but rather a bet on whether the typical consumer will realize that AI timelines are short sufficiently far in advance of TAI that you have time to enjoy your profits.
The slightly more technical explanation relies on backward induction. I will start with the case of an idealized investment - any investment, not specifically linked to transformative AI. Let's assume that it works like this:
- An investment is essentially just a money machine where you put a dollar in the machine, and then it spits out some other amount at some point in the future
- A good investment will, in expectation, spit out some amount larger than a dollar - and furthermore compensate you if you have to wait a long time for it. (let's assume that this additional compensation corresponds to some metric of the opportunity cost of money, i.e., the appropriate discount rate for an investor)
- For simplicity, let's assume each machine works only once
Now consider a case where we ignore some uncertainty: you have a machine where you put in $1, and it is known with certainty that it will spit out $100 at some later time, plus compensation for however long the delay is. What is the present value of this machine? In a world where patient investors exist - investors who don't care about the exact timing of the payout, only that it beats their other alternatives - this machine is worth $99 already today, since that's the present value that the patient investor knows they can get from the machine. The combination of 1) knowledge of the future, 2) competitive markets and 3) sufficiently patient investors brings the value forward.
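The timing-irrelevance point can be sketched in a few lines. Because the machine's payout grows with the waiting time at exactly the investor's discount rate (the "compensation" above), the present value is the same no matter when it pays out - all numbers here are illustrative:

```python
# Minimal sketch: a payout that is compensated for delay at exactly the
# investor's discount rate has a timing-independent present value.
def present_value(base_payout: float, rate: float, years: float) -> float:
    payout = base_payout * (1 + rate) ** years  # payout grows with the wait
    return payout / (1 + rate) ** years         # discount back to today

for t in (1, 5, 30):
    # ≈ $100 regardless of t; net of the $1 cost, the machine is worth $99
    print(t, present_value(100.0, 0.05, t))
```

This is what lets a patient investor pay $99 for the machine today even without knowing the payout date.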
Now let's consider a specific case of the scenario above - everything is the same, except that the payout will happen 1 day after the world experiences transformative AI. It is still not exactly known when that is, but whenever it happens, that's when the machine will spit out $100. What is the value of this machine?
Whoever owns the machine on the final day derives ~no value from it - since money is worthless by then, for one reason or another. Therefore, whoever owns the machine before then, should not expect to be able to sell it for any price above $0. Any investor willing to pay for this asset would need to play a Greater Fool strategy.
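The backward induction can be written out explicitly. On the final day the payout has ~zero utility, so the terminal value is zero; at every earlier date, the machine is worth only what the next buyer would pay, discounted - which stays zero all the way back. A minimal sketch with hypothetical numbers:

```python
# Backward induction on the machine's resale value (my own sketch).
# value[T] = 0: the $100 arrives one day after TAI, when money is
# worthless. Before then, the machine is worth only the discounted
# price the next owner would pay.
T = 10                     # hypothetical final day
discount = 0.95            # hypothetical per-period discount factor
value = [0.0] * (T + 1)    # terminal condition: value[T] = 0

for t in range(T - 1, -1, -1):
    value[t] = discount * value[t + 1]  # discounted resale value tomorrow

print(value[0])  # → 0.0: no rational buyer pays anything at any date
```

Unwinding the induction, the zero at the terminal date propagates all the way to today, which is the formal version of "any buyer would need a Greater Fool to sell to".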
Conclusion
A bet that interest rates will rise is not a bet on short AI timelines. Rather, it is a bet that:
- Most consumers will correctly perceive that AI timelines are short, and
- Most consumers will realize this long enough before TAI that there is enough time to benefit from profitable bets made now, and
- Most consumers will believe that transformative AI will significantly reduce the marginal utility they get from their savings - and not, say, increase the marginal value of saving, because they could lose their jobs without taking part in the newfound prosperity from AI
For this reason, savvy investors will not bet on the end of the world, or the end of capital markets as we know them, except perhaps by increasing their own consumption a bit, or going on vacation more - but that would be far from sufficient to move capital markets. The only way interest rates could provide information about AI timelines is if a very broad group of people decided to reduce their savings rate and increase their consumption - so when it comes to AI timelines, interest rates should be considered something like a poll of upper-middle-class consumers in the US and EU rather than a poll of the most informed investors.
Addendum: appreciation for the authors of the former post
I have exchanged emails with the authors of the original post, and they graciously took the time to respond. However, I believe they have not refuted the arguments I presented to them, so I am posting my arguments here for broader scrutiny.
I want to underscore that I deeply appreciate the effort put into the former forum post, and I believe that some of the points in the post are true and good (e.g., that catastrophic risk researchers can use low-interest rate environments to fund their research), while other points don't quite hold (most importantly, that we can use low interest rates as evidence for long AGI timelines). In the past I have spent some time exploring how financial markets can be used to elicit information about catastrophic risk and increase funding for risk mitigation, and I still believe this to be a valuable endeavor, even if it seems hard to scale it to the very largest risks.
Side note: what about the empirical argument?
The original post also presents some empirical evidence of the link between a) interest rates and growth, in section V, and b) interest rates and risk, in section VI. The evidence on b) is scarce, so I'll only focus on a) here. In short, this link can be equally well explained by a couple of alternative explanations:
- Serial correlation in the data set: interest rates used to be higher a few decades ago, and growth also used to be higher, so what looks like many independent observations is in fact just two observations of downwards trends in both parameters.
- Variation in capital demand, not capital supply: if the opportunity for profitable investment varies over time (e.g., because technological progress creates new investment opportunities, but innovation is stochastic and varies over time), it is not surprising that interest rates are higher ahead of periods of high growth. This could just mean that there were many good investment opportunities at the time, and then those investments created growth! It is possible to test how much of the link between interest rates and growth is driven by variation in capital demand by analyzing historical data of capital formation and growth, but I have not done that, simply because I haven't had the time.
Personally, I believe that there is some merit to the empirical arguments in the original post, but they are focused on variability within the historical sample, and transformative AI would bring us far from that situation, so I'm not confident that it has a lot of predictive value for TAI in particular.
Also, @Joel Becker, at this point you have called my thinking "pretty tortured" twice (in comments to the original post) and "4D-chess" here. Especially the first phrase seems - at least to me - more like soldier mindset than scout mindset, in that I don't see how words like that make a discussion more truth-seeking or enlighten anyone.
I try to ask both "what does Joel know that I don't?" and "what do I know that Joel doesn't, and how can I help him understand it?". This post is my attempt at engaging in that way. In contrast, I don't see your comments offering much new evidence (e.g., in the comments to the original post you write things like "Traders are not dumb. At least, the small number of traders necessary to move the market are not dumb" - which you should realize I am well aware of). I am making my argument without that assumption, so you are only arguing against a straw man. So I will try to offer my explanation one more time, in the hope that it could lead to a productive debate.
Let's use a physical analogy for financial markets - say, a horse race track. People take their money there, store it for some time, and take out a different amount when they leave, depending on the quality of their bets. If interest rates are ruled by capital supply, then a bet on interest rates is akin to a bet on how much money people will wager tomorrow. So if you believe the race track is going to burn down tomorrow, you can of course go there and place the bet "trading volumes in 2 days are going to be really low" - and if you're right about the fire, you're likely also right about the trading volumes. But in the meantime, the track has burned down, and no one is left to pay out your winnings. Now of course, you can find someone willing to buy you out of the bet before things burn down, if you convince them it is a safe way to profit. You can tell everyone about the forest fire you observed nearby, and how in 24 hours it will reach the track and burn it to the ground. And people can believe your evidence. But that's not going to get anyone to buy you out of the bet you made, since they realize they would be left holding the burned bag - unless they can find an even bigger fool to sell to. So the only way you can profit from your knowledge of the impending fire is to pull all of your bets, so you don't have cash inside the building when it burns down. And that will decrease the volumes on the market a little bit, but it is a tiny fraction of the total, since there are many bettors at the track. This analogy isn't perfect, but my point stands - the equilibrium you're hypothesizing doesn't exist.
If you're hypothesizing a capital supply-side response to short AI timelines, that can only happen if a large fraction of consumers decide to decrease their savings rates, and that would likely require such overwhelming evidence for near-term AI that it would no longer be a leading indicator. (As stated in the earlier comment, I think the capital demand-side argument has more merit, however.)
Okay, I have attempted to clarify my thinking on multiple occasions now. In contrast, my experience is that you seem reluctant to engage with my actual arguments, offer few new pieces of evidence, and describe my thinking in quite disparaging terms, which adds up to a poor basis for further discussion. I don't think this is your intention, so please take this for what it is - an attempt at well-meaning feedback, and encouragement to revisit how you engage on this topic. Until I see this good-faith effort I will consider this argument closed for now.