I don’t claim originality for any content here; people who’ve been influential on this include Nick Beckstead, Phil Trammell, Toby Ord, Aron Vallinder, Allan Dafoe, Matt Wage, and, especially, Holden Karnofsky and Carl Shulman. Everything tentative; errors all my own.
Introduction
Here are two distinct views:
Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) := We are living at the most influential time ever.
It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as a credence greater than 10%, taking ‘time’ to refer to a period of a century, though my impression is that many longtermists I know would assign >30% credence to this view. It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.
This post is about separating out these two views and introducing a view I call outside-view longtermism, which endorses longtermism but finds HoH very unlikely. I won’t define outside-view longtermism here, but the spirit is that — as our best guess — we should expect the future to continue the trends of the past, and we should be sceptical of the idea that now is a particularly unusual time. I think that outside-view longtermism is currently a neglected position within EA and deserves some defense and exploration.
Before we begin, I’ll note I’m not making any immediate claim about the actions that follow from outside-view longtermism. It’s plausible to me that whether we have 30% or just 0.1% credence in HoH, we should still be investing significant resources into the activities that would be best were HoH true. The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes. So in what follows I’ll sometimes use this as the comparison activity.
Getting the definitions down
We’ve defined strong longtermism informally above and in more detail in this post.
For HoH, defining ‘most influential time’ is pretty crucial. Here’s my proposal:
a time ti is more influential (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.
(I’ll also use the term ‘hingier’ to be synonymous with ‘more influential’.)
This definition gets to the nub of the matter, for me. It seems to me that, for most times in human history, longtermists ought, if they could, to have been investing their resources (via values-spreading as well as literal investment) in order that they have greater influence at hingey moments when one’s ability to influence the long-run future is high. It’s a crucial question for longtermists whether now is a very hingey moment, and so whether they should be investing or doing direct work.
It’s significant that my definition focuses on how much influence a person at a time can have, rather than how much influence occurs during a time period. It could be the case, for example, that the 20th century was a bigger deal than the 17th century, but that, because there were 1/5th as many people alive during the 17th century, a longtermist altruist could have had more direct impact in the 17th century than in the 20th century.
It’s also significant that, on this definition, you need to take into account the level of knowledge and understanding of the average longtermist altruist at the time. This seems right to me. For example, hunter-gatherers could contribute more to tech speed-up than people now (see Carl Shulman’s post here); but they wouldn’t have known, or been in a position to know, that trying to innovate was a good way to benefit the very long-run future. (In that post, Carl mentions some reasons for thinking that such impact was knowable, but prior to the 17th century people didn’t even have the concept of expected value, so I’m currently sceptical.)
So I’m really bundling two different ideas into the concept of ‘most influential’: how pivotal a particular moment in time is, and how much we’re able to do something about that fact. Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment. If this were true, I would not count this time as being exceptionally influential.
Strong longtermism even if HoH is not true
I mentioned that it’s surprising that strong longtermism and significant credence in HoH are so often held together. But here’s one reason why you might think you should put significant credence in HoH iff you believe longtermism: You might accept that most value is in the long-run future, but think that, at most times in history so far, we’ve been unable to do anything about that value. So it’s only because HoH is true that longtermism is true. But I don’t think that’s a good argument, for a few reasons.
First, given the stakes involved, it’s plausible that even a small chance of being at a period of unusually high extinction or lock-in risk is enough for working on extinction risk or lock-in scenarios to be higher expected value than short-run activities. So, you can reasonably think that (i) HoH is unlikely (e.g. 0.1% likely), but that (ii) when combined with the value of being able to influence the value of the long-run future, a small chance of HoH being true is enough to make strong longtermism true.
Second, even if we’re merely at a relatively hingey time — just not the most hingey time — as long as there are some actions that have persistent long-run effects that are positive in expected value, that’s plausibly sufficient for strong longtermism to be true.
Third, you could even be certain that HoH is false, and that there are currently no direct activities with persistent impacts, but still believe that longtermism is true if, as is natural to suppose, you have the option of investing resources, enabling future longtermist altruists to take action at a time which is more influential.
Arguments for HoH
In this post, I’m going to simply state, but not discuss, some views on which something like HoH would be entailed, and some arguments for thinking HoH is likely. Each of these views and arguments require a lot more discussion, and often have had a lot more discussion elsewhere.
There are two commonly held views that entail something like HoH:
The Value Lock-in view
Most starkly, according to a view regarding AI risk most closely associated with Nick Bostrom and Eliezer Yudkowsky: it’s likely that we will develop AGI this century, and it’s likely that AGI will quickly transition to superintelligence. How we handle that transition determines how the entire future of civilisation goes: if the superintelligence ‘wins’, then the entire future of civilisation is determined in accord with the superintelligence’s goals; if humanity ‘wins’, then the entire future of civilisation is determined in accord with whoever controls the superintelligence, which could be everyone, or could be a small group of people. If this story is right, and we can influence which of these scenarios occurs, then this century is the most influential time ever.
A related, but more general, argument, is that the most pivotal point in time is when we develop techniques for engineering the motivations and values of the subsequent generation (such as through AI, but also perhaps through other technology, such as genetic engineering or advanced brainwashing technology), and that we’re close to that point. (H/T Carl Shulman for stating this more general view to me).
The Time of Perils view
According to the Time of Perils view, we live in a period of unusually high extinction risk, where we have the technological power to destroy ourselves but lack the wisdom to ensure we don’t; after this point annual extinction risk will fall to some very low level. Support for this view could come from both outside-view and inside-view reasoning: the outside-view argument would claim that extinction risk has been unusually high since the advent of nuclear weapons; the inside-view argument would point to extinction risk from forthcoming technologies like synthetic biology.
The ‘unusual’ is important here. Perhaps extinction risk is high at this time, but will be even higher at some future times. In which case those future times might be even hingier than today. Or perhaps extinction risk is high, but will stay high indefinitely, in which case the future is not huge in expectation, and the grounds for strong longtermism fall away.
And, for the Time of Perils view to really support HoH, it’s not quite enough to show that extinction risk is unusually high; what’s needed is that extinction risk mitigation efforts are unusually cost-effective. So part of the view must be not only that extinction risk is unusually high at this time, but also that longtermist altruists are unusually well-placed to decrease those risks — perhaps because extinction risk reduction is unusually neglected.
Outside-View Arguments
The Value Lock-In and Time of Perils views are the major views on which HoH — or something similar — would be supported. But there are also a number of more general, and more outside-view-y, arguments that might be taken as evidence in favour of HoH:
- That we’re unusually early on in human history, and earlier generations in general have the ability to influence the values and motivations of later generations.[2]
- That we’re at an unusually high period of economic and technological growth.
- That the long-run trend of economic growth means we should expect extremely rapid growth into the near future, such that we should expect to hit the point of fastest-ever growth fairly soon, before slowing down.
- That we’re unusually well-connected and able to cooperate in virtue of being on one planet.
- That we’re unusually likely to become extinct in virtue of being on one planet.
My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.[3]
Arguments against HoH
#1: The outside-view argument against HoH
Informally, the core argument against HoH is that, in trying to figure out when the most influential time is, we should consider all of the potential billions of years through which civilisation might exist. Out of all those years, there is just one time that is the most influential. According to HoH, that time is… right now. If true, that would seem like an extraordinary coincidence, which should make us suspicious of whatever reasoning led us to that conclusion, and which we should be loath to accept without extraordinary evidence in its favour. We don’t have such extraordinary evidence in its favour. So we shouldn’t believe in HoH.
I’ll take each of the key claims in this argument in turn:
- It’s a priori extremely unlikely that we’re at the hinge of history
- The belief that we’re at the hinge of history is fishy
- Relative to such an extraordinary claim, the arguments that we’re at the hinge of history are not sufficiently extraordinarily powerful
Claim 1
That HoH is a priori unlikely should be pretty obvious. It’s hard to know exactly what ur-prior to use for this claim, though. One natural thought is that we could use, say, 1 trillion years’ time as an early estimate for the ‘end of time’ (due to the last naturally occurring star formation), and a 0.01% chance of civilisation surviving that long. Then, as a lower bound, there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.
(This is a very rough argument. I really don’t know what the right ur-prior is to set here, and I’d be keen to see further discussion, as it potentially changes one’s posterior on HoH by an awful lot.)
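The back-of-the-envelope prior above can be sketched in a few lines. All of the numbers are the text’s own illustrative assumptions, not independent estimates:

```python
# Rough sketch of the ur-prior calculation in the text.
# Figures (1 trillion years, 0.01% survival) are the essay's illustrative assumptions.

end_of_time_years = 1e12   # ~last naturally occurring star formation
survival_prob = 1e-4       # 0.01% chance civilisation survives that long

expected_centuries = (end_of_time_years / 100) * survival_prob
prior_most_influential = 1 / expected_centuries

print(expected_centuries)        # 1,000,000 expected centuries to come
print(prior_most_influential)    # a 1-in-a-million uniform prior on HoH

# Even restricting to a uniform prior over the first 10% of that history:
print(1 / (0.1 * expected_centuries))   # still only 1 in 100,000
```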
[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:
The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.
The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on.
It's worth noting that my proposal follows from the Self-Sampling Assumption (SSA), which is roughly, as stated by Teru Thomas ('Self-location and objective chance' (ms)): "A rational agent’s priors locate him uniformly at random within each possible world." I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the self-indication assumption (giving greater prior probability mass to worlds with large populations). But we don't need to debate SIA in this discussion, because we can simply assume some prior probability distribution over the size of the total population: the question of whether we're at the most influential time does not require us to get into debates over anthropics.]
Claim 2
Lots of things are a priori extremely unlikely yet we should have high credence in them: for example, the chance that you just dealt this particular (random-seeming) sequence of cards from a well-shuffled deck of 52 cards is 1 in 52! ≈ 1 in 10^68, yet you should often have high credence in claims of that form. But the claim that we’re at an extremely special time is also fishy. That is, it’s more like the claim that you just dealt a deck of cards in perfect order (2 to Ace of clubs, then 2 to Ace of diamonds, etc) from a well-shuffled deck of cards.
Being fishy is different from just being unlikely. The difference between unlikelihood and fishiness is the availability of alternative, not wildly improbable, hypotheses on which the outcome or evidence is reasonably likely. If I deal the random-seeming sequence of cards, I don’t have reason to question my assumption that the deck was shuffled, because there’s no alternative background assumption on which the random-seeming sequence is a likely occurrence. If, however, I deal the deck of cards in perfect order, I do have reason to significantly update that the deck was not in fact shuffled, because the probability of getting cards in perfect order if the cards were not shuffled is reasonably high. That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.
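To make the fishiness update concrete, here is a hedged numerical sketch; the specific priors and likelihoods below are made-up illustrative assumptions, not claims from the text:

```python
from math import factorial

# Illustrative Bayes update for the perfectly ordered deck.
# All probabilities here are assumptions chosen only to illustrate the structure.

p_shuffled = 0.99                            # prior: deck almost certainly shuffled
p_not_shuffled = 0.01                        # small prior on a prank or sorting error

p_order_given_shuffled = 1 / factorial(52)   # ~1 in 8 x 10^67
p_order_given_not_shuffled = 0.5             # quite likely if someone sorted the deck

joint_not = p_not_shuffled * p_order_given_not_shuffled
joint_yes = p_shuffled * p_order_given_shuffled
posterior_not_shuffled = joint_not / (joint_not + joint_yes)

print(posterior_not_shuffled)   # ~1.0: the tiny prior on 'not shuffled' is overwhelmed
```

Even a 1% prior on the alternative hypothesis dominates, because the likelihood ratio is astronomical in its favour.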
Similarly, if it seems to me that I’m living in the most influential time ever, this gives me good reason to suspect that the reasoning process that led me to this conclusion is flawed in some way, because P(I’m reasoning poorly)P(seems like I’m living at the hinge of history | I’m reasoning poorly) >> P(I’m reasoning correctly)P(seems like I’m living at the hinge of history | I’m reasoning correctly). In contrast, I wouldn’t have the same reason to doubt my underlying assumptions if I concluded that I was living in the 1047th most influential century.
The strength of this argument depends in part on how confident we are on our own reasoning abilities in this domain. But it seems to me there’s a strong risk of bias in our assessment of the evidence regarding how influential our time is, for a few reasons:
- Salience. It’s much easier to see the importance of what’s happening around us now, which we can see and is salient to us, than it is to assess the importance of events in the future, involving technologies and institutions that are unknown to us today, or (to a lesser extent) the importance of events in the past, which we take for granted and involve unsalient and unfamiliar social settings.
- Confirmation. For those of us, like myself, who would very much like the world to be taking much stronger action on extinction risk mitigation (even if the probability of extinction is low) than it is today, it would be a good outcome if people (who do not have longtermist values) think that the risk of extinction is high, even if it’s low. So we might be biased (subconsciously) to overstate the case in our favour. And, in general, people have a tendency towards confirmation bias: once they have a conclusion (“we should take extinction risk a lot more seriously”), they tend to marshal arguments in its favour rather than carefully assessing arguments on either side. Though we try our best to avoid such biases, it’s very hard to overcome them.
- Track record. People have a poor track record of assessing the importance of historical developments. And in particular, it seems to me, technological advances are often widely regarded as being more dangerous than they are. Some examples include assessment of risks from nuclear power, horse manure from horse-drawn carts, GMOs, the bicycle, the train, and many modern drugs.[4]
I don’t like putting weight on biases as a way of dismissing an argument outright (Scott Alexander gives a good run-down of reasons why here). But being aware that long-term forecasting is an area that’s very difficult to reason correctly about should make us quite cautious when updating from our prior.
If you accept you should have a very low prior in HoH, you need to be very confident that you’re good at reasoning about the long-run significance of events (such as the magnitude of risk from some new technology) in order to have a significant posterior credence in HoH, rather than concluding we’re mistaken in some way. But we have no reason to believe that we’re very reliable in our reasoning in these matters. We don’t have a good track record of making predictions about the importance of historical events, and some track record of being badly wrong. So, if a chain of reasoning leads us to the conclusion that we’re living in the most important century ever, we should think it more likely that our reasoning has gone wrong than that the conclusion really is true. Given the low base rate, and given our faulty tools for assessing the claim, the evidence in favour of HoH is almost certainly a false positive.
Claim 3
I’ve described some of the arguments for thinking that we’re at an unusually influential time in the previous section above.
I won’t discuss the object-level of these arguments here, but it seems hard to see how these arguments could be strong enough to move us from the very low prior all the way to significant credence in HoH. To illustrate: a randomised controlled trial with a p-value of 0.05, under certain reasonable assumptions, corresponds to a Bayes factor of around 3; a Bayes factor of 100 is regarded as ‘decisive’ evidence. In order to move from a prior of 1 in 100,000 to a posterior of 1 in 10, one would need a Bayes factor of 10,000 — extraordinarily strong evidence.
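As a rough check on the arithmetic, the Bayes factor required to move from the stated prior to the stated posterior can be computed in odds form:

```python
def bayes_factor_needed(prior: float, posterior: float) -> float:
    """Likelihood ratio required to move a prior to a posterior (odds form)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

# Moving from 1 in 100,000 to 1 in 10, as in the text:
bf = bayes_factor_needed(1 / 100_000, 1 / 10)
print(round(bf))   # ~11,111 — roughly the 'factor of 10,000' cited
```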
But, so this argument goes, the evidence we have for either the Value Lock-in view or the Time of Perils view consists of informal arguments. They aren’t based on data (because they generally concern future events) nor, in general, are they based on trend extrapolation, nor are they based on very well-understood underlying mechanisms, such as physical mechanisms. And the range of deep critical engagement with those informal arguments, especially from ‘external’ critics, has, so far, been limited. So it’s hard to see why we should give them much more evidential weight than, say, a well-done RCT with a p-value of 0.05 — let alone assign them an evidential weight 3,000 times that amount.
An alternative path to the same conclusion is as follows. Suppose that, if we’re at the hinge of history, we’d certainly have seeming evidence that we’re at the hinge of history; so say that P(E | HoH ) ≈ 1. But if we weren’t at the hinge of history, what would be the chances of us seeing seeming evidence that we are at the hinge of history? It’s not astronomically low; perhaps P(E | ¬HoH ) ≈ 0.01. (This would seem reasonable to believe if we found just one century in the past 10,000 years where people would have had strong-seeming evidence in favour of the idea that they were at the hinge of history. This seems conservative. Consider: the periods of the birth of Christ and early Christianity; the times of Moses, Mohammed, Buddha and other religious leaders; the Reformation; the colonial period; the start of the industrial revolution; the two world wars and the defeat of fascism; and countless other events that would have seemed momentous at the time but have since been forgotten in the sands of history. These might have all seemed like good evidence to the observers at the time that they were living at the hinge of history, had they thought about it.) But, if so, then our Bayes factor is 100 (or less): enough to push us from 1 in 100,000 to 1 in 1000 in HoH, but not all the way to significant credence.
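This alternative path can be sketched numerically, using the text’s own assumptions that P(E | HoH) ≈ 1 and P(E | ¬HoH) ≈ 0.01:

```python
# Odds-form update using the essay's assumed likelihoods.

prior = 1 / 100_000
p_e_given_hoh = 1.0        # assume we'd certainly see the evidence if HoH were true
p_e_given_not_hoh = 0.01   # ~one misleadingly hingey-seeming century per hundred

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * (p_e_given_hoh / p_e_given_not_hoh)
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)   # ~0.001: about 1 in 1000, as in the text
```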
#2: The Inductive Argument against HoH
In addition to the previous argument, which relies on priors and claims we shouldn’t move drastically far from those priors, there’s a positive argument against HoH, which gives us evidence against HoH, whatever our priors. This argument is based on induction from past times.
If, when looking into the past, we saw hinginess steadily decrease, that would be a good reason for thinking that now is hingier than all times to come, and so we should take action now rather than pass resources on to future longtermists. If we had seen hinginess steadily increase, then we have some reason for thinking that the hingiest times are yet to come; if we had a good understanding of the mechanism of why hinginess is increasing, and knew that mechanism was set to continue into the future, that would strengthen that argument further.
I suggest that in the past, we have seen hinginess increase. I think that most longtermists I know would prefer that someone living in 1600 passed resources onto us, today, rather than attempting direct longtermist influence. (I certainly would prefer this.) One reason for thinking this would be if one thinks that now is simply a more pivotal point in time, because of our current level of technological progress. However, the stronger reason, it seems to me, is that our knowledge has increased so considerably since then. (Recall that on my definition a particularly hingey time depends both on how pivotal the period in history is and the extent to which a longtermist at the time would know enough to do something about it.) Someone in 1600 couldn’t have had knowledge of AI, or population ethics, or the length of time that humanity might continue for, or of expected utility theory, or of good forecasting practices; they would have had no clue about how to positively influence the long-run future, and might well have done harm. Much the same is true of someone in 1900 (though they would have had access to some of those concepts). It’s even true of someone in 1990, before people became aware of risks around AI. So, in general, hinginess is increasing, because our ability to think about the long-run effects of our actions, evaluate them, and prioritise accordingly, is increasing.
But we know that we aren’t anywhere close to having fully worked out how to think about the long-run effects of our actions, evaluate them, and prioritise accordingly. We should confidently expect that in the future we will come across new crucial considerations — as serious as the idea of population ethics, or AI risk — or major revisions of our views. So, just as we think that people in the past should have passed resources onto us rather than do direct work, so, this argument goes, we should pass resources into the future rather than do direct longtermist work. We should think, in virtue of future people’s far better epistemic state, that some future time is more influential.
There are at least three ways in which our knowledge is changing or improving over time, and it’s worth distinguishing them:
- Our basic scientific and technological understanding, including our ability to turn resources into things we want.
- Our social science understanding, including our ability to make predictions about the expected long-run effects of our actions.
- Our values.
It’s clear that we are improving on (1) and (2). All other things being equal, this gives us reason to give resources to future people to use rather than to use those resources now. The importance of this, it seems to me, is very great. Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them. Even now, the science of good forecasting practices is still in its infancy, and the study of how to make reliable long-term forecasts is almost nonexistent.
It’s more contentious whether we’re improving on (3) — for this argument one’s meta-ethics becomes crucial. Perhaps the Victorians would have had a very poor understanding of how to improve the long-run future by the lights of their own values, but they would have still preferred to do that than to pass resources onto future people, who would have done a better job of shaping the long-run future but in line with a different set of values. So if you endorse a simple subjectivist view, you might think that even in such an epistemically impoverished state you should still prefer to act now rather than pass the baton on to future generations with aims very different from yours (and even then you might still want to save money in a Victorian-values foundation to grant out at a later date). This view also makes the a priori unlikelihood of living at the hinge of history much less: from the perspective of your idiosyncratic values, now is the only time that they are instantiated in physical form, so of course this time is important!
In contrast, if you are more sympathetic to moral realism (or a more sophisticated form of subjectivism), as I am, then you’ll probably be more sympathetic to the idea that future people will have a better understanding of what’s of value than you do now, and this gives another reason for passing the baton on to future generations. For just some ways in which we should expect moral progress: Population ethics was first introduced as a field of enquiry in the 1980s (with Parfit’s Reasons and Persons); infinite ethics was only first seriously discussed in moral philosophy in the early 1990s (e.g. Vallentyne’s Utilitarianism and Infinite Utility), and it’s clear we don’t know what the right answers are; moral uncertainty was only first discussed in modern times in 2000 (with Lockhart’s Moral Uncertainty and its Consequences) and had very little attention until around the 2010s (with Andrew Sepielli’s PhD and then my DPhil), and again we’ve only just scraped the surface of our understanding of it.
So, just as we think that the intellectual impoverishment of the Victorians means they would have done a terrible job of trying to positively influence the long-run future, we should think that, compared to future people, we are thrashing around in ignorance. In which case we don’t have the level of understanding required for ours to be the most influential time.
#3: The simulation update argument against HoH
The final argument[5] is:
- If it seems to you that you’re at the most influential time ever, you’re differentially much more likely to be in a simulation. (That is: P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH).)
- The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t. (In general, I’d aver that we have very little understanding of the best things to do if we’re in a simulation, though there’s a lot more to be said here.)
- So we should not make a major update in the most action-relevant proposition, which is that we’re both at the hinge of history and not in a simulation.
The primary reason for believing (1) is that the most influential time in history would seem likely to be a very common subject of study by our descendants, and much more common than other periods in time. (Just as crucial periods in time, like the industrial revolution, get vastly more study by academics today than less pivotal periods, like 4th-century Indonesia.) The primary reasons for believing (2) are that, if we’re in a simulation, it’s much more likely that the future is short, that extending our future doesn’t change the total amount of lived experience (because the simulators will just run some other simulation afterwards), and that we’re missing some crucial consideration around how to act.
This argument is really just a special case of argument #1: if it seems like you’re at the most influential point in time ever, probably something funny is going on. The simulation idea is just one way of spelling out ‘something funny going on’. I’m personally reluctant to make major updates towards living in a simulation on this basis, rather than towards more banal hypotheses, such as some of the inside-view arguments simply not being very strong; but others might disagree.
Might today be merely an enormously influential time?
In response to the arguments I’ve given above, you might say: “Ok, perhaps we don’t have good reasons for thinking that we’re at the most influential time in history. But the arguments support the idea that we’re at an enormously influential time. And very little changes whether you think that we’re at the most influential time ever, or merely at an enormously influential time, even though some future time is even more influential again.”
However, I don’t think this response is a good one, for three reasons.
First, the implication that we’re among the very most influential times is susceptible to very similar arguments to the ones that I gave against HoH. The idea that we’re in one of the top-10 most influential times is 10x more a priori likely than the claim that we’re in the most influential time, and it’s perhaps more than 10x less fishy. But it’s still extremely a priori unlikely, and still very fishy. So that should make us very doubtful of the claim, in the absence of extraordinarily powerful arguments in its favour.
Second, some views that are held in the effective altruism community seem to imply not just that we’re at some very influential time, but that we’re at the most influential time ever. On the fast takeoff story associated with Bostrom and Yudkowsky, once we develop AGI we rapidly end up with a universe determined in line with a singleton superintelligence’s values, or in line with the values of those who manage to control it. Either way, it’s the decisive moment for the entire rest of civilisation. But if you find the claim that we’re at the most influential time ever hard to swallow, then you have, by modus tollens, to reject that story of the development of superintelligence.
Third, even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) so that our resources can be used at that future, higher-impact time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously.
Possible other hinge times
If now isn’t the most influential time ever, when is? I’m not going to claim to be able to answer that question, but in order to help make alternative possibilities more vivid I’ve put together a list of times in the past and future that seem particularly hingey to me.
Of course, it’s much more likely, a priori, that if HoH is false, then the most influential time is in the future. And we should also care more about the hingeyness of future times than of past times, because we can try to save resources to affect future times, but we know we can’t affect past times.[6] But past hingeyness might still be relevant for assessing hingeyness today: If hingeyness has been continually decreasing over time, that gives us some reason for thinking that the present time is more influential than any future time; if it’s been up and down, or increasing over time, that might give us evidence for thinking that some future time will be more influential.
Looking through history, some candidates for particularly influential times might include the following (though in almost every case, it seems to me, the people of the time would have been too intellectually impoverished to have known how hingey their time was and been able to do anything about it[7]):
- The hunter-gatherer era, which offered individuals the ability to have a much larger impact on technological progress than today.
- The Axial age, which offered opportunities to influence the formation of what are today the major world religions.
- The colonial period, which offered opportunities to influence the formation of nations, their constitutions and values.
- The formation of the USA, especially at the time just before, during and after the Philadelphia Convention when the Constitution was created.
- World War II, and the resultant comparative influence of liberalism vs fascism over the world.
- The post-WWII formation of the first somewhat effective intergovernmental institutions like the UN.
- The Cold War, and the resultant comparative influence of liberalism vs communism over the world.
In contrast, if the hingiest times are in the future, it’s likely that this is for reasons that we haven’t thought of. But there are future scenarios that we can imagine now that would seem very influential:
- If there is a future and final World War, resulting in a unified global culture, the outcome of that war could partly determine what values influence the long-run future.
- If one religion ultimately outcompetes both atheism and other religions and becomes a world religion, then the values embodied in that religion could partly determine what values influence the long-run future.[8]
- If a world government is formed, whether during peacetime or as a result of a future World War, then the constitution embodied in that could constrain development over the long-run future, whether by persisting indefinitely, having knock-on effects on future institutions, or by influencing how some other lock-in event takes place.
- The time at which settlement of other solar systems begins could be highly influential for longtermists. For example, the ownership of other solar systems could be determined by an auction among nations and/or companies and individuals (much as the USA purchased Alaska and a significant portion of the midwest in the 19th century[9]); or by an essentially lawless race between nations (as happened with European colonisation); or through war (as has happened throughout history). If the returns from interstellar settlement pay off only over very long timescales (which seems likely), and if most of the decision-makers of the time still intrinsically discount future benefits, then longtermists at the time would be able to cheaply buy huge influence over the future.
- The time when the settlement of other galaxies begins, which might obey similar dynamics to the settlement of other solar systems.
Implications
I said at the start that it’s non-obvious what follows, for the purposes of action, from outside-view longtermism. The most obvious course of action that might seem comparatively more promising is investment, such as saving in a long-term foundation, or movement-building, with the aim of increasing the amount of resources longtermist altruists have at a future, more hingey time. And, if one finds my second argument compelling, then research, especially into social science and moral and political philosophy, might also seem unusually promising.
These are activities that seem like they would have been good strategies across many times in the past. If we think that today is not exceptionally different from times in the past, this gives us reason to think that they are good strategies now, too.
[1] The question of what ‘resources’ in this context are is tricky. As a working definition, I’ll use 1 megajoule of stored but useable energy, where I’ll allow the form of stored energy to vary over time: so it could be in the form of grain in the past, oil today, and antimatter in the future.
[2] H/T to Carl Shulman for this wonderful quote from C.S. Lewis, The Abolition of Man: “In order to understand fully what Man’s power over Nature, and therefore the power of some men over other men, really means, we must picture the race extended in time from the date of its emergence to that of its extinction. Each generation exercises power over its successors: and each, in so far as it modifies the environment bequeathed to it and rebels against tradition, resists and limits the power of its predecessors. This modifies the picture which is sometimes painted of a progressive emancipation from tradition and a progressive control of natural processes resulting in a continual increase of human power. In reality, of course, if any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power. They are weaker, not stronger: for though we may have put wonderful machines in their hands we have pre-ordained how they are to use them. And if, as is almost certain, the age which had thus attained maximum power over posterity were also the age most emancipated from tradition, it would be engaged in reducing the power of its predecessors almost as drastically as that of its successors. And we must also remember that, quite apart from this, the later a generation comes — the nearer it lives to that date at which the species becomes extinct—the less power it will have in the forward direction, because its subjects will be so few. There is therefore no question of a power vested in the race as a whole steadily growing as long as the race survives. The last men, far from being the heirs of power, will be of all men most subject to the dead hand of the great planners and conditioners and will themselves exercise least power upon the future.
“The real picture is that of one dominant age—let us suppose the hundredth century A.D.—which resists all previous ages most successfully and dominates all subsequent ages most irresistibly, and thus is the real master of the human species. But then within this master generation (itself an infinitesimal minority of the species) the power will be exercised by a minority smaller still. Man’s conquest of Nature, if the dreams of some scientific planners are realized, means the rule of a few hundreds of men over billions upon billions of men. There neither is nor can be any simple increase of power on Man’s side. Each new power won by man is a power over man as well. Each advance leaves him weaker as well as stronger. In every victory, besides being the general who triumphs, he is also the prisoner who follows the triumphal car.”
[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [0.1%, 1%] interval. But this credence interval feels very made-up and very unstable.
[4] These are just anecdotes, and I’d love to see someone undertake a thorough investigation of how often people tend to overreact vs underreact to technological developments, especially in terms of risk-assessment and safety. As well as for helping us understand how likely we are to be biased, this is relevant to how much we should expect other actors in the coming decades to invest in safety with respect to AI and synthetic biology.
[5] I note that this argument has been independently generated quite a number of times by different people.
[6] Though if one endorses non-causal decision theory, those times might still be decision-relevant.
[7] An exception might have been some of the US founding fathers. For example, John Adams, the second US President, commented that: “The institutions now made in America will not wholly wear out for thousands of years. It is of the last importance, then, that they should begin right. If they set out wrong, they will never be able to return, unless by accident, to the right path." (H/T Christian Tarsney for the quote.)
[8] If you’re an atheist, it’s easy to think it inevitable that atheists will win out in the end. But because of differences in fertility rates, the global proportion of fundamentalists is predicted to rise while the proportion of atheists is predicted to decline. What’s more, religiosity is moderately heritable, so these differences could compound into the future. For discussion, see Shall the Religious Inherit the Earth? by Eric Kaufmann.
[9] Some numbers on this: The Louisiana purchase cost $15 million at the time, or $250 million in today’s money, for what is now 23.3% of US territory. https://www.globalpolicy.org/component/content/article/155/25993.html Alaska cost $120 million in today’s money; its GDP today is $54 billion per year. https://fred.stlouisfed.org/series/AKNGSP
Hi Will,
It is great to see all your thinking on this down in one place: there are lots of great points here (and in the comments too). By explaining your thinking so clearly, it makes it much easier to see where one departs from it.
My biggest departure is on the prior, which actually does most of the work in your argument: it creates the extremely high bar for evidence, which I agree probably couldn’t be met. I’ve mentioned before that I’m quite sure the uniform prior is the wrong choice here and that this makes a big difference. I’ll explain a bit about why I think that.
As a general rule if you have a domain like this that extends indefinitely in one direction, the correct prior is one that diminishes as you move further away in that direction, rather than picking a somewhat arbitrary end point and using a uniform prior on that. People do take this latter approach in scientific papers, but I think it is usually wrong to do so. Moreover in your case in particular, there are also good reasons to suspect that the chance of a century being the most influential should diminish over time. Especially because there are important kinds of significant event (such as the value lock-in or an… […]
Hi Toby,
Thanks so much for this very clear response, it was a very satisfying read, and there’s a lot for me to chew on. And thanks for locating the point of disagreement — prior to this post, I would have guessed that the biggest difference between me and some others was on the weight placed on the arguments for the Time of Perils and Value Lock-In views, rather than on the choice of prior. But it seems that that’s not true, and that’s very helpful to know. If so, it suggests (advertisement to the Forum!) that further work on prior-setting in EA contexts is very high-value.
I agree with you that under uncertainty over how to set the prior, because we’re clearly so distinctive in some particular ways (namely, that we’re so early on in civilisation, that the current population is so small, etc), my choice of prior will get washed out by models on which those distinctive features are important; I characterised these as outside-view arguments, but I’d understand if someone wanted to characterise that as prior-setting instead.
I also agree that there’s a strong case for making the prior over persons (or person-years) rather than centuries. In your discussion, you go via number of person… […]
Thanks for this very thorough reply. There are so many strands here that I can't really hope to do justice to them all, but I'll make a few observations.
1) There are two versions of my argument. The weak/vague one is that a uniform prior is wrong and the real prior should decay over time, such that you can't make your extreme claim from priors. The strong/precise one is that it should decay as 1/n^2 in line with a version of LLS. The latter is more meant as an illustration. It is my go-to default for things like this, but my main point here is the weaker one. It seems that you agree that it should decay, and that the main question now is whether it does so fast enough to make your prior-based points moot. I'm not quite sure how to resolve that. But I note that from this position, we can't reach either your argument that from priors this is way too unlikely for our evidence to overturn (and we also can't reach my statement of the opposite of that).
2) I wouldn't use the LLS prior for arbitrary superlative properties where you fix the total population. I'd use it only if the population over time was radically unknown (so that the first person is… […]
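Toby’s weaker claim in (1), that the prior should decay over time, can be made concrete with a rough numerical sketch. The 1,000,000-century horizon and the exact 1/n² shape below are illustrative assumptions, not figures from the thread:

```python
# Compare how much prior probability "the most influential century is in
# the past" receives under a uniform prior vs. a 1/n^2 decaying prior.
CENTURIES_SO_FAR = 2_000       # ~200,000 years of human history
TOTAL_CENTURIES = 1_000_000    # assumed horizon (purely illustrative)

# Uniform prior: each of the TOTAL_CENTURIES is equally likely to be hingiest.
p_uniform = CENTURIES_SO_FAR / TOTAL_CENTURIES

# Decaying prior: P(century n is hingiest) proportional to 1/n^2.
weights = [1 / n**2 for n in range(1, TOTAL_CENTURIES + 1)]
p_decaying = sum(weights[:CENTURIES_SO_FAR]) / sum(weights)

print(f"uniform prior: {p_uniform:.4f}")   # 0.0020
print(f"1/n^2 prior:   {p_decaying:.4f}")  # ~0.9997
```

Under the uniform prior almost all the mass sits in the future; under even this simple decaying prior the opposite holds, which is why the choice of prior does most of the work in the disagreement.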
I appreciate your explicitly laying out issues with the Laplace prior! I found this helpful.
The approach to picking a prior here which I feel least uneasy about is something like: "take a simplicity-weighted average over different generating processes for distributions of hinginess over time". This gives a mixture with some weight on uniform (very simple), some weight on monotonically-increasing and monotonically-decreasing functions (also quite simple), some weight on single-peaked and single-troughed functions (disproportionately with the peak or trough close to one end), and so on…
If we assume a big future and you just told me the number of people in each generation, I think my prior might be something like 20% that the most hingey moment was in the past, 1% that it was in the next 10 centuries, and the rest after that. After I notice that hingeyness is about influence, and causality gives a time asymmetry favouring early times, I think I might update to >50% that it was in the past, and 2% that it would be in the next 10 centuries.
(I might start with some similar prior about when the strongest person lives, but then when I begin to understand something about strength the generating mechanisms which suggest that the strongest people would come early and everything would be diminishing thereafter seem very implausible, so I would update down a lot on that.)
I think this point is even stronger, as your early sections suggest. If we treat the priors as hypotheses about the distribution of events in the world, then past data can provide evidence about which one is right, and (the principle of) Will's prior would have given excessively low credence to humanity's first million years being the million years when life traveled to the Moon, humanity becoming such a large share of biomass, the first 10,000 years of agriculture leading to the modern world, and so forth. So those data would give us extreme evidence for a less dogmatic prior being correct.
On the other hand, the kinds of priors Toby suggests would also typically give excessively low credence to these events taking so long. So the data doesn't seem to provide much active support for the proposed alternative either.
It also seems to me like different kinds of priors are probably warranted for predictions about when a given kind of event will happen for the first time (e.g. the first year in which someone is named Steve) and predictions about when a given property will achieve its maximum value (e.g. the year with the most Steves). It can therefore be consistent to expect the kinds of "firsts" you list to be relatively bunched up near the start of human history, while also expecting relevant "mosts" (such as the most hingey year) to be relatively spread out.
That being said, I find it intuitive that periods with lots of "firsts" should tend to be disproportionately hingey. I think this intuition could be used to construct a model in which early periods are especially likely to be hingey.
Just a quick thought on this issue: Using Laplace's rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point. You suggest 200,000 BC as a start point, but one could of course pick earlier or later years and get out different numbers. So the uniform prior's sensitivity to decisions about how to truncate the relevant time interval isn't a special weakness; it doesn't seem to provide grounds for preferring the Laplacian prior.
I think that for some notion of an "arbitrary superlative," a uniform prior also makes a lot more intuitive sense than a Laplacian prior. The Laplacian prior would give very strange results, for example, if you tried to use it to estimate the hottest day on Earth, the year with the highest portion of Americans named Zach, or the year with the most supernovas.
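The start-point sensitivity can be shown with a minimal sketch. Laplace's rule of succession gives P(first occurrence in the next trial) = 1/(n + 2) after n event-free trials; the candidate start dates below are arbitrary choices for illustration:

```python
# Laplace's rule of succession: after n event-free centuries,
# P(event occurs in the next century) = 1 / (n + 2).
def laplace_next_century(n_centuries_so_far):
    return 1 / (n_centuries_so_far + 2)

# The answer depends heavily on where you start counting from.
for start_years_ago in (50_000, 200_000, 1_000_000):
    n = start_years_ago // 100  # centuries elapsed since the chosen start
    print(f"start {start_years_ago:>9,} years ago: "
          f"P(next century) = 1/{n + 2}")
```

Moving the start from 50,000 to 1,000,000 years ago shifts the answer by a factor of twenty, so the Laplacian prior inherits an arbitrariness of its own.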
So your prior says, unlike Will’s, that there are non-trivial probabilities of very early lock-in. That seems plausible and important. But it seems to me that your analysis not only uses a different prior but also conditions on “we live extremely early” which I think is problematic.
Will argues that it’s very weird we seem to be at an extremely hingy time. So we should discount that possibility. You say that we’re living at an extremely early time and it’s not weird for early times to be hingy. I imagine Will’s response would be “it’s very weird we seem to be living at an extremely early time then” (and it’s doubly weird if it implies we live in an extremely hingy time).
If living at an early time implies something that is extremely unlikely a priori for a random person from the timeline, then there should be an explanation. These 3 explanations seem exhaustive:
1) We’re extremely lucky.
2) We aren’t actually early: E.g. we’re in a simulation or the future is short. (The latter doesn’t necessarily imply that xrisk work doesn’t have much impact because the future might just be short in terms of people in our anthropic reference class).
3) Early people don’t actually have outsized influence. […]
>> And it gets even more so when you run it in terms of persons or person years (as I believe you should). i.e. measure time with a clock that ticks as each lifetime ends, rather than one that ticks each second. e.g. about 1/20th of all people who have ever lived are alive now, so the next century it is not really 1/2,000th of human history but more like 1/20th of it.
And if you use person-years, you get something like 1/7 - 1/14! [1]
>> I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms.
I'm pretty confused about how these dramatically different priors are formed, and would really appreciate it if somebody (maybe somebody less busy than Will or Toby?) could give pointers on how to read up more on forming these sorts of priors. As you allude to, this question seems to map to anthropics, and I'm curious how much the priors here necessarily map to your views on anthropics. E.g., am I reading the post and your comment correctly that Will takes an SIA view and you take an SSA view on anthropic questions?
In general, does anybody have pointers on how best to reason about anthropic and anthropic-adjacent questions?
[1] https://eukaryotewritesblog.com/2018/10/09/the-funnel-of-human-experience/
I don't have time to get into all the details, but I think that while your intuition is reasonable (I used to share it), the maths does actually turn out my way. At least on one interpretation of what you mean. I looked into this when wondering if the doomsday argument suggested that the EV of the future must be small. Try writing out the algebra for a Gott-style prior that there is an x% chance we are in the first x%, for all x. You get a Pareto distribution that is a power law with infinite mean. While there is very little chance on this prior that there is a big future ahead, the size of each possible future compensates for that, such that each order of magnitude of increasing size of the future contributes an equal expected amount of population, so that the sum is infinite.
I'm not quite sure what to make of this, and it may be quite brittle (e.g. if we were somehow certain that there weren't more than 10^100 people in the future, the expected population wouldn't be all that high), but as a raw prior I really think it is both an extreme outside view, saying we are equally likely to live at any relative position in the sequence *and* that there is extremely high (infinite) EV in the future -- not because it thinks there is any single future whose EV is high, but because the series diverges.
This isn't quite the same as your claim (about influence), but does seem to 'save existential risk work' from this challenge based on priors (I don't actually think it needed saving, but that is another story).
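Toby’s divergence claim can be written out explicitly. Under the Gott-style prior, if N people have lived so far, then P(total population ≥ t) = N/t for t ≥ N, a Pareto distribution with density N/t²; the expected population falling in any interval [a, b] is then N·ln(b/a), so each order of magnitude contributes equally and the total diverges. A quick sketch (the N = 100 billion figure is a rough illustrative assumption):

```python
import math

N = 1e11  # ~100 billion people born so far (rough illustrative figure)

# Gott-style prior: P(total population >= t) = N / t for t >= N,
# i.e. an x% chance that we're within the first x% of all people.
# Density f(t) = N / t^2, so expected population in [a, b] is:
#   integral of t * (N / t^2) dt = N * ln(b / a)
def expected_population(a, b):
    return N * math.log(b / a)

# Each order of magnitude of possible future size contributes the same
# expected population, so the series diverges (infinite mean).
for k in range(4):
    a, b = N * 10**k, N * 10**(k + 1)
    print(f"totals in [{a:.0e}, {b:.0e}): expected {expected_population(a, b):.2e}")
```

Every decade of possible future sizes contributes N·ln 10 ≈ 2.3 × 10¹¹ expected people, which is the sense in which the prior assigns infinite expected value to the future despite making any individual large future very unlikely.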
Thanks for this post Will, it's good to see some discussion of this topic. Beyond our previous discussions, I'll add a few comments below.
I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.
I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.
The possibility of engineered plagues causing an apocalypse was a grave concern of forward thinking people in the early 20th century as biological weapons were developed and demonstrated. Many of the anti-nuclear scientists concerned for the global prospects of humanity were also concerned about germ warfare.
Both of the above also had prominent fictional portrayals to come to mind for longtermist altruists engaging in a wide-ranging search. If there had been a longtermist altruist movement trying to c… […]
I think this is a really important comment; I see I didn’t put these considerations into the outside-view arguments, but I should have done, as they make for powerful arguments.
The factors you mention are analogous to the parameters that go into the Ramsey model for discounting: (i) a pure rate of time preference, which can account for risk of pre-emption; (ii) a term to account for there being more (and, presumably, richer) future agents and some sort of diminishing returns as a function of how many future agents (or total resources) there are. Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment. […]
> Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment.
I agree (and have used in calculations about optimal disbursement and savings rates) that the chance of a future altruist funding crash is an important reason for saving (e.g. medium-scale donors can provide insurance against a huge donor like the Open Philanthropy Project not entering an important area or being diverted). However, the particularly relevant kind of event for saving is the possibility of a 'catastrophe' that cuts other altruistic funding or similar while leaving one's savings unaffected. Good Ventures going awry fits that bill better than a nuclear war (which would also destroy a DAF saving for the future with high probability).
Saving extra for a catastro… […]
Hi Carl,
Thanks so much for taking the time to write this excellent response, I really appreciate it, and you make a lot of great points. I’ll divide up my reactions into different comments; hopefully that helps ease of reading.
This is a good idea. Some options: influentialness; criticality; momentousness; importance; pivotality; significance.
I’ve created a straw poll here to see as a first pass what the Forum thinks.
[Edit: Results:
Pivotality - 26% (17 votes)
Criticality - 22% (14 votes)
Hingeyness - 12% (8 votes)
Influentialness - 11% (7 votes)
Importance - 11% (7 votes)
Significance - 11% (7 votes)
Momentousness - 8% (5 votes)]
Now it's officially on BBC: https://www.bbc.com/future/article/20200923-the-hinge-of-history-long-termism-and-existential-risk
Although it also says:
Thinking further, I would go with importance among those options for 'total influence of an era' but none of those terms capture the 'per capita/resource' element, and so all would tend to be misleading in that way. I think you would need an explicit additional qualifier to mean not 'this is the century when things will be decided' but 'this is the century when marginal influence is highest, largely because ~no one tried or will try.'
Criticality is confusing because it describes the point when nuclear reaction becomes self-sustaining, and relates to "critical points" in the related area of dynamical systems, which is somewhat different from what we're talking about.
I think Hingeyness should have a simple name because it is not a complicated concept: it's how much actions affect long-run outcomes. In RL, in discussion of prioritized experience replay, we would just use something like "importance". I would generally use "(long-run) importance" or "(long-run) influence" here, though I guess pivotality (from Yudkowsky's "pivotal act") is alright in a jargon-liking context (like academic papers).
Edit: From Carl's comment, and from rereading the post, the per-resource component seems key. So maybe per-resource importance.
Sorry, I wasn’t meaning we should be entirely punting to the future, and in case it’s not clear from my post, my actual all-things-considered view is that longtermist EAs should be endorsing a mixed strategy of some significant proportion of effort spent on near-term longtermist activities and some proportion spent on long-term longtermist activities.
I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period. Certainly the vibe in the air is ‘expenditure (of money or labour) now is super important, we should really be focusing on that’.
(I also don’t think that diminishing returns is entirely true: there are fixed costs and economies of scale when trying to do most things in the world, so I expect s-curves in general. If so, that would favour a lumpier disbursement schedule.)
I agree that many small donors do not have a principled plan and are trying to shift the overall portfolio towards more donation soon (which can have the effect of 100% now donation for an individual who is small relative to the overall portfolio).
However, I think that institutionally there are in fact mechanisms to regulate expenditures:
- Evaluations of investments in movement-building involve estimations of the growth of EA resources that will result, and comparisons to financial returns; as movement-building returns decline they will start to fall under the financial return benchmark and no longer be expanded in that way
- The Open Philanthropy Project has blogged about its use of the concept of a 'last dollar' opportunity cost of funds, asking for current spending whether in exp… […]
I agree with this, though if we’re unsure about how many resources will be put towards longtermist causes in the future, then the expected value of saving will come to be dominated by the scenario where very few resources are devoted to it. (As happens in the Ramsey model for discounting if one includes uncertainty over future growth rates and the possibility of catastrophe.) This consideration gets stronger if one thinks the diminishing marginal returns curve is very steep.
E.g. perhaps in 150 years’ time, EA and Open Phil and longtermist concern will be dust; in which case those who saved for the future (and ensured that there would be at least some sufficiently likeminded people to pass their resources onto) will have an outsized return. And perhaps returns diminish really steeply, so that what matters is guaranteeing that there are at least some longtermists around. If the outsized return in th… […]
Is longtermism accessible today? That's a philosophy of a narrow circle, as Baconian science and the beginnings of the culture of progress were in 1600. If you are a specialist focused on moral reform and progress today with unusual knowledge, you might want to consider a counterpart in the past in a similar position for their time.
I agree there’s a tricky issue of how exactly one constructs the counterfactual. The definition I’m using is trying to get it as close as possible to a counterfactual we really face: how much to spend now vs how much to pass resources onto future altruists. I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.
I feel that involving the anachronistic insertion of a longtermist altruist into the past, if anything, makes my argument harder to make, though. If I can’t guarantee that the past person I’m giving resources to would even be a longtermist, that makes me less inclined to give them resources. And if I include the possibility that longtermism might be wrong and that the future-person that I pass resources onto will recognise this, that’s (at least some) argument to me in favour of passing on resources. (Caveat subjectivist meta-ethics, possibility of future people’s morality going wayward, etc.)
I tried engaging with the post for 2-3 hours and was working on a response, but ended up kind of bouncing off at least in part because the definition of hingyness didn't seem particularly action-relevant to me, mostly for the reasons that Gregory Lewis and Kit outlined in their comments.
I also think a major issue with the current definition is that I don't know of any technology or ability to reliably pass on resources to future centuries, which introduces a natural strong discount factor into the system, but which seems like a major consideration in favor of spending resources now instead of trying to pass them on (and likely fail, as illustrated in Robin Hanson's original "giving later" post).
Thanks, I’ve updated on this since writing the post and think my original claim was at least too strong, and probably just wrong. I don’t currently have a good sense of, say, if I were living in the 1950s, how likely I would be to figure out AI as the thing, rather than focus on something else that turned out not to be as important (e.g. the focus on nanotech by the Foresight Institute (a group of idealistic futurists) in the late 80s could be a relevant example).
I'd guess a longtermist altruist movement would have wound up with a flatter GCR portfolio at the time. It might have researched nuclear winter and dirty bombs earlier than in OTL (and would probably invest more in nukes than today's EA movement), and would have expedited the (already pretty good) reaction to the discovery of asteroid risk. I'd also guess it would have put a lot of attention on the possibility of stable totalitarianism as lock-in.
Your argument seems to combine SSA style anthropic reasoning with CDT. I believe this is a questionable combination as it gives different answers from an ex-ante rational policy or from updateless decision theory (see e.g. https://www.umsu.de/papers/driver-2011.pdf). The combination is probably also dutch-bookable.
Consider the different hingeynesses of times as the different possible worlds and your different real or simulated versions as your possible locations in that world. Say both worlds are equally likely a priori and there is one real version of you in both worlds, but the hingiest one also has 1000 subjectively indistinguishable simulations (which don't have an impact). Then SSA tells you that you are much less likely to be a real person in the hingiest time than a real person in the 20th hingiest time. Using these probabilities to calculate your CDT-EV, you conclude that the effects of your actions on the 20th hingiest time dominate.
Alternatively, you could combine CDT with SIA. Under SIA, being a real person in either time is equally likely. Or you could combine the SSA probabilities with EDT. EDT would recommend acting as if you were controlling all simulati
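The numbers in the two-world setup above can be worked out explicitly. This is just a sketch of the example as stated (two equally likely worlds, one real copy of you in each, plus 1000 simulations in the hingiest world):

```python
# Worked numbers for the anthropic example above: two equally likely
# worlds; the "hingiest" world also contains 1000 impact-less
# simulations of you; the 20th-hingiest world contains just you.

prior = {"hingiest": 0.5, "20th_hingiest": 0.5}
observers = {"hingiest": 1001, "20th_hingiest": 1}  # copies of "you" per world

# SSA: condition on the world, then spread credence uniformly over the
# observers in your epistemic situation within that world.
ssa_real = {w: prior[w] * (1 / observers[w]) for w in prior}

# SIA: weight each observer-slot by the world's prior, then normalise
# over all observer-slots; one slot per world is the real person.
total = sum(prior[w] * observers[w] for w in prior)
sia_real = {w: prior[w] * 1 / total for w in prior}

print("SSA P(I am the real person in w):", ssa_real)
print("SIA P(I am the real person in w):", sia_real)
```

Under SSA, being the real person in the 20th-hingiest world is 1001 times as likely as being the real person in the hingiest world, which is why the CDT-EV ends up dominated by the 20th-hingiest time; under SIA the two probabilities are equal, as the comment says.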
Excellent work; some less meritorious (and borderline repetitious) remarks:
1) One corollary of this line of argument is that even if one is living at a 'hinge of history', one should not reasonably believe this, given the very adverse prior and the likely weak confirmatory evidence one would have access to.
2) The invest for the future strategy seems to rely on our descendants improving their epistemic access to the point where they can reliably determine whether they're at a 'hinge' or not, and deploying resources appropriately. There are grounds for pessimism about this ability ever being attained. Perhaps history (or the universe as a whole) is underpowered for these inferences.
3) Although, with the benefit of hindsight, we could assess the distribution of hingeyness/influence across previous times, and so get a steer as to whether we should think there are hingey periods of vastly outsized influence in the first place.
4) If we grant the ground truth is occasional 'crucial moments', but we expect evidence at-the-time for living in one of these is scant, my intuition is the optimal strategy would be to husban...
One of the amusing things about the 'hinge of history' idea is that some people make the mediocrity argument about their present time - and are wrong.
Isaac Newton, for example, appears to have made an anthropic argument 300 years ago that claims he lived in a special time (any kind of, say, 'Revolution', as suggested by the visible acceleration of progress and the recent invention of technologies) were wrong; in reality, there was an ordinary rate of innovation, and the recent invention of many things merely showed that humans had a very short past and were still making up for lost time (because comets routinely drove intelligent species extinct).
And Lucretius ~1800 years before Newton (probably relaying older Epicurean arguments) made his own similar argument, arguing that Greece & Rome were not any kind of exception compared to human history - certainly humans hadn't existed for hundreds of thousands or millions of years! - and if Greece & Rome seemed innovative compared to the dark past, it was merely because "our world is in its youth: it was not created long ago, but is of comparatively recent origin. That is why at ...
I think the outside view argument for acceleration deserves more weight. Namely:
- Many measures of "output" track each other reasonably closely: how much energy we can harness, how many people we can feed, GDP in modern times, etc.
- Output has grown 7-8 orders of magnitude over human history.
- The rate of growth has itself accelerated by 3-4 orders of magnitude. (And even early human populations would have seemed to grow very fast to an observer watching the prior billion years of life.)
- It's pretty likely that growth will accelerate by another order of magnitude at some point, given that it's happened 3-4 times before and faster growth seems possible.
- If growth accelerated by another order of magnitude, a hundred years would be enough time for 9 orders of magnitude of growth (more than has occurred in all of human history).
- Periods of time with more growth seem to have more economic or technological milestones, even if they span less calendar time.
- Heuristics like "the next X years are very short relative to history, so probably not much will happen" seem to have a very bad historical track record when X is enough time for lots of growth to occur, and so it seem
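The orders-of-magnitude arithmetic in the bullets above is easy to check. As a sketch, with assumed illustrative rates (roughly 3%/yr for modern output growth, and 30%/yr standing in for "an order of magnitude faster"; neither number is from the comment):

```python
import math

# Illustrative check of the growth-acceleration arithmetic above.
# Assumed rates: ~3%/yr roughly matches modern world output growth;
# 30%/yr stands in for growth "an order of magnitude" faster.

def orders_of_magnitude(rate, years):
    """Orders of magnitude of growth over `years` at constant annual `rate`."""
    return years * math.log10(1 + rate)

print(orders_of_magnitude(0.03, 100))  # ~1.3 OOM in a century at today's pace
print(orders_of_magnitude(0.30, 100))  # ~11 OOM, more than all of human history
```

So a century at ten-times-faster growth does indeed deliver well over the 9 orders of magnitude claimed, which is more growth than the 7-8 orders of magnitude over all of human history so far.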
Meta-comment: the level of discussion here has been fantastic. It's nice that these complex issues are discussed in this format; publicly and relatively informally (though other formats obviously have their advantages too). Thanks to all contributors.
Great discussion here, top quality comments. To make one aspect of this a bit clearer I made this figure of different 'hingeiness' trajectories and their implications:
Will adds: "In this post I’m just saying it’s unlikely we’re at A2, rather than at some other point in that curve, or on a different curve, and the evidence we have doesn’t give us strong enough evidence to think we’re at A2.
But then yeah it’s a really good point that even if one thinks hinginess is increasing locally, and feels confident about that, it doesn’t mean we’re atop the last peak.
A related point from the graphs: even if hinginess is locally decreasing faster than the real rate of interest, that’s still not sufficient for spending, if there will be some future time when hinginess starts increasing or staying the same or slowing to less than the real rate of interest (as long as you can save for that long)."
Upvote for using graphics to elucidate discussion on the Forum. Haven't seen it often and it's very helpful!
As a side note, Derek Parfit was an early advocate of what you call the 'Hinge of History Hypothesis'. He even uses the expression 'hinge of history' in the following quote (perhaps that's the inspiration for your label):
Interestingly, he had expressed similar views already in 1984, though back then he didn't articulate why he believed that the present time is uniquely important:
Thanks, Pablo! Yeah, the reference was deliberate — I’m actually aiming to turn a revised version of this post into a book chapter in a Festschrift for Parfit. But I should have given the great man his due! And I didn’t know he’d made the ‘most important centuries’ claim in Reasons and Persons, that’s very helpful!
Thanks Pablo, I also didn't know he had claimed this at the very time he was introducing population ethics and extinction risk.
In his excellent Charity Cost Effectiveness in an Uncertain World, first published in 2013, Brian Tomasik calls this approach 'Punting to the Future'. Unless there are strong reasons for introducing a new label, I suggest sticking to Brian's original name, both to avoid unnecessary terminological profusion and to credit those who pioneered discussion of this idea.
Great post!
Minor point, but I think this is unclear. On AI see e.g. here. On synbio I'm less familiar but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."
+1. I don't know the intellectual history well but the risk from engineered pathogens should have been apparent 4 decades ago in 1975 if not (more likely, IMO) earlier.
A fairly random sample of writing on the topic:
- Jack London's 1910 short story "An Unparalleled Invasion" [CW: really racist] imagines genocide through biological warfare and the possibility that a "hybridization" between pathogens created "a new and frightfully virulent germ" (I don't think he's suggesting the hybridization was intentional but it's a bit ambiguous).
- the possibility of engineering pathogens was seriously discussed 4 decades ago at the Asilomar Conference in 1975.
- There's a 1982 sci-fi book by a famous writer where a vengeful molecular biologist releases a pathogen engineered to be GCR-or-worse.
- In 1986, a U.S. Defense Department official was quoted saying "The t...
Szilard anticipated nuclear weapons (and launched a large and effective strategy to cause the liberal democracies to get them ahead of totalitarian states, although with regret), and was also concerned about germ warfare (along with many of the anti-nuclear scientists). See this 1949 story he wrote. Szilard seems very much like an agenty sophisticated anti-xrisk actor.
Just a quick thought: I wonder whether the hingiest times were during periods of potential human population bottlenecks. E.g., Wikipedia says:
(Note that the Wikipedia article doesn't seem super well done, and also that it appears there has been significant scholarly controversy around population bottleneck claims. I don't want to claim that there in fact were population bottlenecks; I'm just curious what the implications in terms of hinginess would be if there were.)
As a first pass, it seems plausible to me that e.g. the action of any one of...
This was very thought-provoking. I expect I'll come back to it a number of times.
I suspect that how the model works depends a lot on exactly how this definition is interpreted:
In particular, I think you intend direct work to include extinction risk reduction, and to be opposite to strategies which punt decisions to future generations. However, extinction risk reduction seems like the mother of all punting strategies, so it seems naturally categorised as not direct work for the purpose of considering whether to punt. Due to this, I expect some weirdness around the categorisation, and would guess that a precise definition would be productive.
(Added formatting and bold to the quote for clarity.)
How I see it:
Extinction risk reduction (and other type of "direct work") affects all future generations similarly. If the most influential century is still to come, extinction risk reduction also affects the people alive during that century (by making sure they exist). Thus, extinction risk reduction has a "punting to future generations that live in hingey times" component. However, extinction risk reduction also affects all the unhingey future generations directly, and the effects are not primarily mediated through the people alive in the most influential centuries.
(Then, by definition, if ours is not a very hingey time, direct work is not a very promising strategy for punting. The effect on people alive during the "most influential times" has to be small by definition. If direct work did strongly enable the people living in the most influential century (e.g. by strongly increasing the chance that they come into existence), it would also enable many other generations a lot. This would imply that the present was quite hingey after all, in contradiction to the assumption that the present is unhingey.)
Punting strategies, in contrast, affect future generations primarily via their effect on the people alive in the most influential centuries.
Nice post :) A couple of comments:
To me it seems that the biggest constraint on being able to invest in future centuries is the continuous existence of a trustworthy movement from now until then. I imagine that a lot of meta work implicitly contributes towards this; so the idea that the HoH is far in the future is an argument for more meta work (and more meta work targeted towards EA longevity in particular). But my prior on a given movement remaining trustworthy over long time periods is quite low, and becomes lower the more money it is entrusted with.
To the ones you listed, I would add:
- The time period during which we reach technological
I want to push back on the idea of setting the "ur-prior" at 1 in 100,000, which seems far too low to me. I'll also critique the method that arrived at that number, and propose a method of determining the prior that seems superior to me.
(One note before that: I'm going to ignore the possibility that the hingiest century could be in the past and assume that we are just interested in the question of how probable it is that the current century is hingier than any future century.)
First, to argue that 1 in 100,000 is too low: The hingiest century of the future must occur before civilization goes extinct. Therefore, one's prior that the current century is the hingiest century of the future must be at least as high as one's credence that civilization will go extinct in the current century. I think this is already (significantly) greater than 1 in 100,000.
I'll come back to this idea when I propose my method of determining the prior, but first to critique yours:
The method you used to come up with the 1 in 100,000 prior that our current century is hingier than any future...
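The lower-bound argument above can be sketched as a toy simulation. Assumptions (mine, not the commenter's): a constant 1% per-century extinction probability, and a prior that is uniform over whichever centuries civilization actually survives through (the hingiest future century must be one of these):

```python
import random

# Toy model of the lower-bound argument: the hingiest *future* century
# must come before extinction, so a prior that is uniform over the
# surviving centuries gives the current century at least as much
# probability as extinction-this-century. The 1% per-century extinction
# rate is an illustrative assumption.

random.seed(0)
P_EXTINCT = 0.01     # assumed constant per-century extinction probability
TRIALS = 50_000

hits = 0
for _ in range(TRIALS):
    # Sample how many centuries civilisation survives (geometric).
    n = 1
    while random.random() > P_EXTINCT:
        n += 1
    # Uniform prior over which of the n surviving centuries is hingiest;
    # century 1 is the current century.
    if random.randint(1, n) == 1:
        hits += 1

print(hits / TRIALS)  # roughly 0.046: far above 1 in 100,000, and above P_EXTINCT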
Using a distribution over possible futures seems important. The specific method you propose seems useful for getting a better picture of max_i P(century i most leveraged). However, what we want in order to make decisions is something more akin to max_i E[leverage of century i]. The most obvious difference is that scenarios in which the future is short and there is little one can do about it score highly on expected ranking and low on expected value. I am unclear on whether a flat prior makes sense for expectancy, but it seems more reasonable than for probability.
Of course, even max_i E[leverage of century i] does not accurately reflect what we are looking for. Similarly to Gregory_Lewis' comment, the decision-relevant thing (if 'punting to the future' is possible at all) is closer still to max_i E[what we will assess the leverage of century i to be at the time], i.e. whether we will have higher expected leverage in some future century according to our beliefs at that time. Thinking this through, I also find it plausible that even this does not make sense when using the definitions in the post, and will make a related top-level comment.
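The divergence between the two criteria can be shown with a two-future toy example (all numbers invented): a likely short future where the current century trivially "wins" the ranking with tiny leverage, and a less likely long future where a later century has enormous leverage.

```python
# Toy illustration (invented numbers) of how argmax_i P(century i most
# leveraged) can diverge from argmax_i E[leverage of century i].

futures = [
    # (probability, {century: leverage of that century in this future})
    (0.6, {1: 0.1, 5: 0.0}),    # short future: century 1 "wins", tiny leverage
    (0.4, {1: 0.1, 5: 100.0}),  # long future: century 5 has enormous leverage
]

centuries = [1, 5]
p_most = {c: sum(p for p, lev in futures if max(lev, key=lev.get) == c)
          for c in centuries}
e_lev = {c: sum(p * lev[c] for p, lev in futures) for c in centuries}

print(p_most)  # century 1 is most likely to be the most leveraged
print(e_lev)   # ...but century 5 has far higher expected leverage
```

Here century 1 maximises the probability of being most leveraged (0.6 vs 0.4) while century 5 maximises expected leverage, which is exactly why short-future scenarios can dominate the ranking criterion without dominating the decision-relevant one.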
While I agree with you that max_i P(century i most leveraged) is not that action-relevant, it is what Will is analyzing in the post, and I think that William Kiely's suggested prior seems basically reasonable for answering that question. As Will said explicitly in another comment:
I do think that the focus on max_i P(century i most leveraged) is the part of the post that I am least satisfied by, and that makes it hardest to engage with it, since I don't really know why we care about the question of "are we in the most influential time in history?". What we actually care about is the effectiveness of our interventions to give resources to the future, and the marginal effectiveness of those resources in the future, both of which are quite far removed from that question (because of the difficulties of sending resources to the future, and the fact that the answer to that question makes overall only a small difference for the total magnitude of the impact of any ...
I agree that, among other things, discussion of mechanisms for sending resources to the future would be needed to make such a decision. I figured that all these other considerations were deliberately excluded from this post to keep its scope manageable.
However, I do think that one can interpret the post as making claims about a more insightful kind of probability: the odds with which the current century is the one which will have the highest leverage-evaluated-at-the-time (in contrast to an omniscient view / end-of-time evaluation, which is what this thread mostly focuses on). I think that William_MacAskill's main arguments are broadly compatible with both of these concepts, so one could get more out of the piece by interpreting it as about the more useful concept.
Formally, one could see the thing being analysed as