
The governments of the world disburse a combined $25 trillion annually through their budgets and enjoy a monopoly on coercion. Longtermism will be maximally impactful once it gets codified into the political processes by which governments decide where to allocate their resources and power. To this end, Tyler John and Will MacAskill have written an interesting paper proposing several reforms that would make governments significantly more longtermist.

Considerations

Before we get into the nitty-gritty of John and MacAskill's proposals, here are some important broader considerations regarding longtermist governance. 

Why this matters

It’s reasonable to worry that talk of institutional reform is pointless - changing how a government works is not exactly a tractable problem.

Political reform is very much a power law domain, where the median law has roughly no consequence, but the most important ones have extraordinary and persistent impact. Some examples:

  • Liberalization of the Chinese economy under Deng Xiaoping after Mao died in 1976
  • Indian economic reform in the 90s after a balance-of-payments crisis forced the government to address festering problems.
  • The incorporation of environmental protection into US law in the 70s via the creation of the EPA and the passage of NEPA.

Often this moment of political malleability is precipitated by a crisis, as the examples of China and India show. It is very important to have your ideas for reform in the air when some catalyst for plasticity arrives. By the time we have a deadlier pandemic, longtermists should be ready with a specific list of institutional reforms for governments to implement.

That's why we have to go over existing proposals for longtermist reform and check whether they are likely to be useful. We also need to see how these proposals could be made more effective and concrete.

State Capacity Longtermism

John and MacAskill want to add new longtermist agencies or assemblies to the government. But these will not be very useful if the state capacity they would rely on continues to degrade. Improving state capacity alone would go a long way towards making the government respond coherently and effectively to threats to our future.

A US government with higher state capacity will be better able to respond to the next pandemic or to novel threats like AI, and to organize international action on issues like climate change and dual-use research.

Our administrative agencies are often disorganized and have overlapping or even mutually contradictory mandates. This frequently leads to a situation where the government has lost the capacity to proactively address a crisis but still retains the means to destroy private efforts to respond. We saw an example of this at the beginning of the pandemic, when the FDA banned companies from producing at-home tests while the CDC failed to mass-manufacture working tests.

I admit that improving state capacity is not exactly a tractable problem either. But it’s more plausible in the short term than John and MacAskill’s proposal to create a house of parliament dedicated to future generations. That’s a really interesting idea, and I think we should discuss it! But as long as we are discussing something that speculative, we should also talk about how we might improve state capacity.

Defending optimistic worldviews

In the section on the sources of short-termist biases, John and MacAskill write:

Cognitive biases include actors’ tendencies to respond more strongly to vivid risks than to information acquired from abstract, general social scientific trends, as well as over-optimism about their ability to control and eliminate risks under situations of uncertainty. The attention that political actors pay to the future and to the nearby past are asymmetric because voters and many other political actors “can readily observe past economic performance but have little information about future conditions.” Thus, to economize on cognitive effort, many political actors forego the task of making predictions about the future and choose policies which have worked in the recent past. [emphasis mine]

What John and MacAskill are describing here doesn’t sound like a bias - it sounds like an actual political philosophy, one which people like Matt Ridley or Steven Pinker would probably endorse. There are many reasonable people who believe that we should extrapolate from past performance rather than “abstract, general social scientific trends”, and that we should be more optimistic with regards to our ability to deal with risks in due time rather than rely on hastily implemented policies.

The people who believe this might be wrong, but you have to actually argue that they’re mistaken instead of just dismissing their worldview as a cognitive bias. Arguably their philosophy was a useful corrective to issues in the past that involved long-term trends. The people who responded to the concrete overpopulation scare in the 20th century with vague optimism about our ability to feed more people were correct, whereas the people who had “abstract social scientific” reasons for expecting resources to run out were wrong, and disastrously so given the mass sterilization and population control programs they inspired in India and China. (Obviously, I don’t think John and MacAskill would endorse those atrocities - my point is simply that what they call a cognitive bias would have prevented all that unnecessary suffering.)

To zoom out a bit, we should be careful that we don’t implement longtermist reform in a way that dismisses the optimistic philosophy of governance which places greater weight on past experiences.

Does medium-termism correspond to long-termism?

If you apply a zero discount rate, then most of the people you ought to care about will live thousands of years into the future. But the further out you go in time, the harder it is to tell whether what you are doing is actually helpful. You might prefer to just optimize for outcomes 50 years down the line and hope that this is in rough correspondence to outcomes 500 or 5000 years into the future. But is this a reasonable assumption?

This is an extremely important question for actually doing anything actionable about longtermism. Many of John and MacAskill’s ideas rely on us being able to create decent metrics for longterm performance. Such metrics are what their proposed research institutes would produce, and they are the basis on which the pensions of the House members would be disbursed. But is anything we can measure in the medium term a reasonable correlate of a prosperous far future?

For example, on a 100 year time horizon, economic growth seems very important. Raising growth rates from 2% to 3% would make us 2.7x wealthier at the end of that century. But how much should a strong longtermist government care about this? Is being 2.7x wealthier in 100 years a good proxy for the probability of a flourishing galactic civilization in 10,000 years?
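
As a quick check on that arithmetic, here is a back-of-the-envelope calculation (my own, not from the paper):

```python
# Compounding 2% vs. 3% annual growth over a 100 year horizon.
low = 1.02 ** 100    # ~7.2x richer
high = 1.03 ** 100   # ~19.2x richer
print(round(high / low, 2))  # ~2.65, i.e. roughly 2.7x wealthier from the extra percentage point
```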

I won’t rehash all the arguments pro and con here (that would be especially cruel given how long this post already is), but it is something we need a good answer to before we can go about making longtermist reforms.

 

Review of Longtermist Institutional Reform by John and MacAskill

John and MacAskill put forward four longtermist institutional reform proposals in this paper. Let’s evaluate them one by one.

Posterity Impact Statements

John and MacAskill:

[Posterity impact statement] requirements combat uncertainty about policy causation by requiring legislators to thoroughly research and publicize the long-term effects of their proposed policy for the opposing political party to scrutinize. They also hold legislators liable for the long-term effects of their decisions. Depending on the scheme, the associated liability mechanism can be “soft” in that it relies only on informal punitive and reward mechanisms, such as the embarrassment associated with putting forward a bill with harmful long-term effects, or it can be “hard” in that it is backed by formal sanctions, such as the requirement that legislators pay an insurance premium to cover expected damages…

Posterity impact statement requirements should have triggering conditions and enforcement mechanisms which ensure that they are required in any conditions where posterity is affected, positively or negatively. The bill in front of the House of Lords ensures that PIAs are triggered on appropriate occasions by making them universally required, but there are various other triggering conditions that may suffice: PIAs could be required on submajority vote of the legislature, or upon order of a court. Ideally, PIA policy should require a zero rate of pure time preference and an open-ended assessment period. Significant impacts on future generations should not be treated as null merely because they are centuries away; we should ignore these effects only when there is no reason to think they are more likely on the proposed policy than its alternative.

I think this is likely a bad idea, and it will suffer from the same failure modes that environmental impact statements have faced.

John and MacAskill say that, “these reports are functionally an extension of the environmental impact statements required by many governments for policy proposals with a potentially adverse impact on the environment.” Given this inspiration, I highly recommend people read Brian Potter’s breakdown of how the National Environmental Policy Act works. This law created the current regime of environmental review in the United States. 

Here are some concrete ways in which posterity impact statements may cause the same adverse outcomes as NEPA:

  • Brian Potter points out that you can think of environmental review as a tax on all major actions, and like any other tax, it reduces the activity it applies to. Posterity impact statements are likely to have a similar effect. They lock in the status quo by making new action more expensive. To the extent that you think we are on track to create a flourishing future civilization, this is fine. But if you think we may need to adopt more innovative, proactive, and speculative longtermist policies, then this is a problem.
  • Environmental review requires a detailed analysis of any project that impacts the environment. But this criterion also covers projects that would help the environment, so the requirement actually deters the very projects aimed at such improvements. A recent article highlights some examples: 
    "[A] wind farm off Cape Cod that fought lawsuits for 16 years before giving up, a wind farm in Wyoming that first applied for federal permits in 2008 and finally got them in 2019, a ban on solar farms in one of the sunniest places in the world because they would ruin the views."
    Posterity Impact Statements would likely have a similar impact. John and MacAskill say that, “Posterity impact statement requirements should have triggering conditions and enforcement mechanisms which ensure that they are required in any conditions where posterity is affected, positively or negatively [emphasis mine].”
    It’s 2030, and a forward-looking Secretary of Defense wants to create an AI alignment research program. Well, that’s definitely going to have some kind of impact on the future. And it will take a year to fully assess those impacts via a posterity review, by which time GPT-12 will have made GPT-10 look like an embarrassing anachronism.
  • NEPA litigation has often been used cynically by groups who have no credible interest in the environment to prevent projects which they object to for ideological or self-interested reasons independent of the environment. As Brian Potter explains:
    "The frequency of NEPA litigation is partly due to the fact that, while NEPA lawsuits often target legitimate inadequacies …, they are sometimes used as a weapon by activist groups to try to stop projects they don’t like. While lawsuits can’t stop a project permanently, the hope is that a lawsuit will result in an injunction that stops the project temporarily, and that the delay will make the project unattractive enough to cancel." 
    When asked whether posterity impact statements would just create another mechanism for special interest groups to block and delay projects they don’t like, Tyler John responds:
    "It's certainly true that EIAs have frequently been used to block and delay projects on spurious grounds, and the point here that PIAs are less epistemically tractable is spot-on and important. One advantage of PIAs in the legislature is that many more resources can be put to ensuring that they are objective and accurate than can be put into, say, local jurisdictions, given the much greater resources of the federal government and the fewer number of items requiring assessment. An idea we considered but didn't include here is that an independent, non-partisan body such as the in-government research institutions we defend could perform the impact assessments, taking them out of the hands of politicians who might use them for more obstructionist ends. But I remain quite uncertain on the best mechanism for ensuring that PIAs fulfill their information-gathering and soft censure functions rather than becoming used primarily to fuel partisan obstructionism, and I'd certainly be interested in other ideas."
    NEPA is enforced at the federal level, so its problems are not caused by a lack of resources at the local level. Nor is it enforced by politicians, so neither of these points suggests a way in which environmental review (or the posterity review proposal it inspires) could be improved.

John and MacAskill may respond that the delays and costs that NEPA has caused are the result of dysfunction or understaffing at the EPA or in the courts, and that we could learn the lessons from their failures before implementing Posterity Impact Statements. If that’s the case, then it would be helpful to actually have that list of lessons compiled so we could avoid repeating the same mistakes. 

Here are a few suggestions:

  • Don’t enforce via the courts. The way NEPA is enforced is that someone sues a federal agency for not complying with NEPA, and the judge rules whether said agency had complied with NEPA or not.
    This has two unfortunate effects. First, it allows private interest groups to easily throw a wrench into any government project they don’t like by cynically calling upon NEPA. 
    Second, because rules are set by judicial precedent from multiple independent judges instead of a single, coherent regulatory body, agencies are never sure whether they are complying, and they have to dedicate significant overhead to anticipating these changing requirements. As Brian Potter explains,
    "[Judicial enforcement of NEPA] creates something of a moving target for NEPA compliance. Agencies must be constantly monitoring court outcomes to determine what compliance requires (this is sometimes described as “NEPA common law”), and over time more and more potential impacts have had to be included in NEPA analyses."
    In Political Order and Political Decay, Francis Fukuyama points out the general problem created by judicial execution of bureaucratic duties:
    "The story of the courts [in the United States] is one of the steadily increasing judicialization of functions that in other developed democracies are handled by administrative bureaucracies, leading to an explosion of costly litigation, slowness of decision making, and highly inconsistent enforcement of laws."
    Instead, task a specific, identifiable agency with enforcing posterity impact statements. If their judgements are unreasonable, contradictory, or inconsistent, then there is a specific agency head that can be fired and replaced instead of a vast and unmanageable judiciary.
  • Replace other veto points. Every movement wants to add some review process by which it can ensure that its interests are factored into political decision making. Each such addition adds more delay, bureaucracy, and ultimately dysfunction to the government. As I explained in the State Capacity Longtermism section above, reducing dysfunctional governance should perhaps be the main priority of longtermists.
    If we just keep adding veto points, we’re making it much harder to implement any longtermist solutions. You want to add Early Detection Centers at airports in order to catch novel pathogens as early as possible? First, we need a year to go over your environmental impact statements to see how metagenomic sequencing might burden mother nature. Of course, it’ll take us a bit to make sure your facility is compliant with the Americans with Disabilities Act and all relevant OSHA regulations. Oh, and don’t forget to complete your Posterity Impact Statement!
    If the consequences to future generations matter more than other considerations, then we need to clean out the kludge of other veto points that would enervate all longtermist projects.

In-government research institutions and archivists

John and MacAskill write:

Numerous sources of short-termism can be ameliorated through the production of digestible, widely-available, legitimate, and high-quality information about future trends and the future effects of policy. We therefore propose that existing national governments invest in the creation of many new in-government research institutions with the express purpose of information-gathering and information-sharing about issues of long-term importance. They should be tasked with producing periodic, public reports that (1) chronicle long-term trends, (2) summarize extant research to improve its accessibility by the legislature, (3) analyze the expected impacts of policy, and (4) identify matters of long-term importance that fall outside of the political business cycle.

This seems overall like a good idea. 

My main concern is that this institution would suffer from an epistemic hubris that is inappropriate when making far future predictions. In the 70s, it was a common belief among the relevant technical experts that we would hit peak oil by the 90s. They could not have anticipated the new technologies that made more oil reserves accessible to us. If there was a longtermist research institute within the government at that time, it would have recommended that we stock up on foreign oil, and the end result of this would have been unaffordable transportation and heating for the poorest people on the planet. 

It would be wise for the researchers at such an organization and the legislators who use its guidance to realize that projections about the future tend to have a pessimistic bias, because we cannot account for how the unpredictable growth of knowledge will dampen many problems we currently consider catastrophic.

We can try to compare this proposal to other non-partisan research institutions to see how accurate such bodies have been and whether their projections are used by legislators when making laws. The prominent point of comparison in the United States is the Congressional Budget Office, which tries to estimate the fiscal consequences of proposed laws. So how accurate has the CBO been? A 2020 report from the CBO about their past performance suggests that their estimates are quite accurate within a year or two but decay significantly when projected out in time - a troubling prospect for future impact research organizations. The main benefit of the CBO has frankly just been to raise the salience of fiscal issues - a concrete projection from a non-partisan government institution gives CNN and FOX the raw material for their shiny infographics. If you want longterm issues highlighted on these channels, sending them headline numbers from the Congressional Longtermist Office would be a great way to do it.

There’s also the question of how you decide who gets to serve on this research institution. John and MacAskill answer:

The best in-government research institutions will generally be structurally and functionally independent of existing government offices, with the power to set their own research agenda, in order to insulate them from the political business cycle. It may also improve institutional independence to identify researcher selection mechanisms which do not rely on the judgment of politicians, such as by tasking relevant professional associations with selecting researchers. [emphasis mine]

This strikes me as a bad idea. Letting professional associations select researchers is likely to lead to the emergence of guilds that control who can serve in important government roles. It would incentivize these associations to promote policies that extend their influence and serve their backers.

Political appointees actually tend to do a good job provided that, once they are appointed, they are not accountable to short-termist democratically elected officials. Arguably, the Federal Reserve has been the most functional US government institution in the aftermath of the pandemic, and its chair is appointed by the President and confirmed by the Senate. Yet neither of them gets to micromanage the Fed, which allows for long-term decision making insulated from short-term political considerations.

Future Assemblies

John and MacAskill propose creating assemblies of randomly selected citizens to provide advice to their governments about decisions affecting the future:

To reduce the damaging influence of polarization, short-term institutional incentives, and motivational failures, we propose the creation of a novel representative, deliberative, and future-oriented body: the futures assembly. Futures assemblies are permanent citizens’ assemblies with an explicit mandate to represent the interests of future generations. As citizens’ assemblies, they are deliberative bodies of citizens who are randomly selected from the populace to provide non-binding advice to the national government on issues of long-term importance. 

My main concern about such an assembly is … well, let’s put it this way. Half the people are stupider than average, and the average person isn’t exactly a genius. 

I think it made sense for Ireland to have a citizens’ assembly debate whether its constitutional ban on abortion should remain - everyone can understand the moral tradeoffs relevant to that question, and it is arguably proper for the government to capture people’s intuitions on that topic into policy.

But will an assembly of random people be able to advise the government on the risks of unaligned AI or gain-of-function research? To be clear, I don’t think someone like me should be on such an assembly either. But I suspect that some of the most important questions about the longterm will require specialized skills and knowledge which most of the population will lack.

My second concern is that John and MacAskill cite polarization and election incentives as some of the main reasons why governments can’t adequately address longterm challenges. But social desirability bias is just as big a distortion of good decision making: people want to be seen as believing things that others consider good, and public deliberation in large crowds is likely to make that bias worse.

Legislative Houses for Future Generations

John and MacAskill write:

Over the coming decades and centuries, however, longtermists should consider much stronger institutional reforms that can transform governments into the kinds of institutions that can positively shape the future on very long timescales. While it is currently difficult to imagine exactly what sorts of institutions could do this, we propose one possibility: an upper house in the legislative branch of government devoted exclusively to the well-being of future generations. 

In the system we envision, bicameral national legislatures would be constituted by a lower house focused on attending to the interests of the people who exist today and an upper house focused on attending to the interests of all future generations. Legislation may be proposed in either house, but must be passed by both houses to become law. Thus, each house would provide a check on the other, ensuring that neither future-oriented nor present-focused legislation can be dominant. A strong constitution providing basic rights and freedoms to both presently-existing and future people would provide another strong backstop against the tyranny of either house…

Two major questions are relevant to the design of a successful legislative house for future generations: who serves, and how do we ensure they have the right incentives?

Let’s consider each:

Eligibility

John and MacAskill:

Random selection of legislators from among voting-eligible citizens may provide the best mechanism for deciding who serves, given its aforementioned elimination of short-term incentives from elections, party interests, and campaign financing, as well as its ameliorative effects on industry corruption and partisan polarization.

This is probably a bad idea. It assumes that the problem with the political system is mainly partisanship and special interests, rather than the fact that the average voter (and by implication, a random citizen legislator) is irrational and ignorant. The ability to digest technical information and understand counterfactuals, hypotheticals, and exponentials is simply not widely dispersed in the population, and without it, one cannot deliberate about longterm problems.

A subset of the legislators might be selected at random from among eligible experts, stratified by area of expertise, in order to ensure technocratic competence across a range of issues.

This idea seems likely to suffer from the same problem of guilds which we discussed in the section on in-government research institutions and archivists, and I won’t rehash that argument.

Incentives

1.

John and MacAskill:

To ensure that the House has the right incentives, we suggest three further mechanisms. First, the House should have objective and concrete long-term performance metrics which are set in close deliberation between the House and an informed and non-partisan body, such as an independent research institution for future generations. These metrics should be updated regularly to correct for prediction errors and new developments. 

It would be really helpful to know which kinds of performance metrics John and MacAskill have in mind. It’s possible that there are certain metrics which are less susceptible to Goodhart’s Law and correspond well to what we would intuitively and aesthetically call a flourishing civilization. 

My main concern would be that these research institutes will pick metrics simply because they are salient in the current political discourse but don’t matter all that much in the longterm scheme of things. Since modern examples will be contentious, let me pick one from the past. Imagine if the US government had decided in the 1920s that one of its most important metrics was the percentage of people who consumed alcohol. I’m not saying alcohol is good for you, but this is an issue that was politically prominent at the time yet wasn’t an important longterm priority.

2.

John and MacAskill:

Second, the sole constitutional mandate of the House should be to set and pursue the achievement of long-term performance metrics. This would have some effect on the way House legislators conceive of their work and on the kinds of public justifications they can offer for their actions: any justification given to the media or in proposed legislation must cite concrete performance metrics. 

This is probably a mistake. A member of the House may disagree about whether a particular performance metric is relevant or important, or she may disagree with the independent research institute’s projection of how some legislation will impact these metrics. The reason we have legislators in the first place is to exercise these judgements. As Francis Fukuyama explains in Political Order and Political Decay:

Formal systems that minutely measure performance and punish poor performance often produce what the political scientist Jane Mansbridge labels “sanction-based accountability,” a modern version of Taylorism that is based more on fear than loyalty. Such systems are premised on the idea that workers cannot be trusted to do their jobs in the absence of careful external monitoring; they are surefire ways of killing risk taking and innovation on the part of those being evaluated. Because these procedures, designed to increase accountability and therefore legitimacy, have the ultimate impact of making the government less effective, they paradoxically undercut its legitimacy.

3.

John and MacAskill:

Third, the House should employ backwards pensioning: the pensions of House legislators should be determined some specified number of decades in the future, based on the House’s long-term impacts. One obvious way of evaluating the House’s impacts is by the extent to which objective performance metrics have been satisfied in the decades after their rule. An alternative evaluation mechanism would adjust pensions based on the retrospective attitudes of the future generations house in power at that future time. In this case, the reward scheme could have an intergenerational chaining effect. In deciding the pensions of past legislators, each house would be incentivized to consider how their pension choice will be evaluated by those who will in turn reward them, decades into the future, thus providing incentives for every house to consider the very long-term impacts of their decisions.

This definitely seems better than the status quo, but here are some potential problems:

  • This whole system relies on participants believing that this system of government will last - otherwise, a House member may not believe that the pension tied to his decisions will ever actually be paid out.
  • Backwards pensioning relies on the premise that future people will be able to correctly judge, in retrospect, which actions were good and which weren’t. But I think counterfactual history is a genuinely hard and wicked problem. To give a concrete example, suppose you were tasked today with rewarding the person or group of people most responsible for the industrial revolution or the spread of democracy. These events are so historically contingent and multicausal that it is hard to say who should get the credit.
  • Often, even in the medium term, it’s not clear whether something was good or bad. If you had polled the British people in the 19th century - is industrialization good, and is colonization bad? - they would probably have given the wrong answers. Of course, this is also an argument against the status quo political system, so it’s not a decisive reason to reject this proposal.

Forms of government

John and MacAskill’s proposed reforms largely involve modifications that would make a liberal democracy more considerate of the interests of future people - but we should not start from the premise that the optimal longtermist government will resemble how modern Western democracies are organized. Let us consider all the possibilities:

Democracy

All longtermist political reform discussion begins with the observation that while future people constitute the overwhelming mass of those who we affect through our actions and policies, they have no representation in our political system.

But one man’s modus ponens is another’s modus tollens. If the overwhelming majority of people our policies affect are disenfranchised anyways, then what’s the harm of disenfranchising that final percentage? 

The main justification for democracy is that voters are best situated to judge which leaders will create good outcomes for them. But what’s the reason for thinking that voters will know which leaders are best for their descendants?

It’s kind of funny that many of the sources of short-termist bias identified by John and MacAskill are just descriptions of democracy: polarization, election cycles, and voters’ lack of concern about future people.

That being said, I’m not convinced there’s a better alternative even if you are a strong longtermist, as you will see based on my reaction to futarchy and monarchy.

Futarchy

Robin Hanson’s idea of futarchy is worth considering:

In futarchy, democracy would continue to say what we want, but betting markets would now say how to get it. That is, elected representatives would formally define and manage an after-the-fact measurement of national welfare, while market speculators would say which policies they expect to raise national welfare. The basic rule of government would be:

When a betting market clearly estimates that a proposed policy would increase expected national welfare, that proposal becomes law.

Futarchy is intended to be ideologically neutral; it could result in anything from an extreme socialism to an extreme minarchy, depending on what voters say they want, and on what speculators think would get it for them.

Futarchy seems promising if we accept the following three assumptions:

  • Democracies fail largely by not aggregating available information.
  • It is not that hard to tell rich happy nations from poor miserable ones.
  • Betting markets are our best known institution for aggregating information.

This system might be better than status quo democracy, but a longtermist might have the following objections:

  • You may not apply a discount rate to the future, but markets do. If the government implements whatever policy is likely to maximize GDP in 2122, and the risk-free rate averages 2% during that time, then all your winnings at that point will only be worth about 13% of their inflation-adjusted face value.
    This means that people will not be strongly incentivized to improve market predictions on questions that resolve in hundreds or thousands of years. But it’s even worse than that. One common objection to futarchy is that wealthy special interests can get preferred policies passed by betting in favor of government actions that benefit them. For example, maybe Sergey Brin bets that giving Google a 100 billion dollar subsidy is going to double GDP growth in 2 years.
    The obvious response to this critique is that speculators in the prediction markets will just take Sergey’s lunch money and return the market to the efficient prediction. But suppose Sergey bets that giving Google a 100 billion dollar subsidy will double GDP growth in 100 years. The prediction market winnings are only going to be worth 13% of their face value to the speculators betting against Sergey, and even then, it will take a century to realize those earnings. Whereas Sergey gets that 100 billion now. So it becomes possible for him to manipulate the prediction markets, and therefore actual policy.
    The government could address this by paying winners their prediction market profit plus whatever returns they would have earned by investing their stake in T-bills over the same period (see the sketch after this list). If you are designing a futarchy that makes decisions based on long-term outcomes, you will need mechanisms like these to prevent market distortions.
  • Even in the short term, markets don’t always do a great job of predicting one-off tail events. And the problem is that predicting and responding to these once-in-a-generation cataclysms should perhaps be the main priority of a longtermist government.
    We saw an example of this early in COVID. Markets only began to tumble in mid-February of 2020, despite the fact that the spread and lethality of COVID had been known about for months at that point. And if you are the kind of person who believes in short AI timelines, then you will notice that markets haven’t priced in the possibility of a literally unprecedented growth explosion this century. They could do this by pushing up interest rates or by driving the stock of Microsoft, OpenAI’s largest investor, even higher.
    I don’t know enough about financial markets to say why this is - perhaps they are not well suited to pricing these kinds of speculative questions, or perhaps it’s just legally impossible to bet large sums of money based on predictions from online schizos - but it does make futarchy less compelling if you think one of the roles of a good government is to see these kinds of things coming.
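
To make the discounting worry concrete, here is a minimal sketch of the arithmetic (my own illustration; the dollar amounts and the T-bill top-up mechanism as coded here are assumptions rather than anything specified by Hanson or by John and MacAskill):

```python
# Present value of $1 of prediction-market winnings that resolve in 100 years,
# discounted at a 2% risk-free rate.
YEARS = 100
RISK_FREE = 0.02
present_value = 1 / (1 + RISK_FREE) ** YEARS
print(round(present_value, 2))  # ~0.14: roughly 13-14 cents on the dollar today

# One possible fix: pay winners their market profit plus the return they would
# have earned by parking their stake in T-bills, so that betting against a
# manipulator is no longer penalized by the century-long wait. (Hypothetical numbers.)
stake = 1_000_000   # amount bet against the manipulated position
profit = 500_000    # market profit when the question resolves
tbill_topup = stake * ((1 + RISK_FREE) ** YEARS - 1)
print(round(profit + tbill_topup))  # ~6.7 million paid out at resolution
```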

Monarchy

You will be (rightly) quite skeptical of monarchy. So let me try to steelman it a bit.

I often think of monarchy as a volatility-increasing mechanism. Singapore has perhaps the best governance on the planet, and it is more monarchical than Western democracies. But the worst regimes in recent history - Stalin’s, Mao’s, and the Kims’ - were also monarchies. Most periods of human history have required governments simply to adopt reasonable, common-sense policies, and in these situations it doesn’t make much sense to increase volatility.

The time when it does make sense to increase volatility is when you’re very likely fucked anyways, and you’re just looking to increase the chances you can survive - this is the logic of an out-of-the-money call option, whose value rises with volatility, though it can be illustrated more intuitively with the idea of pulling the goalie in hockey.
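
To illustrate the option analogy, here is a minimal sketch (my own, purely illustrative): under Black-Scholes, a deep out-of-the-money call is nearly worthless at low volatility and becomes much more valuable as volatility rises - which is exactly why a player who is probably losing anyway wants more variance.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def call_price(spot, strike, vol, t=1.0, rate=0.0):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# A call struck well above the current price is nearly worthless when
# volatility is low, but gains value quickly as volatility increases.
print(round(call_price(100, 150, vol=0.2), 1))  # ~0.2
print(round(call_price(100, 150, vol=0.6), 1))  # ~10.7
```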

If you think that creating a flourishing galactic civilization is a very precarious goal and that we are likely to fail, then it may make sense to increase volatility. If we pull a few more black balls out of Nick Bostrom’s urn, and our back is against the wall, perhaps it would be best to make someone who recognizes the risks we face our monarch.

The arguments against monarchy are quite obvious and also quite strong:

  • There is usually no accountability mechanism which works on a person who controls every lever of government. CEOs can’t have their board of directors executed. Monarchy often works for a time when you get lucky and select a competent and smart person - think Lee Kuan Yew in Singapore or Marcus Aurelius in Rome - but eventually you’ll roll bad dice and get stuck with an awful regime.
  • The possibility of lock-in goes way up with a monarch. Modern values and institutions are likely very far from optimal, but a dictator could lock them in in a way that shapes the entire future of humanity.

Open problems

Here are some questions which, if resolved, would shed a lot of light on which institutional reform proposals are most promising and how they could be made more effective:

  • Special interest capture: My greatest hesitation about most of these longtermist policy proposals is that there isn’t a tight feedback loop between our actions and their longterm consequences, and this leaves space for special interests or ideological groups to use longtermist language and institutions to advance policy goals they favor for independent reasons. How do you avoid this happening? For reasons explained above, I don’t think John and MacAskill have good answers to this problem.
  • Counterfactual history: Performance based pensions require the government to figure out how much better or worse outcomes would have been counterfactually had the decision makers acted differently. And this requires us to learn how to do counterfactual history better. How can we reason about counterfactual timelines in which some law wasn’t passed or was implemented differently?
  • Credible longterm government guarantees: Both futarchy and performance based pensioning require decision makers to believe that the government many decades or centuries into the future will honor the commitments it has made today. Are people likely to believe such guarantees, and how do we increase their confidence?
  • Global vs national governance: At what level of governance should longtermist policy be implemented? Many problems require global cooperation. But what if one country refuses to stop doing dual-use research? What level of response does this merit? Can and should the UN sanction a war against an enemy of future generations? It’s naive to assume that such considerations will never be relevant.
  • Long term forecasts and performance metrics: Many of John and MacAskill’s proposals require policy makers to forecast important longterm metrics or be judged retrospectively by them. But what should these metrics be? Tyler Cowen suggests sustainable economic growth in Stubborn Attachments. But how do we judge whether growth is sustainable? And is economic growth in the coming century the right metric anyways if most of the people who could live are thousands of years in our future? Do good outcomes 50 years down the line correlate to good outcomes 500 or 1000 years into the future? Either way, what is the performance metric that best predicts far future results?


Comments

Superb article, thanks so much for writing this.

If you haven't seen it, you might enjoy my and John Myers' article criticising the UK's Future Generations Bill, which made many of the same arguments you make against a proposed law that featured things like the Posterity Impact Statements.

Thanks Larks, that was a great post!

Thanks Dwarkesh, really enjoyed this.

This section stood out to me:

Instead, task a specific, identifiable agency with enforcing posterity impact statements. If their judgements are unreasonable, contradictory, or inconsistent, then there is a specific agency head that can be fired and replaced instead of a vast and unmanageable judiciary.

I've noticed this distinction become relevant a few times now: between wide, department-spanning regulation / initiatives on one hand; and focused offices / people / agencies / departments with a narrow, specific remit on the other. I have in mind that the 'wide' category involves checking for compliance with some desiderata, and stopping or modifying existing plans if they don't comply; while the 'focused' category involves figuring out how to proactively achieve some goal, sometimes by building something new in the world.

Examples of the 'wide' category are NEPA (and other laws / regulation where basically anyone can sue); or new impact assessments required for a wide range of projects, such as the 'future generations impact assessment' proposal from the Wellbeing of Future Generations Bill (page 7 of this PDF).

Examples of the 'focused' category are the Office of Technology Assessment, the Spaceguard Survey Report, or something like the American Pandemic Preparedness Plan (even without the funding it deserves).

I think my examples show a bias towards the 'focused and proactive' category but the 'wide regulation' category obviously is sometimes very useful; even necessary. Maybe one thought is that concrete projects should often precede wide regulation, and wide regulation often does best when it's specific and legible (i.e. requiring that a specific safety-promoting technology is installed in new builds). We don't mind regulation that requires smoke alarms and sprinklers, because they work and they are worth the money. It's possible to imagine focused projects to drive down costs of e.g. sequencing and sterilisation tech, and then maybe following up with regulation which requires specific tech be installed to clear standards, enforced by a specific agency.

Great point Fin! 

Though one thing I should have mentioned explicitly in the post is that being illegible and distributed is just one failure mode of regulation, certainly not the only one. For example, many US cities have building height limits which economists have estimated are causing billions in deadweight loss, higher rents, etc. But a building height limit is very legible and clear. Still, somehow the relevant government bodies are often too captured by concentrated activist groups and don't consider the expected value to the broader public.

Btw, I think The Power Broker is an interesting book to read regarding focused projects. There are many legitimate criticisms of Robert Moses, but still it is remarkable how he basically built a startup within the NY government that was much more competent, efficient, and visionary than the rest of the political system.

It's possible to imagine focused projects to drive down costs of e.g. sequencing and sterilisation tech, and then maybe following up with regulation which requires specific tech be installed to clear standards, enforced by a specific agency.

Is there a good read regarding regulatory proposals for these technologies in particular? I worry that wide regulation around sequencing in particular might slow down tech that I think will be good, like CRISPR therapies or embryo selection. Or maybe that's a category error?

Almost any intervention that slows down embryo selection is a net negative for the world, regardless of what other positives come along with it. 

 

Embryo selection is probably the highest-ROI cause around for EA, and it's possible right now; it is crazy that Hsu is not getting more attention.

I agree! Not sure if you saw my interview of Steve Hsu on my podcast, where we get deep into the weeds on embryo selection: https://www.dwarkeshpatel.com/p/steve-hsu

You got him to talk about the gwern analysis! 

Just skimming the subjects, I can tell that this will be the best interview of him I've seen so far, congratulations on getting him on. I am now a subscriber, and listening. 

If you post another interview of him I will buy a sub on your substack for sure

Don't have paid subs, but thank you! Glad you enjoyed!

Defending optimistic worldviews

In the section on the sources of short-termist biases, John and MacAskill write:

Cognitive biases include actors’ tendencies to respond more strongly to vivid risks than to information acquired from abstract, general social scientific trends, as well as over-optimism about their ability to control and eliminate risks under situations of uncertainty. The attention that political actors pay to the future and to the nearby past are asymmetric because voters and many other political actors “can readily observe past economic performance but have little information about future conditions.” Thus, to economize on cognitive effort, many political actors forego the task of making predictions about the future and choose policies which have worked in the recent past. [emphasis mine]

What John and MacAskill are describing here doesn’t sound like a bias - it sounds like an actual political philosophy, one which people like Matt Ridley or Steven Pinker would probably endorse. There are many reasonable people who believe that we should extrapolate from past performance rather than “abstract, general social scientific trends”, and that we should be more optimistic with regards to our ability to deal with risks in due time rather than rely on hastily implemented policies.

The people who believe this might be wrong, but you have to actually argue that they’re mistaken instead of just dismissing their worldview as a cognitive bias. Arguably their philosophy was a useful corrective to issues in the past that involved long-term trends. The people who responded to the concrete overpopulation scare in the 20th century with vague optimism about our ability to feed more people were correct, whereas the people who had “abstract social scientific” reasons for expecting resources to run out were wrong, and disastrously so given the mass sterilization and population control programs they inspired in India and China. (Obviously, I don’t think John and MacAskill would endorse those atrocities - my point is simply that what they call a cognitive bias would have prevented all that unnecessary suffering.)

To zoom out a bit, we should be careful that we don’t implement longtermist reform in a way that dismisses the optimistic philosophy of governance which places greater weight on past experiences.

I disagree with this, primarily for two reasons:

  1. I disagree with the presumption that they were rational in being optimistic: while there is real progress in history (at least if we count only humans), I don't agree with the implication that we should expect a bright future. I would argue that technological x-risk has wiped out all the expected value of the future - especially under a longtermist view, which assumes the future is positive and therefore makes x-risk reduction our main priority. If the expected value of the future is negative, then moral-circle expansion is the most important thing to do.

  2. I disagree with the implication that the population bomb didn't happen, ergo the sterilization was wrong. This is a classic case of hindsight bias, and there was no mitigation against this bias. More exhaustively, you need to make the claim that the population bomb couldn't have happened, or was unlikely to happen, in order for your argument to go through. A longer comment by EricHerboso summarizes the miracles that were necessary to defuse the population bomb.

There was good reason back then to believe that overpopulation was a real problem whose time would come relatively soon. If it weren't for technological breakthroughs with dwarf wheat and IR8 rice varieties, spearheaded by Norman Borlaug and others, our population would have seriously outstripped our ability to grow food by this point -- the so-called Malthusian trap.

Using overpopulation as an example here would be akin to using something like global climate change as an example in the present, if it turns out that a technological breakthrough in the next 5-10 years completely obviates the need for us to be careful about greenhouse gas release in the future.

Because of this, I don't think overpopulation as a cause area would make for the best example that you're trying to make here.

Thanks for the comment. 

I disagree with the presumption that they were rational in being optimistic: while there is real progress in history (at least if we count only humans), I don't agree with the implication that we should expect a bright future. I would argue that technological x-risk has wiped out all the expected value of the future - especially under a longtermist view, which assumes the future is positive and therefore makes x-risk reduction our main priority. If the expected value of the future is negative, then moral-circle expansion is the most important thing to do.

Or just make society wealthier overall (aka maximize economic growth) so we can enjoy these last few hundred years more. Nonetheless, I don't share your pessimism.

I disagree with the implication that the population bomb didn't happen, ergo the sterilization was wrong. This is a classic case of hindsight bias, and there was no mitigation against this bias. More exhaustively, you need to make the claim that the population bomb couldn't have happened, or was unlikely to happen, in order for your argument to go through.

But my point is precisely that we couldn't have known in advance what those solutions would look like, because knowledge growth is unpredictable. And given the fact that we do end up solving many of these seemingly devastating problems, we should update in favor of a vague optimism about our future capabilities to deal with problems. I give the example of peak oil worries later in this post:

In the 70s, it was a common belief among the relevant technical experts that we would hit peak oil by the 90s. They could not have anticipated the new technologies that made more oil reserves accessible to us. If there was a longtermist research institute within the government at that time, it would have recommended that we stock up on foreign oil, and the end result of this would have been unaffordable transportation and heating for the poorest people on the planet.