
Stephen Clare

Research Manager @ Centre for the Governance of AI
4558 karma · Working (6-15 years)

Bio

Previously I've been a Research Fellow at the Forethought Foundation, where I worked on What We Owe The Future with Will MacAskill; an Applied Researcher at Founders Pledge; and a Program Analyst for UNDP.

Comments (233)

  • Cost-effectiveness estimates generally suggest that, for most reasonable assumptions about the moral weight and degree of suffering of animals, animal welfare interventions are the most cost-effective
  • Animal welfare is more neglected than global health, but not (again for reasonable assumptions about how much animal wellbeing matters) proportionally less important

I think focusing on explosive AI growth has grown in status over the last two years. Two years ago, few people other than Tom Davidson were focusing on it. Since then, Utility Bill has decided to focus on it full-time, Vox has written about it, it's a core part of the Situational Awareness model, and Carl Shulman talked about it for hours in influential episodes of the 80K and Dwarkesh podcasts.

Thanks for this, really helpful! For what it's worth, I also think Leopold is far too dismissive of international cooperation.

You've written there that "my argument was that Aschenbrenner was 'dangerous'". I definitely agree that securitisation (and technology competition) often raises risks.[1] I think we have to argue further, though, that securitisation is more dangerous on net than the alternative: a pursuit of international cooperation that may, or may not, be unstable. That, too, may raise some risks, e.g. proliferation and stable authoritarianism.

  1. ^

    Anyone interested can read far more than they probably want to here.

One of the weaker parts of the Situational Awareness essay is Leopold's discussion of international AI governance.

He argues that the notion of an international treaty on AI is "fanciful", claiming that:

  • It would be easy to "break out" of treaty restrictions
  • There would be strong incentives to do so
  • So the equilibrium is unstable

That's basically it: international cooperation gets about 140 words of analysis in the 160-page document.

I think this is seriously underargued. Right now it seems harmful to propagate a meme like "International AI cooperation is fanciful". 

This is just a quick take, but I think it's the case that:

  • It might not be easy to break out of treaty restrictions. Of course it will be hard to monitor and enforce a treaty. But there's potential to make it possible through hardware mechanisms, cloud governance, inspections, and other approaches we haven't even thought of yet. Lots of people are paying attention to this challenge and working on it.
  • There might not be strong incentives to do so. Decisionmakers may take the risks seriously and calculate that the downsides of an all-out race exceed the potential benefits of winning. Credible benefit-sharing and shared decision-making institutions may convince states they're better off cooperating than trying to win a race.
  • International cooperation might not be all-or-nothing. Even if we can't (or shouldn't!) institute something like a global pause, cooperation on narrower issues to mitigate threats from AI misuse and loss of control could be possible. Even in the midst of the Cold War, the US and USSR managed to agree on arms control, non-proliferation, and limits on anti-ballistic missile technology.

(I critiqued a critique of Aschenbrenner's take on international AI governance here, so I wanted to clarify that I actually do think his model is probably wrong here.)

Vasco, how do your estimates account for model uncertainty? I don't understand how you can put some probability on something being possible (i.e. p(extinction|nuclear war) > 0), but end up with a number like 5.93e-12 (i.e. 1 in ~170 billion). That implies an extremely, extremely high level of confidence. Putting ~any weight on models that give higher probabilities would lead to much higher estimates.
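
To make the point concrete, here's a minimal sketch with made-up numbers. Only the 5.93e-12 figure comes from the estimate in question; the alternative model's probability and the weight placed on it are assumptions for illustration:

```python
# Illustration of how model uncertainty swamps a very small point estimate.
# Only the 5.93e-12 headline figure comes from the estimate under discussion;
# the alternative probability and the mixture weight are assumed.

p_headline = 5.93e-12    # the estimate being questioned
p_alternative = 1e-6     # what a more pessimistic model might say (assumed)
w_alt = 0.01             # weight placed on that alternative model (assumed)

p_mixture = (1 - w_alt) * p_headline + w_alt * p_alternative
print(f"{p_mixture:.2e}")  # ~1.00e-08, roughly 1,700x the headline figure
```

Even a 1% weight on the higher-probability model completely dominates the mixture, so reporting 5.93e-12 as your all-things-considered estimate implies near-certainty that every such model is wrong.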

Thanks for writing this; it's clearly valuable to advance a dialogue on these incredibly important issues.

I feel an important shortcoming of this critique is that it frames the choice between national securitization and macrosecuritization as a choice between narratives, without considering incentives. I think Leopold gives more consideration to alternatives than you give him credit for, but argues that macrosecuritization is too unstable an equilibrium:

Some hope for some sort of international treaty on safety. This seems fanciful to me. The world where both the CCP and USG are AGI-pilled enough to take safety risk seriously is also the world in which both realize that international economic and military predominance is at stake, that being months behind on AGI could mean being permanently left behind. If the race is tight, any arms control equilibrium, at least in the early phase around superintelligence, seems extremely unstable. In short, "breakout" is too easy: the incentive (and the fear that others will act on this incentive) to race ahead with an intelligence explosion, to reach superintelligence and the decisive advantage, too great.

I also think you underplay the extent to which Leopold's focus on national security is instrumental to his goal of safeguarding humanity's future. You write: "It is true that Aschenbrenner doesn’t always see himself as purely protecting America, but the free world as a whole, and probably by his own views, this means he is protecting the whole world. He isn’t, seemingly, motivated by pure nationalism, but rather a belief that American values must ‘win’ the future." (emphasis mine).

First, I think you're too quick to dismiss Leopold's views as you state them. What's more, Leopold explicitly disavows the framing you attribute to him:

To be clear, I don’t just worry about dictators getting superintelligence because “our values are better.” I believe in freedom and democracy, strongly, because I don’t know what the right values are [...] I hope, dearly, that we can instead rely on the wisdom of the Framers—letting radically different values flourish, and preserving the raucous plurality that has defined the American experiment.

Both of these claims (that international cooperation or a pause is an unstable equilibrium, and that the West maintaining an AI lead is more likely to produce a future with free expression and political experimentation) are empirical. Maybe you'd disagree with them, but then I think you need to argue that this model is wrong, not that he's just chosen the wrong narrative.

This is beautiful, Teps. Thanks for sharing.

One of the most common lessons people said they learned from the FTX collapse is to pay more attention to the character of people with whom they're working or associating (e.g. Spencer Greenberg, Ben Todd, Leopold Aschenbrenner). I agree that some update in this direction makes sense. But it's easier to do this retrospectively than it is to think about how, specifically, it should affect your decisions going forward.

If you think this is an important update, too, then you might want to think more about how you're going to change your future behaviour (rather than how you would have changed your past behaviour). Who, exactly, are you now distancing yourself from going forward?

Remember that the challenge is knowing when to stay away merely because someone seems suss, not because you have strong evidence of wrongdoing.

I'm usually very against criticizing other people's charitable or philanthropic efforts. The first people to be criticized should be those who don't do anything, not those who try to do good.

But switching from beef to other meats (at least chicken, fish, or eggs; I'm less sure about other meats) is so common among socially- and environmentally-conscious people, and such a clear disaster on animal welfare grounds, that it's worth discussing.

Even if we assume the reducetarian diet emits the same GHGs as a plant-based diet, you'll save about 0.4 tonnes of CO2e per year, the equivalent of a $4 donation (in expectation) to Founders Pledge's climate fund. Meanwhile, for every beef meal you replace with chicken, roughly 200x more animals have to be slaughtered.

I'd bet that for ~any reasonable estimate of the damages of climate change and the moral value of farmed animal lives, this math does not work out favourably.
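
Here's a rough back-of-the-envelope sketch of that bet. Only the ~$4/year climate equivalence comes from the figures above; the number of meals, chickens per meal, and the dollar value placed on a chicken's life are assumptions I've made up for illustration:

```python
# Back-of-the-envelope comparison of the climate benefit vs. the animal
# welfare cost of swapping beef for chicken. Only the ~$4/year climate
# equivalence comes from the figures above; everything else is assumed.

climate_benefit_usd_per_year = 4.0    # ~0.4 tCO2e/yr valued via an offsetting donation

beef_meals_replaced_per_year = 100    # assumed: ~2 beef meals/week switched to chicken
chickens_per_meal = 0.5               # assumed: one chicken yields ~2 meals
extra_chickens_per_year = beef_meals_replaced_per_year * chickens_per_meal  # ~50

value_per_chicken_usd = 1.0           # assumed willingness to pay to spare one chicken
animal_cost_usd_per_year = extra_chickens_per_year * value_per_chicken_usd  # ~$50

print(animal_cost_usd_per_year > climate_benefit_usd_per_year)  # True here
```

On these assumptions the animal welfare cost is more than 10x the climate benefit, and the comparison only flips if you value sparing a chicken a factory-farmed life at less than about eight cents.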
