
Gideon Futerman

2138 karma

Bio

(Slowly) shifting from x-risk into insect welfare. Currently working on slowing AI, SRM, and GCR.

How I can help others

Reach out to me if you have questions about SRM/solar geoengineering.

Comments (185)

Pretty sure EA basically invented that (yes, people were working on stuff before then and outside of it, but that still seems different to 'reinventing the wheel').

I see no legitimate justification for attitudes that would consider humans important enough that global health interventions would beat out animal welfare, particularly given the sheer number and scale of invertebrate suffering. If invertebrates are sentient, it seems animal welfare definitely could absorb $100m and remain effective on the margin, and probably could even if they are not (which seems unlikely). The main reason I am not fully in favour is that animal welfare interacts far more strongly with population ethics than global health work does, and given the significant uncertainties involved in population ethics, I can't be sure these don't at least significantly reduce the benefits of AW over GH work.

I am unsure how long it is possible for an indefinite moratorium to last, but I think I probably fall, and increasingly fall, much closer to supporting one than I guess you do.

In answer to these specific points, I basically see maintaining a moratorium as an example of Differential Technology Development. As long as the technologies we can use to maintain a moratorium (both physical and social technologies) outpace the rate of progress towards ASI, we can maintain the moratorium. I do think this would require drastically slowing a specific subset of scientific progress in the long term, but I am not convinced the slowdown would need to be as general as you suggest. I guess this is some mixture of both 1 and 2, although in both cases I think this means neither position ends up being so extreme.

In answer to your normative judgement, if 1 allows a flourishing future, which I think a drastically slowed pace of progress could, then it seems desirable from a longtermist perspective. I'm also really unsure that, given sufficient time, we can't access significant parts of technology space without an agentic ASI, particularly if we sufficiently increase our defences against agentic ASI using technologies like narrow AI. It also strikes me that assigning significant normative value to accessing all (or even extremely large) areas of science and technology space reflects a value set that treats 'progress'/transhumanism as an end in itself, rather than as a means to an end (as totalist utilitarians with transhumanist bents do).

For me, it's really hard to tell how long we could hold a moratorium for, and for how long one would be desirable. But certainly, if feasible, timescales well beyond decades seem very desirable.

I do think we have to argue that national securitisation is more dangerous than humanity securitisation or non-securitised alternatives. It's important to note that whilst I explicitly discuss humanity macrosecuritisation, there are other alternatives that Aschenbrenner's national securitisation compromises as well, as I briefly argue in the piece.

Of course, I have not and was not intending to provide a complete argument for this (it is only 6,000 words), although I think I go further toward proving it than you give me credit for here. As I summarise in the piece, the Sears (2023) thesis provides a convincing argument from empirical examples that national securitisation (and a failure of humanity macrosecuritisation) is the most common factor in the failure of Great Powers to adequately combat existential threats (eg the failure of the Baruch Plan/international control of nuclear energy, the promotion of technology competition around AI over arms agreements despite the threat of nuclear winter, the BWC, the Montreal Protocol). Given this limited but still significant data that I draw on, I do think it is unfair to suggest that I haven't provided an argument that national securitisation is more dangerous on net. Moreover, as I address in the piece, Aschenbrenner fails to provide any convincing track record of success for national securitisation, whilst his historical analogies (Szilard, Oppenheimer and Teller) all indicate he is pursuing a course of action that probably isn't safe. Whilst of course I didn't go through every argument, Section 1 argues that national securitisation isn't inevitable, and Section 2 argues that, at least from historical case studies, humanity macrosecuritisation is safer than national securitisation. The other sections show why I think Aschenbrenner's argument is dangerous rather than just wrong, and how he ignores other important factors.

The core of Aschenbrenner's argument is that national securitisation is desirable and thus we ought to promote and embrace it ('see you in the desert'). Yet he fails to engage with the generally poor track record of national securitisation at promoting existential safety, and fails to provide a legitimate counter-argument. He also, as we both acknowledge, fails to adequately deal with possibilities for international collaboration. His argument for why we need national securitisation seems premised on three main ideas: it is inevitable (/there are no alternatives); the values of the USA 'winning' the future are our most important concern (whilst alignment is important, I do think for Aschenbrenner it is secondary to this); and the US natsec establishment is the way to ensure we get a maximally good future. I think Aschenbrenner is wrong on the first point (and certainly fails to adequately justify it). On the second point, he overestimates the importance of the US winning relative to the difficulty of alignment, and his argument fails to deal with many of the thorny questions here (what about non-humans? how does this freedom survive in a world of AGI?). On the third point, he goes some way to justifying why the US natsec establishment would be more likely to 'win' a race, but fails to show why such a race would be safe (particularly given its track record). He also fails to argue that natsec would allow the values we care about to be preserved (US natsec doesn't have the best track record with reference to freedom, human rights, etc).

On the point about the instability of international agreements: I do think this is the strongest argument against my model of humanity macrosecuritisation leading to a regime that stops the development of AGI. However, as I allude to in the essay, this isn't the only alternative to national securitisation. Since publishing the piece, this is the biggest mistake in reasoning (and I'm happy to call it that) that I see people making. The chain of logic that goes 'humanity macrosecuritisation leading to an agreement would be unstable, therefore promoting national securitisation is the best course of action' is flawed; one needs to show that the plethora of other alternatives (depoliticised/politicised/riskified decision-making, or humanity macrosecuritisation without an agreement) are not viable, and Aschenbrenner doesn't address this at all. I also, as I think you do, see Aschenbrenner's argument against an agreement as containing very little substance; I don't mean to say it's obviously wrong, but he hardly even argues for it.

I do think stronger arguments for the need to nationally securitise AI could be provided, and I also think they would probably be wrong. Similarly, stronger arguments than mine can be provided for why we need to humanity-macrosecuritise superintelligence and for how international collaboration on controlling AI development could address some of the concerns one may have (I am working on something like this). But the point of this piece is to engage with the narratives and arguments in Aschenbrenner's piece. I think he fails to justify national securitisation whilst also taking action that endangers us (and I'm hearing from people connected to US politics that the impact of his piece may actually be worse than I feared).

On the stable totalitarianism point, I also think it's useful to note that it is not at all obvious that the risk of stable totalitarianism is greater under some form of global collaboration than under a nationally securitised race.

On these three points:

  • Yes, the Project is a significant possibility. People like Aschenbrenner make it more likely to happen, and we should be trying to oppose it as much as possible. Certainly, there is a major 'missing mood' in Aschenbrenner's piece (and the interview), where he seems to greet the possibility of the Project with glee.
  • I'm actually pretty unsure whether improving cybersecurity is very important. The benefits are well known. However, if you don't (or can't) improve cybersecurity, then advancing AI becomes much more dangerous with much less upside, so racing becomes harder. With worse cybersecurity, a pause may be more likely. Basically, I'm unsure, and I don't think it's as simple as most people think. It's also not obvious to me that, for example, America directly sharing model weights with China wouldn't be a positive thing.
  • Certainly, according to my ethics I am not 'neutral pro-humanity', but rather care about a flourishing and just future for all sentient beings. On this axis, I do think the difference is more marginal than many would expect. I would probably guess it would be better for the US/the free world to have relatively greater power, although with some caveats (eg I'm not sure I trust the CIA very much to have a large amount of control). I think both groups 'as-is', particularly in a nationally securitised 'race', are rather far from the optimal, and this difference is very morally significant. So I think I'm definitely MUCH more concerned than Aschenbrenner is about avoiding a nationally securitised race (also because I'm more concerned with misalignment than I think he is).

Thanks for this reply Stephen, and sorry for my late reply; I was away.

I think it's true that Aschenbrenner gives (marginally) more consideration than I gave him credit for; I'm not actually sure how I missed that paragraph, to be honest! Even then, whilst there is some merit to that argument, I think he needs to justify his dismissal of an international treaty much better (along similar lines to your shortform piece). As I argue in the essay, such a lack of stability requires a particular reading of how states act; for example, if we buy a form of defensive realism, states may in fact be more inclined to reach a stable equilibrium. Moreover, as I argue, I think Aschenbrenner fails to acknowledge how his ideas on this may well become a self-fulfilling prophecy.

I actually think I just disagree with your characterisation of my second point, although that could well be a flaw in my communication, and if so I apologise. My argument isn't that values of freedom and democracy, or even a narrower form of 'American values', wouldn't be better for the future (see below for more discussion of that); it's that national securitisation has a bad track record of promoting collaboration and dealing with extreme risk, and we have good reason to think it may be bad in the case of AI. So even if Aschenbrenner frames it not as national securitisation for the sake of nationalism but as national securitisation for the sake of all humanity, the impacts will be the same. The point of that paragraph was simply to preempt exactly the critique you make. I also think it's clear that Aschenbrenner in his piece is happy to conflate those values with American nationalism/dominance (eg 'America must win'), so I'm not sure him making this distinction actually matters.

I also am probably much less bullish on American dominance than Aschenbrenner is. I'm not sure the American national security establishment actually has a good track record of preserving a 'raucous plurality', and if (as Aschenbrenner wants) we expect superintelligence to be developed through that institution, I'm not overly confident in how good the outcome will be. Whilst I am no friend of dictatorships, I'm also unconvinced that, if one cares about raucous pluralism, US dominance, certainly to the extent that Aschenbrenner envisions, is necessarily a good thing. Moreover, even in American democracy, the vast majority of moral patients aren't represented at all. I'm essentially unconvinced that the benefits of America 'winning' a nationally securitised AI race anywhere near outweigh the geopolitical risk, the misalignment risk, and, most importantly, the risk of not taking our time to construct a mutually beneficial future for all sentient beings. I have put this paragraph quite crudely and would be happy to elaborate further, although it isn't actually central to my argument.

I think it's wrong to say that my argument doesn't work without significant argument against those two premises. Firstly, my argument was that Aschenbrenner's piece was 'dangerous', which required highlighting why the narrative choice was problematic. Secondly, yes, there is more to do on those points, but given Aschenbrenner's failure to give in-depth argumentation on them, I thought they would be better dealt with as their own pieces (which I may or may not write). In my view, the most important aspect of the piece was Aschenbrenner's claim that national securitisation is necessary to secure the safest outcomes, and I do feel the piece was broadly successful in arguing that this is a dangerous narrative to propagate. I do think that if you hold Aschenbrenner's assumptions strongly, namely that cooperation is very difficult, alignment is easy-ish, and the most important thing is an American AI lead because this leads to a maximally good future by maximising free expression and political expression, then my argument is not convincing. I do, however, think this model is based on some rather controversial assumptions that, given the dangers involved, are woefully insufficiently justified by Aschenbrenner in his essay.

One final point: as I mention in the essay, it is still entirely non-obvious that national securitisation is the best frame even if a pause is impossible, or, even more weakly, if a pause is an unstable equilibrium.

Non-consequentialist effective altruism/animal welfare/cause prio/longtermism

I assume this is an accidental misspelling of Quakerism.

There seems to be this belief that arthropod welfare is some ridiculous idea only justified by extreme utilitarian calculations, and that loads of EA animal welfare money goes to it at the expense of many other things, and this just seems really wrong to me. Firstly, arthropods hardly get any money at all; they are possibly the most neglected, and certainly among the most neglected, areas of animal welfare. Secondly, the argument for arthropod welfare is essentially the same as your classic anti-speciesist arguments: there are no morally relevant differences between arthropods and other animals that justify not equally considering their interests (or, if you want to be non-utilitarian, not considering them equally). Insects can feel pain (or certainly, the evidence is probably strong enough that they would pass the bar of sentience under UK law) and have other sentient experiences, so why would we not care about their welfare? Indeed, non-utilitarian philosophers also take this idea seriously: Christine Korsgaard, one of the most prominent Kantian philosophers today, sees insects as part of the circle of animals under moral consideration, and Nussbaum's capabilities approach extends to sentient animals, which I think we have good reason to believe includes insects. Many insects seem to have potentially rich inner lives: they have things that go well and badly for them, things they strive to do, feelings of pain, etc. What principled reason could we give for their exclusion that wouldn't be objectionably speciesist? Also, all arthropod welfare work at present is about farmed animals; those farmed animals just happen to be arthropods!

Some useful practical ideas that could emerge:

  • Inform what welfare requirements ought to be put into law when farming insects
  • Inform and lobby the insect farming industry to protect these welfare requirements (eg corporate campaigns); do this in a similar way to how decapod welfare research has informed the work of the Shrimp Welfare Project
  • Understand the impacts of pesticides on insect welfare, and use this to lobby for pesticide substitutes
  • Improve the evidence base on insect sentience so that it can be incorporated into law (although I think the evidence is probably at least as strong as for decapods, which are already recognised as sentient under UK law).

Insect suffering is real and happening now, and there are a lot of practical things we could do about it; dismissing it as the concern of 'head in the clouds' philosophers seems misguided to me.
