This other Ryan Greenblatt is my old account[1]. Here is my LW account.
Account lost to the mists of time and expired university email addresses.
I'm worried that in practice you're conflating these bullets. Your post on precise Bayesianism seems to focus substantially on empirical aspects of the current situation (potential arguments for (2)), but my understanding is that you actually think the imprecision is terminally correct, though partially motivated by observations of our empirical reality. But I don't think I want to motivate my terminal philosophy based on what we observe in this way!
(Edit: TBC, I get that you understand the distinction between these things and that your post discusses it; I just think that you don't really make arguments against (1) except to imply that other things are possible.)
I would also push back against the view that we need to be "confident" that such systems can consent before proceeding. Ordinary levels of empirical evidence about whether these systems routinely resist confinement and control would be sufficient to move me in either direction; I don't think we need to have a very high probability that our actions are moral before proceeding.
For reference, my (somewhat more detailed) view is:
Rather than relying on rigid or abstract notions of societal consent or collective rights violations, I prefer evaluating these large-scale developments using a utilitarian cost-benefit approach. And as I’ve argued elsewhere, I think the benefits from accelerated technological and economic progress significantly outweigh the potential risks of violent disempowerment from the perspective of currently existing individuals. Therefore, I consider it justified to actively pursue AI development despite these concerns.
This is only tangentially related, but I'm curious about your perspective on the following hypothetical:
Suppose that we did a sortition with 100 English-speaking people (uniformly selected over people who speak English and are literate, for simplicity). We task this sortition with determining what tradeoff to make between risk of (violent) disempowerment and accelerating AI, and also with figuring out whether globally accelerating AI is good. Suppose this sortition operates for several months and talks to many relevant experts (and reads applicable books, etc.). What conclusion do you think this sortition would come to? Do you think you would agree? Would you change your mind if this sortition strongly opposed your perspective here?
My understanding is that you would disregard the sortition because you put most/all weight on your best guess of people's revealed preferences, even if they strongly disagree with your interpretation of their preferences and don't change their minds after trying to understand your perspective. Is this right?
A more appropriate moral default, given our current evidence, is that AI slavery is morally wrong and that the abolition of such slavery is morally right. This is the position I take.
To be clear, I agree, and this is one reason why I think AI development in the current status quo is unacceptably irresponsible: we don't even have the ability to confidently know whether an AI system is enslaved or suffering.
I think the policy of the world should be that if we can't confidently determine either that an AI system consents to its situation or that it is sufficiently weak that the notion of consent doesn't make sense, then training or using such systems shouldn't be allowed.
I also think that the situation is unacceptable because the current course of development poses large risks of humans being violently/non-consensually disempowered, without humans having any ability to robustly secure longer-run property rights.
In a sane regime, we should ensure high confidence in avoiding large-scale rights violations or suffering of AIs and in avoiding violent/non-consensual disempowerment of humans. (If people broadly consented to a substantial risk of being violently disempowered in exchange for the potential benefits of AI, that could be acceptable, though I doubt this is the current situation.)
Given that it seems likely that AI development will be grossly irresponsible, we have to think about what interventions would make this go better on the margin. (Aggregating over these different issues in some way.)
If LLMs are adopting poor learning heuristics and not generalizing, AI2027 is predicting a weaker kind of "superhuman" coder — one that can reliably solve software tasks with clean feedback loops but will struggle on open-ended tasks!
No, AI 2027 is predicting a kind of superhuman coder that can automate even messy, open-ended research engineering tasks. The forecast attempts to account for gaps between automatically scoreable, relatively clean + green-field software tasks and all tasks. (Though the adjustment might be too small in practice.)
If LLMs can't automate such tasks (and nothing else can), then this wouldn't count as the superhuman coder milestone happening.
I think your estimate for how an invasion of Taiwan affects catastrophic/existential risks fails to account for the most important effects, in particular, how an invasion would affect the chip supply. AI risk seems to me like the dominant source of catastrophic/existential risk (at least over the relevant period) and large changes in the chip supply from a Taiwan invasion would substantially change the situation.
I also think it's a complex question whether a more aggressive and adversarial stance from the US on AI would actually be helpful rather than counterproductive (as you suggest in the post), and whether an invasion of Taiwan actually makes a deal related to AI more likely (via a number of factors) rather than less.
This isn't to make any specific claim about what the right estimate is; I'm just claiming that your estimate doesn't seem to me to cover the key factors.
This argument neglects improvements in speed and capability, right? Even if parallel labor and compute are complements, shouldn't we expect that increased speed or capability can substitute for compute? (It just isn't possible for AI companies to buy much of this.)
(I'm not claiming this is the biggest problem with this analysis, just noting that it is a problem.)
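To sketch why I think this matters (a toy CES-style model of my own, not something from the post): suppose research output is

$$R = \left(\alpha\, N^{\rho} + (1-\alpha)\,(gC)^{\rho}\right)^{1/\rho}, \qquad \rho < 0,$$

where $N$ is the amount of parallel AI labor, $C$ is compute, and $g$ captures how much each unit of compute accomplishes given the speed/capability of the AIs using it. With $\rho < 0$, labor and compute are gross complements, so scaling up $N$ alone saturates at roughly $(1-\alpha)^{1/\rho}\, gC$ once compute binds; but raising $g$ lifts that ceiling directly, so speed/capability gains can substitute for compute even when additional parallel labor can't.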
That might be true, but it doesn't make it not a strawman. I'm sympathetic to thinking it's implausible that Mechanize would be the best thing to do on altruistic grounds even if you share views like those of the founders. (Because there is probably something more leveraged to do, and because some weight should go on cooperativeness considerations.)
Recently, various groups successfully lobbied to remove the proposed moratorium on state AI bills. This involved a surprising amount of success while competing against substantial investment from big tech (e.g., Google, Meta, Amazon). I think people interested in mitigating catastrophic risks from advanced AI should consider working at these organizations, at least to the extent that their skills/interests are applicable. This is both because they could often directly work on substantially helpful things (depending on the role and organization) and because this would yield valuable work experience and connections.
I worry somewhat that this type of work is neglected due to being less emphasized and seeming lower status. Consider this an attempt to make this type of work higher status.
Pulling organizations mostly from here and here, we get a list of orgs you could consider working at (specifically on AI policy):
To be clear, these organizations vary in the extent to which they are focused on catastrophic risk from AI (from not at all to entirely).