Superforecaster, former philosophy PhD, Giving What We Can member since 2012. Currently trying to get into AI governance.
That's pretty incomprehensible to me even as a considerable skeptic of the rapid scenario. Firstly, you have experts giving a 23% chance and it's not moving you up even to, say, over 1 in 100,000. And the JFK scenario is surely a hell of a lot less likely than that: even if his assassination was faked, despite there literally being a huge crowd who saw his head get blown off in public, he would have to be 108 to still be alive. Secondly, in 2018, AI could, to a first approximation, do basically nothing outside of highly specialized uses like chess computers that did not use current ML techniques. Meanwhile, this year, I, a philosophy PhD, asked Claude about an idea that I had seriously thought about turning into a paper one day back when I was still in philosophy, and it came up with a very clever objection that I had not thought of myself. I am fairly, even if not 100%, sure that this objection is not in the literature anywhere. Given that we've gone from nothing to "high-quality philosophical arguments at times" in about 7 years, that there are some moderately decent reasons for thinking models good at AI research tasks could set off a positive feedback loop, and that far more money and effort is being thrown at AI than ever before, it seems hard to me to be 99,999-in-100,000 sure that we won't get AGI by 2030, even though the distance to cross is still very large and current success on benchmarks is somewhat misleading.
There is an ambiguity here about "capabilities" versus deployment, to be fair. Your "that will not happen" seems somewhat more reasonable to me if we are requiring that the AIs are actually deployed and doing all this stuff, versus merely that models capable of doing this stuff have been created. I think it was the latter we were forecasting, but I'm not 100% certain.
https://leap.forecastingresearch.org/ The stuff is all here somewhere, though it's a bit difficult to find all the pieces quickly and easily.
For what it's worth, I think the chance of the rapid scenario is considerably less than 23%, but a lot more than 0.1%. I can't remember the number I gave when I did the survey as a superforecaster, but maybe 2-3%? But I do think the chances get rather higher by 2040, and it's good we are preparing now.
 ". I think an outlandish scenario like we find out JFK is actually still alive is more likely than that"
If you really mean this literally, I think it is extremely obviously false, in a way that I don't think merely 0.1% is.
For what it's worth, I think "less than 0.1% likely by 2032" is PROBABLY also not in line with expert opinion. The Forecasting Research Institute, where I currently work has just published a survey of AI experts and superforecasters on the future of AI, as part of our project LEAP, the Lognitudinal Expert Panel on AI. In it, experts and supers median estimate was a 23% chance of the survey's "rapid scenario" for AI progress by 2030 would occur. Here's how the survey  described the rapid scenario:
"By the end of 2030, in the rapid-progress world, AI systems are capable of competing with the best human minds and workers, and can surpass them.
Human creativity and leadership remain valued, but mostly for setting high-level vision--day-to-day execution can be left to silicon-based systems. Autonomous researchers can collapse years-long research timelines into days, weeks, or months, creating game-changing technologies, such as materials that revolutionize energy storage, or bespoke cancer cures. No human freelance software engineer can outperform AI. The same goes for customer service (e.g., call center and support chat), paralegal, and administrative workers (e.g., bookkeepers or scheduling assistants).
Indeed, models have become so capable that AI can create an album of the same caliber as the Grammy Album of the Year. Additionally, a single AI agent can generate a Pulitzer- (or Booker Prize-) caliber novel according to current (2025) standards, adapt the book into an engaging two-hour movie, negotiate the resulting book and movie contracts, and launch the marketing campaigns for both while its sibling agents manage the book publishing company and movie studio at the level of highly competent CEOs.
Not only do Level-5 robo-taxis exist, but they are, on average, 99.9% safer than human-piloted cars and can venture anywhere off-road that a competent human driver can. Meanwhile, robots can navigate an arbitrary home anywhere in the world, make a cup of the most popular local hot beverage, clean and put away the dishes according to the local custom, fix any plumbing issues that arise while they're doing the dishes--and they can do it all faster and more reliably than most humans and without human guidance. Robots in advanced factories can autonomously perform the full range of tasks requiring the highest levels of dexterity, coordination, and adaptive decision-making."
I don't think that necessarily amounts to "AGI", because, for example, what about robotics: maybe the AIs still can't replace manual labour, for software as well as hardware-related reasons. But I do think it's fair to infer that if the survey-takers thought there was a 23% chance of this scenario by 2030, it's pretty unlikely they put the chance of AGI by 2032 below 0.1%.
I think most, though no doubt not all, people you'd think of as EA leaders think AI is the most important cause area to work in, and have thought that for a long time. AI is also more fun to argue about on the internet than global poverty or animal welfare, which drives discussion of it.
But having said all that, there is still plenty of EA funding of global health and development stuff, including by Open Philanthropy, who in fact have a huge chunk of the EA money in the world. People do and fund animal stuff too, including Open Phil. If you want to, you can just engage with EA stuff on global development and/or animal welfare, and ignore the AI stuff altogether. And even if you decide that the AI stuff is so prominent and, in your view, so wrong that you don't want to call yourself an EA, you don't have to give up on the idea of effective charity. If you want to, you can try to do the most good you can on global poverty or animal welfare while not identifying as an EA at all. Lots, likely most, of the good work in these areas will be done by organisations that don't see themselves as EA anyway. You can donate to or work for these orgs without engaging with the whole EA scene at all.
"I think if you have enough control over your diet to be a vegan, you have enough control to do one of the other diets that has the weight effects without health side effects. "
Fair point; I was thinking of vegans as a random sample in terms of their capacity for deliberate weight-loss dieting, when of course they very much are not.
"In fact, weight loss is a common side effect of a vegan diet, which could explain all or most of any health upsides, rather than being vegan itself."
This is more a point against your thesis than for it, I think. It doesn't matter if the ideal meat diet is better than the ideal vegan diet, because people won't ever actually eat either; this is just the point about how people won't actually eat 2 cups of sesame seeds a day or whatever. If going vegan in practice typically causes people to lose weight, and this is usually a benefit, that's a point in favour of veganism. Unless people can easily lose weight another way (and they very much cannot, as we know from how much almost everyone overweight struggles to get permanently healthy by dieting), it doesn't matter if the benefit from veganism could theoretically be gained by some non-vegan diet that you could theoretically follow. I guess the main counter-argument here would be if you think the existence of Ozempic now makes losing weight in another way sufficiently easy.
There is plausibly some advantage from delay, yes. For one thing, even if you don't have any preference for which side wins the race, making the gap larger plausibly means the leading country can be more cautious because its lead is bigger, and right now the US is in the lead. For another, if you absolutely forced me to choose, I'd say I'd rather the US won the race than China did (I'm undecided whether the US winning is better or worse than a multipolar world with two winners). It's true that the US has a much worse record than China in terms of invading other places and overthrowing their governments, but China has not had anything like the US's international clout until recently, so it's unclear how predictive past behaviour on China's part is of future behaviour. And on the other hand, China, while apparently well-governed in many ways, is very authoritarian, which I think is bad. (Although the US may be about to go less than fully democratic, it would have to fall far to be as authoritarian as China, though it does imprison a much higher percentage of its population than China, I think.) I generally would not want to see authoritarianism win out in some general sense, even if China itself might be a more restrained actor than the US in many ways.
Yeah, maybe I am using the term "nat sec" wrong, but my sense is that US intelligence agencies were involved in at least some of the history I was mentioning. I am very much not an expert on that history, but I recall Matt Yglesias recommending this (which I haven't read, to be clear): https://en.wikipedia.org/wiki/The_Jakarta_Method I don't think Yglesias is at all expert or particularly reliable on this stuff either, but I do think he generally has a fairly (civic) nationalist pro-US point of view, so if the book persuaded even him that the US did a lot of bad stuff in Indonesia and elsewhere during the Cold War, it probably marshals quite a lot of evidence for that conclusion, and probably isn't too partisanly tankie.
Ok, there's a lot here, and I'm not sure I can respond to all of it, but I will respond to some of it.
-I think you should be moved just by my telling you about the survey. Unless you are super confident either that I am lying or mistaken about it, or that FRI was totally incompetent in assembling an expert panel, the mere fact that I'm telling you the median expert credence in the rapid scenario was 23% in the survey ought to make you think there is at least a pretty decent chance that you are giving it several orders of magnitude less credence than the median expert/superforecaster. You should already be updating on there being a decent chance that is true, even if you don't know for sure. Unless, that is, you already believed there was a decent chance you were that far out of step with expert opinion, but I think that just means you were already probably doing the wrong thing in assigning ultra-low credence. I say "probably" because the epistemology of disagreement IS very complicated, and maybe sometimes it's ok to stick to your guns in the face of expert consensus.
-"Physical impossibility". Well, it's not literally true that you can't scale any further at all. That's why they are building all those data centers for eyewatering sums of money. Of course, they will hit limits eventually and perhaps soon-probably monetary before physical.  But you admit yourself that no one has actually calculated how much compute is needed to reach AGI. And indeed, that is very hard to do. Actually Epoch, who are far from believers in the rapid scenario as far as I can tell think quite a lot of recent progress has come from algorithmic improvements, not scaling: https://blog.redwoodresearch.org/p/whats-going-on-with-ai-progress-and  Text search for "Algorithmic improvement" or "Epoch reports that we see". So progress could continue to some degree even if we did hit limits on scaling. As far as I can tell, most of the people who do believe in the rapid scenario actually expect scaling of training compute to at least slow down a lot relatively soon, even though the expect big increases in the near future. Of course, none of this proves that we can reach AGI with current techniques just by scaling, and I am pretty dubious of that for any realistic amount of scaling. But I don't think you should be talking like the opposite has been proven. We don't know how much compute is needed for AGI with the techniques of today or the techniques available by 2029, so we don't know whether the needed amount of compute would breach physical or financial or any other limits.Â
-LLM "Narrowness" and 2018 baseline: Well, I was probably a bit inexact about the baseline here. I guess what I meant was something like this. Before 2018ish, as a non-technical person, I never really heard anything about exciting AI stuff, even though I paid attention to EA a lot, and people in EA already cared a lot about AI safety and saw it as a top cause area. Since then, there has been loads of attention, literal founding fathers of the field like Hinton say there is something big going on, I find LLMs useful for work, there have been relatively hard to fake achievements like doing decently well on the Math Olympiad, and College students can now use AI to cheat on their essays, a task that absolutely would have been considered to involve "real intelligence" before Chat-GPT. More generally, I remember a time, as someone who learnt a bit of cognitive science while studying philosophy, when  the problem with AI was essentially being presented as "but we just can't hardcode all our knowledge in, and on the other hand, its not clear neural nets can really learn natural languages". Basically AI was seen as something that struggled with anything that involved holistic judgment based on pattern-matching and heuristics, rather than hard-coded rules. That problem now seems somewhat solved: We now seem to be able to get AIs to learn how to use natural language correctly, or play games like Go that can't be brute forced by exact calculation, but rely on pattern-recognition and "intuition". These AIs might not be general, but the techniques for getting them to learn these things might be a big part of how you build an AI that actually is, since the seem to be applicable to large variety of kinds of data: image recognition, natural language, code, Go and many other games, information about proteins.  The techniques for learning seem more general than many of the systems. That seems like relatively impressive progress for a short time to me as a layperson. I don't particularly think that should move anyone else that much, but it explains why it is not completely obvious to me, why we could not reach AGI by 2030 at current rates of progress. And again, I will emphasize, I think this is very unlikely. Probably my median is that real AGI is 25 years away. I just don't think it is 1 in a million "very unlikely".Â
I want to emphasize, though, that I don't really think anything under the third dash here should change your mind. That's more just an explanation of where I am coming from, and I don't think it should persuade anyone of anything, really. But I definitely do think the stuff about expert opinion should make you tone down your extremely extreme confidence, even if just a bit.
I'd also say that I think you are not really helping your own cause here by expressing such an incredibly super-high level of certainty, and by making some sweeping claims that you can't really back up, like that we know right now that physical limits have a strong bearing on whether AGI will arrive soon. I usually upvote the stuff you post here about AGI, because I genuinely think you raise good, tough questions for the many people around here with short timelines. (Plenty of those people probably have thought-through answers to those questions, but plenty probably don't and are just following what they see as EA consensus.) But I think you also have a tendency to overconfidence that makes it easier for people to just ignore what you say. This comes out in you doing annoying things you don't really need to do, like moving quickly in some posts from "scaling won't reach AGI" to "the AI boom is a bubble that will unravel" without much supporting argument, when obviously AI models could make vast revenues without being full AGI. It gives the impression of someone who is reasoning in a somewhat motivated manner, even as they have also thought about the topic a lot and have real insights.