
(Crossposted from Twitter for easier linking.) (Intended for a broad audience—experts already know all this.)

When I talk about future “Artificial General Intelligence” (AGI), what am I talking about? Here’s a handy diagram and FAQ:

“Are you saying that ChatGPT is a right-column thing?” No. Definitely not. I think the right-column thing does not currently exist. That’s why I said “future”! I am also not making any claims here about how soon it will happen, although see discussion in Section A here.

“Do you really expect researchers to try to build right-column AIs? Is there demand for it? Wouldn’t consumers / end-users strongly prefer to have left-column AIs?” For one thing, imagine an AI where you can give it seed capital and ask it to go found a new company, and it does so, just as skillfully as Earth’s most competent and experienced remote-only human CEO. And you can repeat this millions of times in parallel with millions of copies of this AI, and each copy costs $0.10/hour to run. You think nobody wants to have an AI that can do that? Really?? And also, just look around. Plenty of AI researchers and companies are trying to make this vision happen as we speak—and have been for decades. So maybe you-in-particular don’t want this vision to happen, but evidently many other people do, and they sure aren’t asking you for permission.

“If the right-column AIs don’t exist, why are we even talking about them? Won’t there be plenty of warning before they exist and are widespread and potentially powerful? Why can’t we deal with that situation when it actually arises?” First of all, exactly what will this alleged warning look like, and exactly how many years will we have following that warning, and how on earth are you so confident about any of this? Second of all … “we”? Who exactly is “we”, and what do you think “we” will do, and how do you know? By analogy, it’s very easy to say that “we” will simply stop emitting CO₂ when climate change becomes a sufficiently obvious and immediate problem. And yet, here we are. Anyway, if you want the transition to a world of right-column AIs to go well (or to not happen in the first place), there’s already plenty of work that we can and should be doing right now, even before those AIs exist. Twiddling our thumbs and kicking the can down the road is crazy.

“The right column sounds like weird sci-fi stuff. Am I really supposed to take it seriously?” Yes it sounds like weird sci-fi stuff. And so did heavier-than-air flight in 1800. Sometimes things sound like sci-fi and happen anyway. In this case, the idea that future algorithms running on silicon chips will be able to do all the things that human brains can do—including inventing new science & tech from scratch, collaborating at civilization-scale, piloting teleoperated robots with great skill after very little practice, etc.—is not only a plausible idea but (I claim) almost certainly true. Human brains do not work by some magic forever beyond the reach of science.

“So what?” Well, I want everyone to be on the same page that this is a big friggin’ deal—an upcoming transition whose consequences for the world are much much bigger than the invention of the internet, or even the industrial revolution. A separate question is what (if anything) we ought to do with that information. Are there laws we should pass? Is there technical research we should do? I don’t think the answers are obvious, although I sure have plenty of opinions. That’s all outside the scope of this little post though.
