This is an article in the featured articles series from AISafety.info, which writes introductory content about AI safety. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

Corporations can be considered superintelligent only in a limited sense. Nick Bostrom, in Superintelligence, distinguishes between "speed superintelligence", "collective superintelligence", and "quality superintelligence".

Out of these, corporations come closest to collective superintelligence. Bostrom reserves the term “collective superintelligence” for hypothetical systems much more powerful than current human groups, but corporations are still strong examples of collective intelligence. They can perform cognitive tasks far beyond the abilities of any one human, as long as those tasks can be decomposed into many parallel, human-sized pieces. For example, they can design every part of a smartphone, or sell coffee in thousands of places simultaneously.

However, corporations are still very limited. They don't have speed superintelligence: no matter how many humans work together, they'll never program an operating system in one minute, or play great chess in one second per move. Nor do they have quality superintelligence: ten thousand average physicists collaborating to invent general relativity for the first time would probably fail where Einstein succeeded. Einstein was thinking on a qualitatively higher level.

AI systems may one day be created that think exceptional thoughts, at high speed, and in great numbers, presenting major challenges we've never had to face when dealing with corporations.

Comments


Good article summarizing the point, but I don't see the reason for posting these older discussions on the forum.

Thanks for the feedback! These articles are intended to serve as handy links to share with people confused about some point of AI safety. (Which ties into our mission: spreading correct models of AI safety, which seems robustly good.) Plausibly, people on the EA Forum encounter others like this, or fall into that category themselves. It's a tricky topic, after all, and lots of people on the forum are new. Your comment suggests we failed to position ourselves correctly, and that these articles might not be a great fit for the EA Forum. That's useful, because we're still figuring out what content would be a good fit here, and how to frame it.

Does that answer your question?
