
The purpose of this post (also available on LessWrong) is to share an alternative notion of “singularity” that I’ve found useful in timelining/forecasting.

  • A fully general tech company is a technology company with the ability to become a world leader in essentially any industry sector, given the choice to do so — in the form of agreement among its Board and CEO — with around one year of effort following the choice.

Notice here that I’m focusing on a company’s ability to do anything another company can do, rather than an AI system's ability to do anything a human can do.  Here, I’m also focusing on what the company can do if it so chooses (i.e., if its Board and CEO so choose) rather than what it actually ends up choosing to do.  If a company has these capabilities and chooses not to use them — for example, to avoid heavy regulatory scrutiny or risks to public health and safety — it still qualifies as a fully general tech company.

This notion can be contrasted with the following:

  • Artificial general intelligence (AGI) refers to cognitive capabilities fully generalizing those of humans.
  • An autonomous AGI (AAGI) is an autonomous artificial agent with the ability to do essentially anything a human can do, given the choice to do so — in the form of an autonomously/internally determined directive — and an amount of time less than or equal to that needed by a human.

Now, consider the following two types of phase changes in tech progress:

  1. A tech company singularity is a transition of a technology company into a fully general tech company.  This could be enabled by safe AGI (almost certainly not AAGI, which is unsafe), or it could be prevented by unsafe AGI destroying the company or the world.
  2. An AI singularity is a transition from having merely narrow AI technology to having AGI technology.

I think the tech company singularity concept, or some variant of it, is important for societal planning, and I’ve written predictions about it before, here:

  • 2021-07-21 — prediction that a tech company singularity will occur between 2030 and 2035;
  • 2022-04-11 — updated prediction that a tech company singularity will occur between 2027 and 2033.

A tech company singularity as a point of coordination and leverage

The reason I like this concept is that it gives an important point of coordination and leverage that is not AGI, but which interacts in important ways with AGI.  Observe that a tech company singularity could arrive

  1. before AGI, and could play a role in
    1. preventing AAGI, e.g., through supporting and enabling regulation;
    2. enabling AGI but not AAGI, such as if tech companies remain focussed on providing useful/controllable products (e.g., PaLM, DALL-E);
    3. enabling AAGI, such as if tech companies allow experiments training agents to fight and outthink each other to survive.
  2. after AGI, such as if the tech company develops safe AGI, but not AAGI (which is hard to control, doesn't enable the tech company to do stuff, and might just destroy it).

Points (1.1) and (1.2) are, I think, humanity’s best chance for survival.  Moreover, I think there is some chance that the first tech company singularity could come before the first AI singularity, if tech companies remain sufficiently oriented on building systems that are intended to be useful/usable, rather than systems intended to be flashy/scary.

How to steer tech company singularities?

The above suggests an intervention point for reducing existential risk: convincing a mix of

  • scientists
  • regulators
  • investors, and
  • the public

… to shame tech companies for building useless/flashy systems (e.g., autonomous agents trained in evolution-like environments to exhibit survival-oriented intelligence), so they remain focussed on building usable/useful systems (e.g., DALL-E, PaLM) preceding and during a tech company singularity.  In other words, we should try to steer tech company singularities toward developing comprehensive AI services (CAIS) rather than AAGI.

How to help steer scientists away from AAGI: 

  • point out the relative uselessness of AAGI systems, e.g., systems trained to fight for survival rather than to help human overseers;
  • appeal to the badness of nuclear weapons, which are — after detonation — the uncontrolled versions of nuclear reactors;
  • appeal to the badness of gain-of-function lab leaks, which are — after getting out — the uncontrolled versions of pathogen research.

How to convince the public that AAGI is bad: 

  • this is already somewhat easy; much of the public is already scared of AI because they can’t control it.
  • do not make fun of the public or call people dumb for fearing things they cannot control; things you can’t control can harm you, and in the case of AGI, people are right to be scared.

How to convince regulators that AAGI is bad:

  • point out that uncontrollable autonomous systems are usable mainly for terrorism;
  • point out the obvious fact that training things to be flashy (e.g., by exhibiting survival instincts) is scary and destabilizing to society;
  • point out that many scientists are already becoming convinced of this (they are).

How to convince investors that AAGI is bad: point out

  • the uselessness and badness of uncontrollable AGI systems, except for being flashy/scary;
  • that scientists (potential hires) are already becoming convinced of this;
  • that regulators should, and will, be suspicious of companies using compute to train uncontrollable autonomous systems, because of their potential to be used in terrorism.

Speaking personally, I have found it fairly easy to make these points since around 2016.  Now, with the rapid advances in AI we’ll be seeing from 2022 onward, it should be easier.  And, as Adam Scherlis (sort of) points out [EA Forum comment], we shouldn't assume that no one new will ever care about AI x-risk, especially as AI x-risk becomes more evidently real.  So, it makes sense to re-try making points like these from time to time as discourse evolves.

Summary

In this post, I introduced the notion of a "tech company singularity", discussed how the idea might be usable as an important coordination and leverage point for reducing x-risk, and gave some ideas for convincing others to help steer tech company singularities away from AAGI.

All of this isn't to say we'll be safe from AI risk; far from it (e.g., see What Multipolar Failure Looks Like). Efforts to maintain cooperation on safety across labs and jurisdictions remain paramount, IMHO.

In any case, try on the "tech company singularity" concept and see if it does anything for you :)

Comments



>>after a tech company singularity, such as if the tech company develops safe AGI

I think this should be "after AGI"?

Yes, thanks!  Fixed.

I’m a bit confused and wanted to clarify what you mean by AGI vs AAGI: are you of the belief that AGI could be safely controlled (e.g., boxed) but that setting it to “autonomously” pursue the same objectives would be unsafe?

Could you describe what an AGI system might look like in comparison to an AAGI?

Yes, surely inner-alignment is needed for AGI to not (accidentally) become AAGI by default?

Thank you so much for this extremely important and brilliant post, Andrew! I really appreciate it.

I completely agree that the degree to which autonomous general-capabilities research is outpacing alignment research needs to be reduced (most likely via recruitment and social opinion dynamics), and that this seems neglected relative to its importance.

I wrote a post on a related topic recently, and it would be really great to hear what you think! (https://forum.effectivealtruism.org/posts/juhMehg89FrLX9pTj/a-grand-strategy-to-recruit-ai-capabilities-researchers-into)
