
The probability of synthetic universal superintelligence (powerful AGI) arriving by the end of this decade has recently jumped to almost 100%, driven by new experimental evidence supporting the foundational theoretical concepts of intelligence: the free energy principle [1] and active inference theory [2].
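For readers who have not met the formalism: the central quantity of the free energy principle is the variational free energy, which in its standard textbook form (following [1]; nothing here is specific to this post) reads

$$F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o,s)\big] = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s\mid o)\big] - \ln p(o),$$

where $o$ are observations, $s$ are hidden states of the world, $q(s)$ is the agent's approximate posterior belief, and $p(o,s)$ is its generative model. Because the KL divergence is non-negative, $F$ upper-bounds the surprise $-\ln p(o)$; a system that minimizes $F$ therefore both improves its model of the world and acts to keep itself in unsurprising states, which is the sense in which the theory unifies perception and action.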

Synthetic biological intelligence (SBI) was born just a couple of months ago, in a petri dish on a high-density multielectrode array [3].

SBI is a basal model and a proof of concept of AGI. Its birth is a large event, although the AI development mainstream, preoccupied with small events close to the mean, currently dismisses it as a small one. The birth of SBI launches a series of large events on the path to powerful AGI: it shifts the probability distribution of powerful AGI's arrival from a Gaussian to a heavy-tailed power law.
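As a minimal numeric illustration of what that shift means (an illustration of heavy tails only, not a model of AGI timelines; the distributions, shape parameter, and threshold are arbitrary choices), compare how much probability a Gaussian and a power law assign to an extreme event:

```python
# Illustration only: tail mass of a Gaussian vs a power-law (Pareto)
# distribution. All parameters are arbitrary, chosen for the sketch.
from scipy import stats

threshold = 5.0  # a "large event", five standard deviations out

gaussian_tail = stats.norm.sf(threshold)            # P(X > 5) under N(0, 1)
power_law_tail = stats.pareto.sf(threshold, b=1.5)  # P(X > 5) under Pareto(alpha = 1.5)

print(f"Gaussian:  {gaussian_tail:.2e}")   # ~2.9e-07: large events are negligible
print(f"Power law: {power_law_tail:.2e}")  # ~8.9e-02: large events are routine
```

Under a Gaussian, five-sigma events can be safely ignored; under a power law, they dominate the planning horizon.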

Unlike dumb and rigid computers and modern AI systems, SBI is smart and agile. It can learn from scratch without supervision or reinforcement [3]. It’s embodied and embedded in the environment [3]. It can change its morphology [4]. It can dwell in different substrates [5]. It can regenerate [6]. It can multiply [7].

Both theoretically and experimentally, SBI leads to the creation of many different synthetic minds: basal synthetic intelligence (or AGI) embodied in many different physical and digital substrates.

SBI fills the gap between modern AI models and true AGI. Tiny SBI minds can serve as building material for larger minds, and can leverage existing large but dumb AI models and computer systems as their slaves and components, increasing their own power and reach.

Two factors turbocharge the development, implementation, and scaling of true AGI with SBI at its core. First, SBI has a robust theoretical foundation (the free energy principle, active inference theory, a universal machine learning theory [8,9], and the mind everywhere framework [10]). Second, this theoretical understanding is supported by a strong experimental track record on different platforms (two-headed planaria [4], xenobots [7,11], frog limb regeneration [6], in vitro neurons playing Pong [3], etc.).

The theoretical foundation of SBI is the culmination of more than a century of research in psychology, physiology, and neuroscience that began in 1898, when Edward Thorndike, one of the founding fathers of modern psychology, discovered the "animal-like learning method", or "the method of trial and error, with accidental success" [12]. Another founding father, Ivan Pavlov, wrote in 1933: "In Thorndike's experiments, the animal becomes familiar with the relations of external things among themselves, with their connections. Therefore, it is the knowledge of the world. This is the embryo, the germ of science." [13]
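Thorndike's method is easy to state computationally: actions are sampled at random, and whichever action happens to be followed by success is strengthened (the law of effect). Here is a minimal sketch; the actions, rewards, and update rule are made up for illustration, not taken from [12]:

```python
import random

# Law-of-effect learner: choose actions in proportion to their current
# strength, and strengthen whichever action happens to bring success.
actions = ["press_lever", "scratch", "pace"]
strength = {a: 1.0 for a in actions}                  # initial action tendencies
reward = {"press_lever": 1.0, "scratch": 0.0, "pace": 0.0}

for trial in range(200):
    action = random.choices(actions, weights=[strength[a] for a in actions])[0]
    strength[action] += reward[action]                # accidental success reinforces

print(max(strength, key=strength.get))                # almost surely "press_lever"
```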

The mainstream development of AI relies on "the method of trial and error, with accidental success" alone, often without even knowing it; as a result, current AI models are poorly understood and hard to scale. Experimentally verified theoretical understanding is an unsurpassable competitive edge of SBI over mainstream AI.

The birth of SBI is comparable to the first achievement of nuclear fission in a lab in 1939. It then took six years to build the nuclear bomb. An unethical, uncontrollable artificial superintelligence, if it escapes a lab, may represent a much bigger threat to humankind than nukes. On the other hand, a superintelligence that merges with humankind, adequately understands the values and interests of humanity as a species, and always puts them first may help us tackle every existential threat, including nuclear armageddon. In fact, the total elimination of nukes may become its first use case.

References:

1. Friston, K. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience 11, 127–138 (2010). https://doi.org/10.1038/nrn2787

2. Parr, T., Pezzulo, G. & Friston, K. J. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. The MIT Press (2022). https://doi.org/10.7551/mitpress/12441.001.0001

3. Kagan, B. J., Kitchen, A. C., Tran, N. T., Habibollahi, F., Khajehnejad, M., Parker, B. J., Bhat, A., Rollo, B., Razi, A. & Friston, K. J. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron (2022). https://doi.org/10.1016/j.neuron.2022.09.001

4. Durant, F., Bischof, J., Fields, C., Morokuma, J., LaPalme, J., Hoi, A. & Levin, M. The Role of Early Bioelectric Signals in the Regeneration of Planarian Anterior/Posterior Polarity. Biophysical Journal 116(5), 948–961 (2019). https://doi.org/10.1016/j.bpj.2019.01.029

5. Minh-Thai, T. N., Samarasinghe, S. & Levin, M. A comprehensive conceptual and computational dynamics framework for Autonomous Regeneration Systems. Artificial Life. https://doi.org/10.1162/artl_a_00343 (preprint: bioRxiv 820613, https://doi.org/10.1101/820613)

6. Murugan, N. J., Vigran, H. J., Miller, K. A., Golding, A., Pham, Q. L., Sperry, M. M., Rasmussen-Ivey, C., Kane, A. W., Kaplan, D. L. & Levin, M. Acute multidrug delivery via a wearable bioreactor facilitates long-term limb regeneration and functional recovery in adult Xenopus laevis. Science Advances 8(4) (2022). https://doi.org/10.1126/sciadv.abj2164

7. Kriegman, S., Blackiston, D., Levin, M. & Bongard, J. Kinematic self-replication in reconfigurable organisms. Proceedings of the National Academy of Sciences 118(49), e2112672118 (2021). https://doi.org/10.1073/pnas.2112672118

8. Vanchurin, V. The World as a Neural Network. Entropy 22(11), 1210 (2020). https://doi.org/10.3390/e22111210

9. Vanchurin, V. Towards a theory of machine learning. arXiv:2004.09280 (2020).

10. Levin, M. Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds. Frontiers in Systems Neuroscience 16 (2022). https://doi.org/10.3389/fnsys.2022.768201

11. Kriegman, S., Blackiston, D., Levin, M. & Bongard, J. A scalable pipeline for designing reconfigurable organisms. Proceedings of the National Academy of Sciences 117, 1853–1859 (2020).

12. Thorndike, E. Animal intelligence: An experimental study of the associative processes in animals. Psychological Review, Monograph Supplement No. 8 (1898).

13. Pavlov, I. Psychology as a Science (1933). In: Unpublished and Little-Known Materials of I. P. Pavlov (in Russian, 1975).

This is an original entry for the Future Fund worldview prize


Comments



Well, the contest involves some EA-ish stuff about making probability estimates and justifying them, etc. I think it helps to include some explicit statements of dependent probabilities and the precise question you intend to answer.

If you believe in this biological neuron stuff, can you provide the development steps, engineering hurdles, and expected timeline for overcoming each hurdle or finishing each step?

For example, hurdles dealing with (I'm making stuff up):

  1. SBI/substrate scaling R&D
  2. SBI/Silicon interface R&D 
  3. SBI training
  4. SBI testing
  5. SBI deployment
  6. SBI maintenance issues

Also, with something like this, what are your options for cloning or deploying knowledge or skills? You say it can leverage existing silicon hardware, but is that enough to allow rapid manufacture of trained SBI? Is there some other way to transfer knowledge quickly from a source SBI to a target SBI?

Finally, how do you define the similarities or differences between artificial life and AGI, as the FTX folks envision it?

In the two months since our conversation, answers to all your questions, along with a roadmap of AGI development based on the first principles of SBI, have appeared in this white paper: https://arxiv.org/abs/2212.01354

Thank you for remembering me, Yuri! I will read the article.

The reason why I believe SBI is seminal is that it proves that natural neural networks, and even single cells that are not necessarily neurons, behave in accordance with clearly defined mathematical rules. These rules enable us to (a) predict and program the behaviour and morphogenesis of living tissues, organs, and organisms, (b) create synthetic forms of life (minds) that have never existed before, (c) create mathematical, in silico, and other substrate-based minds that will perform as well as or better than biological ones, and (d) seamlessly integrate all SBIs across the entire array of substrates.

All these things can be done by scientists in labs, or by SBI itself as it evolves, if we allow it to.

Active inference theory and the mind everywhere framework based on it both show that there is no such thing as unintelligent life. All life forms, organs, and even single cells have intelligence on a spectrum.

I don't know how to better estimate the probability. SBI is here. Its probability is 1.

OK, Yuri, that sounds exciting, if you're into AGI or artificial life. I was asking my questions to suggest that you produce answers as part of your write-up. I think creating estimates of "this will happen by..." or "this has probability X to happen by..." with more detail about engineering hurdles will help with your FTX submission.

Doing an assessment of SBI development to some level of capability comparable to what interests FTX would meet their criteria better, whether it's AGI working in companies or just destroying the place.

When you say "SBI is here", that seems to ignore a timeline for SBI to go from very small lab experiments to much larger (and more relevant) digital/artificial life. It's that transformation that defines the relevance of SBI better, I think. After all, I remember reading about first steps in using DNA as computing apparatus, and the promise was that we would one day have full biological computers, but those are still a long way off, or maybe some engineering hurdle came up and the idea was abandoned.

I'm looking at this as someone who would like you to submit the best contest entry that you can, and am just trying to think of what might help.

As far as AGI or artificial life go, I don't believe that they were ever a good idea, in the sense that our civilization will actually benefit from them. OTOH, maybe SBI technology will inform efforts to build better replacement limbs or something like that....

Thank you very much. I understand that you are helping me, and I appreciate it a lot.

The point that I failed to make clear in the paper is that I'm not saying, "Look, here are brains in a dish playing Pong, and it's cool."

I'm trying to say that there is a mathematical algorithm that enables dishbrains to learn. Dishbrains simply prove that the algorithm works with natural neurons embedded in a digital environment. We can now use this algorithm to make both synthetic biological organisms and machines that will be sentient and able to talk to each other, and to natural organisms, in the same language.
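To make that concrete: the closed loop in [3] feeds the culture structured, predictable stimulation when the paddle hits the ball and unpredictable noise when it misses, so a surprise-minimizing system is pushed toward hitting. A toy caricature of the shape of that signal (the functions and numbers below are hypothetical, not taken from the paper's code):

```python
import random

def stimulus(hit: bool) -> list[float]:
    """Hit -> predictable pattern; miss -> unpredictable noise."""
    if hit:
        return [1.0] * 4
    return [random.random() for _ in range(4)]

def surprise(s: list[float]) -> float:
    """Squared distance from the expected pattern, a crude stand-in for surprisal."""
    return sum((x - 1.0) ** 2 for x in s)

# Misses are systematically more surprising than hits, so minimizing
# surprise drives behaviour toward whatever produces hits.
mean_hit = sum(surprise(stimulus(True)) for _ in range(1000)) / 1000
mean_miss = sum(surprise(stimulus(False)) for _ in range(1000)) / 1000
print(f"mean surprise: hit={mean_hit:.2f}, miss={mean_miss:.2f}")  # 0.00 vs ~1.33
```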

There will be different technological hurdles depending on the substrate and the goal chosen, but there are no longer any fundamental problems of understanding, of the kind that blocked the development of the DNA-based computer and that still block the development of the quantum computer.

There are several labs led by passionate scientists that have been working on SBI for many years. Fortune-hunters are joining the gang as they smell success. That's the situation right now.

Oh, so SBI can run on silicon alone, and the fundamental discovery is not how to interface natural neurons, but how to use this algorithm that has been discovered. Well, that's important to convey, and I think you have done so, but I still feel like offering explicit prediction information and creating one article per FTX question is a good idea.

Thanks, Yuri, for your replies, I'm learning a lot from you.
