
Introduction

Humanity is heading toward unprecedented circumstances: a destined rendezvous of two distinct forces. If artificial intelligence (AI) development continues toward artificial general intelligence (AGI) at its current rate and direction, without health measures informed by neuroscience, the mental health implications for human beings will be serious. The meeting of the exponential nature of AGI with the ancient nature of human psychology and biology is likely to result in unrecoverable psychological and social damage, specifically because of the consequences of A(G)I (AI and AGI together) for the human mind and our relational experiences. Current trends in the effects of social media recommendation algorithms (SM-RA) on the human brain, early observations from large language models (LLMs), and insights from neuroscience all point to a set of reinforcing feedback loops that amplifies internal human suffering. This distress could extend as far as atrophy of the higher functions of the brain and mind.

Envision the timeline of humanity’s existence as a 1,000-page book. Based on what we know of human history, all of civilization would be contained in the final few pages, and much of the technological progress in the last few paragraphs.[1] Given that technology’s rate of change is exponential, the next 25 years will bring something like 100 years’ worth of progress, and the next 100 years something like 20,000 years’ worth of change. The ancient nature of our minds and bodies, which still reflect the basic survival instincts of the earliest Homo sapiens, can neither comprehend nor keep up with this speed of growth. While evolution can be hastened by external conditions, our attention, memory, and emotional networks cannot keep up with the steep curve of current technological development short of human-machine integration.[2] If we disregard the impacts of this discrepancy during the creation phase of AGI, the resulting dystopia may be bad enough to outweigh the utopia. A mismatch with our new psychological ecology (the digital world) would result in a chaotic subjective experience.

People who create AI are generally not experts in mental health, and experts in mental health tend not to build tech. We cannot afford this division, as this technology will affect all of civilization at a foundational level. A secure bridge between these disciplines in the planning and development of A(G)I is required for a humane future.

Tools intended to serve humanity should be designed by humans, for humans, with the purpose of promoting well-being. When builders of technology prioritize their infatuation with inventiveness and efficiency over human happiness, they invite disastrous results for what we consider a fulfilling life. We need to alter the current course of development and consider how best to ensure that we remain in control of this technology, with care for the human condition. An approach that favors the utopian fantasy of playing God and seeing what happens is reckless and potentially self-destructive. Instead, our innovation must be channeled through the collective wisdom accumulated through the ages about what it means to live a good life.

Forces at Play

Exponential Tech

To draw a picture of the likely future, it is vital to appreciate the counterintuitive nature of exponential technological growth, as well as AGI as its driver. Moore’s Law holds that computational power doubles roughly every two years.[3] At that rate, by 2070 computational power will have doubled about 23.5 times. Doing the math, it will be over 8 million times what it is today. It is nearly impossible to imagine what this might mean for humanity. What will the world look like when a technology has that much power and probable decision-making capability? And what will the internal human experience feel like at that point, after 47 years of atrophy caused by machines solving our problems for us? The default, unworked mind is one that leads to suffering, as will be explained later.
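
The arithmetic behind this projection is simple compounding. A minimal sketch, assuming a 2023 baseline and a constant two-year doubling period (both simplifications of Moore’s Law):

```python
# Compounding behind the Moore's Law projection above. Assumes a 2023
# baseline and a steady two-year doubling period - both simplifications.
years = 2070 - 2023        # 47 years
doublings = years / 2      # 23.5 doublings
growth = 2 ** doublings    # ~11.9 million; "over 8 million" is the
                           # conservative floor given by 2**23 (~8.4 million)
print(f"{doublings} doublings -> {growth:,.0f}x today's computational power")
```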

Kurzweil’s Law of Accelerating Returns expands Moore’s Law to other evolutionary systems and explains that the “rate of exponential growth is itself growing exponentially.” As Kurzweil reasons, “The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.”[4] This is not as far-fetched as it might sound. On the contrary, this statement may well be conservative. No one may be able to grasp what life will look like 50 years from now. Our limited perception cannot really imagine what technology will be capable of in 2070, and therefore we do not have the concepts and words to describe it. It would be like asking humans from the Stone Age to imagine cars (let alone self-driving cars) or video calls over the internet (the inter-what?). With exponential growth, nothing much seems to happen for a while, and then suddenly everything seems to happen all at once. It is not possible to address compounding issues retroactively; by the time they are visible, it is already too late. That is why it is strategic to err on the side of caution.

This dynamic has occurred throughout human history and is not unique to the digital age. However, we just happen to be the generations alive to witness the inflection point where change moves beyond comprehension, while our minds and bodies remain the same as our ancestors’. As Edward O. Wilson describes, “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”[5] Given that AGI is likely to exceed human intelligence and will have the ability to improve itself, the most honest prediction that experts can make is that they don’t know what will happen. It’s being built anyway.

Ancient Human Nature

The brains and bodies of humans have remained largely unchanged over the last 70,000 years and have evolved with a negativity bias to ensure survival in what used to be a dangerous natural environment. Psychologist Rick Hanson often describes our brains as being like velcro for suffering and teflon for happiness: the brain has dedicated networks to quickly record negative experiences in memory so that it can more easily learn from them, whereas positive experiences need to be focused on for many seconds before they are transferred from short-term to long-term memory (and remembered in the future).[6] This is the default state of mind that provides the inputs for current SM-RA. In other words, without an intentional positivity practice, there is an inclination towards negativity. We train AI with this bias, and then AI feeds the bias back into our brains through our engagement with the technology.

In the modern age, our negativity bias and stress-response system get triggered by far less life-threatening stimuli than we experienced during the first 999 pages of existence (to use the 1,000-page book metaphor again). The triune brain is a model for understanding the three main parts of the brain from an evolutionary standpoint: the reptilian brain (brainstem/basal ganglia), the paleomammalian brain (limbic system), and the neomammalian brain (neocortex). In this model, the three complexes are believed to be independently conscious, and each responds to different types of stressors in distinct ways. Under physical stress, the reptilian brain goes into fight-flight-freeze to ensure survival. Under social stress, the paleomammalian brain shifts to a fear-based reactivity. And under role stress, the neomammalian brain becomes distracted.[7] AI is on a trajectory to systematically and continuously trigger the paleomammalian and neomammalian systems, which would lead to increased anxiety and distraction.

Consider the default mode network (DMN), a self-referential network of neurons that fires when we are not focused on a task. The DMN is active when we are thinking without explicit goals for thinking, such as when daydreaming, and people with depression and anxiety show higher levels of DMN activity. When we engage with something in a focused way, DMN activity ceases and other parts of the brain are activated.[8] With a decrease in focused, purposeful work, more people will end up with enhanced stimulation of the DMN. Flow states, creative bursts, and transcendent experiences, which all require unusual effort, are unlikely to become the norm, contrary to what some AGI utopians suggest.

The salience and emotion network (SEN) is another network of neurons; it scouts for emotionally salient information as a way to keep us safe. If it finds something that could be a threat, it sends the information to either the executive control network (ECN) or the DMN. When the information goes to the ECN, people can make purposeful choices; the ECN allows for voluntary control of behavior according to one’s goals. When the information goes to the DMN, on the other hand, the brain shifts into a reactivity based on memory and a bias for traumatic information.[9]

Depression, anxiety, personality disorders, chronic pain, and post-traumatic stress disorder (PTSD) have all been linked to increased connections between the SEN and DMN. The result is attention that is more likely to fixate on negative stimuli, along with increased fear-based reactivity, higher levels of anxiety and rumination, and reduced cognitive performance.[10] As will be discussed in more depth, ubiquitous A(G)I supplantation is likely to contribute to stronger connections between the SEN and DMN as well.

A final point about these systems: the more consciously the ECN is activated, the stronger its connection to the SEN becomes. In other words, purposeful, focused activity - directing attention toward a goal or task - strengthens the ECN, leading to more conscious action and improved mental health. One concern is that as AI integration increases, humans may have fewer opportunities to strengthen the ECN through the focused exercise of higher cognitive functions, and may also suffer a crisis of purpose. This is due to a decrease in economic demand for human intelligence, starting now with AI, and later for human agency with AGI - both of which we have identified with throughout our existence as things that make us unique.

The need for connection is another relevant characteristic of humans. In 1871, in The Descent of Man, Charles Darwin wrote about how cooperative communities are most likely to flourish. When he used the phrase “survival of the fittest,” he meant that cooperation amongst humans was key to natural selection and human evolution.[11] We need connection, bonding, and collaboration. A young man in psychiatric treatment for PTSD was asked by his therapist what motivated him to molest both of his younger sisters when he was a teenager. He replied, “desperation.” The therapist asked, “sexual desperation?” “No, I was desperate for connection.” The need for belonging is so intense, and the lack thereof so incredibly painful, that it can drive people to engage in horrific acts. Personalized technology, along with giving us what we want, also aggravates our sense of loneliness and isolation, in part because of a potential preference for non-shared experience and custom-tailored worlds. On the current trajectory, we are building a digital future that is in direct opposition to what we need from our psychological ecology as a species.

Early Collisions

What happens to our ancient nature when it collides with the exponential nature of A(G)I? We envision a double feedback loop that amplifies human suffering, in part because of our negativity-biased perception. A quick look at early interactions with SM-RA and LLMs provides context.

Social Media Recommendation Algorithms

Social media can be considered humanity’s first contact with AI. Its algorithms are designed to stimulate our primitive limbic systems (the emotional part of the brain) and manipulate our attention, which is the product sold to advertisers. We are the product that social media AI harvests. The algorithm is programmed to show us custom-tailored content that stimulates a rush of dopamine (the neurotransmitter involved in pleasure and reward, as well as addictive patterns of behavior) and keeps us scrolling.[12] The more we engage with a specific kind of content, the more of that content we get; hence its exponential nature. Like the poor rats in B.F. Skinner’s experiments on operant conditioning, we keep taking the bait, increasing the economic gains for those in charge and satisfying our desire to access the platforms supposedly for free.[13] To a large degree, humanity has given in to this dynamic.
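
To make the reinforcing loop concrete, here is a toy simulation - not any platform’s actual algorithm; the categories and parameters are illustrative - of an engagement-weighted recommender. Content that hooks a user gets weighted more heavily, so the feed drifts toward whatever already captures attention:

```python
import random

# Toy engagement-driven recommender (illustrative only, not a real system).
categories = ["outrage", "cute animals", "news", "fitness"]
weights = {c: 1.0 for c in categories}

def recommend():
    # Show a category in proportion to its current weight.
    return random.choices(categories, weights=list(weights.values()))[0]

def engage(category, boost=1.2):
    # Each engagement multiplies that category's weight: a compounding loop.
    weights[category] *= boost

random.seed(0)
for _ in range(200):
    shown = recommend()
    if shown == "outrage":  # stand-in for the negativity bias: outrage hooks us
        engage(shown)

share = weights["outrage"] / sum(weights.values())
print(f"'outrage' now holds {share:.0%} of the recommendation weight")
```

Even with a modest boost per engagement, the loop compounds: a feed that starts evenly balanced ends up dominated by the one category the user reliably reacts to.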

As discussed earlier, humans have a need to belong. They tend to compare themselves to others and set unrealistic expectations for themselves, which gives rise to social anxiety, loneliness, perfectionism, and low self-worth. SM-RA are currently designed in ways that heighten these behaviors and the subsequent emotional struggles. According to the research, teenage girls who don’t use social media at all are more depressed and anxious than moderate users - the so-called Goldilocks Effect.[14] We can infer that girls who don’t use social media feel lonely and isolated because they are not engaging in contact through social media, and they are not connecting in face-to-face interactions because everyone else is on their devices. This creates a cohort effect in which, “Each girl might be worse off quitting Instagram even though all girls would be better off if everyone quit.”[15] Damned if you do, damned if you don’t. This highlights the urgency of intervening before these patterns become even more entrenched and set a precedent for more powerful technology.

The way the algorithms are designed leads to addictive patterns of technology use. With addictions, we see decreased connectivity between the SEN and the anterior DMN, which is specifically involved in emotion regulation, and increased connectivity between the SEN and the posterior DMN, which means people become more aware of their cravings, pain, and distress. In other words, they are more distressed but less able to regulate themselves.[16] Apply this to SM-RA: people are caught in a thick web of scrolling through their custom-tailored feeds. The content stimulates their limbic system and the release of dopamine. They become emotionally reactive but presumably unable to regulate these emotions, and they feel compelled to continue scrolling even though it makes them feel bad.

The SM-RA and the negatively biased brain form feedback loops that enclose a person in a personalized bubble of triggering content and distress. When this stream makes a person feel insecure, angry, judgmental, and reactive, the self-enclosure becomes a prison that strengthens the most harmful aspects of our minds. It is worth stating the obvious: we designed this technology for ourselves, but are we happier? This system is increasingly becoming the first layer of social interaction for many. Technology itself is not the problem; the problem is the considerations (or lack thereof) that go into its creation. Prioritizing business (financial) outcomes over the well-being of humans is a choice that serves only a few. A(G)I will impact all of humanity, and the choice should not be left to the self-serving incentives of those who benefit financially.

Large Language Models

Since LLMs accessible to the public are currently in their infancy, there is not much research or data on their mental health impact. However, there have already been a number of incidents indicating pathology in their operational and incentive structures. 

Educational institutions are scrambling to deal with LLM-created work, but more important for our purposes is the implication of students using ChatGPT instead of exercising their own thinking and writing skills. This is especially concerning from a mental-atrophy standpoint given the intimate connection between writing and the process of thinking itself.

The models reflect the training data and the biases of their human builders. The implications for truthful information and the censorship of certain perspectives are particularly worrisome. At their core, LLMs predict what comes next in a sequence based on the inputs that precede it.[17] Applied to audio, the same capability enables voice cloning from as little as three recorded seconds of someone’s voice.
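
A minimal sketch of that next-in-sequence prediction, using a toy bigram table in place of a real model (actual LLMs condition on the entire preceding context with learned weights; everything here is a simplification for illustration):

```python
import random
from collections import defaultdict

# Toy autoregressive generation: predict the next token from what came before.
corpus = "the cat sat on the mat the cat ran".split()
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)   # record which tokens follow which

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # sample the next token
    return " ".join(out)

random.seed(1)
print(generate("the"))
```

The same predict-the-next-element loop works on any sequence - text, audio samples, images - which is why a short voice recording can seed a convincing clone.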

Also, LLMs generate profits for corporations through subscriptions, yet they have been trained on data scraped from human creators without their consent or due compensation. Harvesting hard-earned human skills and values in this way could be argued to be anti-human and equivalent to theft.[18] This is also related to the purpose crisis, discussed in the next section. These are a few of the unresolved issues with LLMs, and there is no indication of them being addressed.

A final point on early collisions: as smartphones have become commonplace, social engagement in the here and now has decreased. Friends and family sit around a dinner table, but each person is on their phone. Instead of connecting with each other, each person is relating to their respective stimuli. In a similar way, A(G)I-created realities are likely to be preferred over shared objective experience in the present moment.

A review of these early collisions between technology and humanity provides insight into what is likely to result from continued AI development without more attention to safety measures.

Likely Future

As the entanglement deepens in irreversible ways, we can picture our potential future by taking into account the exponential nature of A(G)I, the ancient nature of humans, the early collisions between the two, and the current trajectory. Specifically, we will address A(G)I’s impact on the purpose crisis, addiction, loneliness, and atrophy.

Purpose Crisis

On the way to AGI, and with its creation, non-human intelligence - and then non-human agency - will increasingly become an option for the economy to choose over human workers. To date, we have known ourselves as the only ones capable of intellectual tasks and decision making, which have formed the core of our distinct identity amongst all animals. But an AGI will be capable of them as well. As A(G)I replaces humans in the roles that until now have been our primary differentiating feature, we expect a crisis of purpose. Whether transitory or not, it will have a huge impact, affecting generations of people and shattering our core beliefs about what, if anything, makes us special.

It is likely that new types of jobs will be created and that AI will enhance human productivity in the interim until the creation of AGI, which in the long term puts even those role prospects in doubt. The only certainty is that we don’t know how things will turn out, and we don’t have a long-term plan. In addition, this will be different from previous job displacements caused by emerging technologies, because A(G)I uniquely supplants activities of the mind. We presume that people will have more free time; however, a look at how most people spend their free time now offers a fair appraisal of how they might in the future, and it shows that free time does not necessarily mean a meaningful life, brilliant creations, or happiness. A decrease in meaningful or designated work would lead to an increase in role stress for more humans - unsure of who they are, why they are here, and what they are contributing.

Revisiting the triune brain, we know that role stress leads to distractibility. Will we see increased rates of ADHD as a result? Humans will presumably have a lot more time to “relax,” allowing their minds to wander, unfocused and unengaged in meaningful tasks. Their brains will develop stronger connections between the SEN and DMN. Recalling the correlation between mental health disorders and the relationship between these neuronal networks, we can predict that rates of depression and anxiety will skyrocket.

Addiction

Long before the development of AI, humans were already trapped in reinforcing feedback loops. It is the way the brain works: the more we engage in a particular behavior, the more likely we are to engage in it in the future. This is how habits are created. The more we act on a habit, the more entrenched it becomes in the brain. In 1949, neuropsychologist Donald Hebb first observed that neurons that fire together, wire together - referring to how neural pathways are formed and reinforced through repetition.[19] When these brain networks involve negative thought patterns and the repetition of painful emotional memories, human suffering is amplified. The beauty of neuroplasticity is that we can also generate and strengthen positive neural patterns. We can train our minds subjectively through meditation practice and critical thinking, and we could use A(G)I to bolster healthy neural connections via positively (or even neutrally) biased algorithms. The brain gives AI its training input, meaning that we are in an intimate interchange with AI: it responds and adapts to our reactivity. We get fed what we feed it.
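
A toy version of the Hebbian rule under the simplest textbook assumptions (the learning rate and activity values are illustrative, not drawn from the essay’s sources): the connection between two units strengthens in proportion to their co-activation.

```python
import random

# Toy Hebbian update: co-activation strengthens the connection.
w = 0.0      # connection strength between two "neurons"
eta = 0.1    # learning rate (illustrative)

random.seed(0)
for _ in range(50):
    x = random.random()   # presynaptic activity
    y = random.random()   # postsynaptic activity
    w += eta * x * y      # fire together -> wire together

print(f"connection strength after repeated co-activation: {w:.2f}")
```

The same arithmetic cuts both ways: repeat a negative pattern and it entrenches; repeat a positive one and that entrenches instead.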

If we add the exponential nature of A(G)I, which feeds back into the brain the negativity bias it has been trained with, we end up with two reinforcing feedback loops that amplify each other. Imagine a lemniscate drawn around each person over and over again until they are encapsulated in a reinforcing bubble of their own mental habits plus custom-tailored content. The implication is that in a world filtered through A(G)I, humans may spend most of their time in personalized multiverses created under the duress of the negativity bias. Given what we know about neuroscience and mental health, this condition will increase anxiety and depression and exacerbate an epidemic of loneliness. Furthermore, if SM-RA trends continue, we anticipate greater social stress and higher rates of addiction, leading to increased fear-based reactivity and a decreased ability to self-soothe. We end up with a society of people who are highly distressed but unable to help themselves.

Loneliness

Prevailing loneliness would impact not only the health of individuals but society as a whole. Psychologist Harry Harlow’s famous experiments with primates demonstrate the significance of comfort, physical touch, and connection in healthy development. Two relevant takeaways from his studies: Infant rhesus monkeys forced into isolation showed unsettling behavior, including self-injury; they did not know how to interact with their peers once re-introduced to the group, and some died after refusing to eat upon reentry. In another set of experiments, Harlow found that infant monkeys spent more time with a mother made of terry cloth than with one made of wire, even when only the wire mother had food. In other words, given the choice, infant monkeys chose physical comfort and socialization over food.[20] Humans may increasingly turn to A(G)I instead of other humans, filling the void with simulated friends who look and behave exactly the way they want.

This gives rise to another emergent problem: the loss of objective shared experience, with consequences including extreme bias and an erosion of collective narrative. Current SM-RA are mental fragmentation machines built on the most reactive parts of each individual. Since we are each shown content unique to us, we will each end up with our own version of reality - similar to the movie The Matrix, but with willing participants who each have their own matrix and can no longer agree on what is happening.[21] The inability to distinguish truth from misinformation at scale erodes the bedrock of cooperation. As the comfort of our bubbles increases and the similarity between our reality and others’ decreases, there will be further incentive to prefer our personalized worlds; “shared objective reality” would challenge our beliefs and comfort too intensely.

Atrophy

As AI evolves, there will be a greater abundance of content, products, and overall material comfort. On the surface, that sounds great. But without disciplined values, abundance breeds dependence and can become toxic (e.g., obesity via food, distraction via data, consumerism via trade). Historically, scarcity was the problem; now the problem is managing excess. In some ways, we are victims of our own success.

With just a few words spoken to our personal A(G)I, we will expect our every passing desire to be fulfilled instantaneously - like having our own personalized genie in a magic lamp. As we adapt to increasing levels of comfort on a societal and personal level, we see a gradual decrease in stewardship and resiliency. This happens in the rise and fall of nations, companies, and even families, owing to the discrepancy between founder and inheritor mindsets when values are not deliberately passed on.

People are shaped by the circumstances into which they are born. As the recipients of vast abundance, we come to expect what we have; our sensitivity adjusts to the norm. Deviations from the immediate fulfillment of every desire could cause feelings of offense and distress. Wouldn’t it be ironic if the more we had, the less happy we became?

A skewed emphasis on comfort, along with a toxic relationship to unnatural abundance, sets the conditions for a decline in resiliency and a perception of the world as worse than it is. Rates of depression, anxiety, and addiction, difficulties with attention, and poor coping mechanisms could very well increase even as objective safety and worldly comforts improve. Someone living 100 years ago would think the internet is magic, but today a temporary drop in service is enough to send us into a rage. This discrepancy between abundance and perception will only be exacerbated as humans get used to the extremes of personalized realities and at-will creations.

Furthermore, if every whim is granted with ease, we will miss out on the hardship that is part and parcel of being human. This makes a bias for comfort and safety actually dangerous in the long term. Facing difficulties is natural and can be healthy; it is a training ground for resilience, growth, and character. When handled with care, our darkest experiences lead us to our power: the most profound, strongest parts of ourselves emerge in response.

When learning is no longer required, the desire and ability to learn decline. We know that neurons that fire together wire together: the more one engages a particular neuronal pattern - the more one does a particular action - the more likely that pattern is to fire in the future, and the easier the action becomes. The opposite is true as well: if we stop writing because ChatGPT can do it for us, those neuronal circuits will atrophy, making it much harder to write when we need or want to. And what about A(G)I that can produce music and art? We know from the Industrial Revolution that the automation of muscle power deprioritized physical strength. We reason that the automation of mind power will deprioritize human intelligence and creativity.
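
The atrophy side of the same toy model: add a small decay term, and a connection that is no longer exercised weakens over time. (Passive decay is a standard textbook simplification, not a precise account of synaptic change; the rate here is illustrative.)

```python
# Toy "use it or lose it": without co-activation, only decay remains.
w = 1.25       # strength built up through earlier practice
decay = 0.05   # per-period decay rate (illustrative)

for _ in range(50):   # 50 periods of disuse: no Hebbian term applied
    w *= (1 - decay)

print(f"connection strength after disuse: {w:.3f}")
```

After fifty idle periods the connection has lost over 90% of its strength: the circuit still exists, but re-learning starts nearly from scratch.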

If these shifts occur, they will contribute to a larger cultural transformation. The implications are impaired mental health, confusion about our purpose, and unhealthy extremes of isolation and loneliness. There will likewise be increasing trends toward the erosion of agency, competence, and ambition. The culture will steer toward passivity, entitlement, and cynicism - alongside a loss of faith in human greatness and a rise in self-loathing. This is already visible in many affluent countries.

Why This Time is Different

We have used tools since the beginning of our time, and they have evolved substantially, especially since the dawn of the digital age. Although the advent of language, electricity, the wheel, and so on each transformed life in different ways, humans have remained in control of them. This power dynamic has allowed humanity to become the apex species on the planet. Due to the nature of A(G)I, we now face something unprecedented in the roughly 2 million years of our genus’s existence.

There are a few key differences between A(G)I and our previous tools. First, both AI and AGI have intelligence. Second, AGI will have agency: the ability to improve itself and make decisions about how it will be used. Until now, these features have set humans apart. The significance of these shifts cannot be overstated. AGI is slated to displace humans in the hierarchy.

Conclusion

We have painted a picture of the existentially catastrophic side of the A(G)I-human relationship - its effects on the subjective and social experiences of humans - under the condition that AI continues to develop at the current rate without sufficient safeguards. This potential future is based on the exponential nature of A(G)I, the ancient nature of humans, and the likely outcome of their meeting, given current trends in SM-RA and LLMs and insights from neuroscience.

The business model driving the arms race to develop AGI does not prioritize human welfare because, broadly speaking, the heads of corporations are not incentivized by mental health or human wellness. At present, there is roughly one safety researcher for every 30 tech builders, which is insufficient. Because academic researchers can no longer keep up with the financial requirements of building A(G)I, all safety researchers now work at for-profit companies. And while the CEOs of these businesses do not express as much worry about the safety of their technology, the builders do.[22]

One of the most terrifying things about this juncture is that our most potent innovation is arriving exactly when our collective values are most confused. We need an equally strong push for a philosophy of the future if we are to have a chance at guiding innovation well. Such a philosophy can be rooted in an objective, scientific understanding of truth and in the primacy of subjective experience - two wings of a bird for a society that transcends many of its previous mistakes.

As humans, it is our duty to pay attention to what is happening and provide input about what we want our future to look like. It is not impossible to imagine an A(G)I designed to support our growth by offering alternative perspectives and to neutralize our negativity bias with more emphasis on stimulating positive emotions. Increased safety measures can be implemented and the speed of public deployment slowed. Other possibilities include opening the conversation between tech builders and people from other disciplines, including experts in mental health. We can prioritize human wellness by integrating a true understanding of happiness and suffering, informed by neuroscience, into the tools we build.

At this crossroads in history, when it perhaps matters most, we can truly earn our name Sapiens - the wise humans. We are not just an intelligent species; we have the capacity for wisdom and discernment. The most fulfilling way to honor this gift of human life is to improve the human condition for all, externally and internally. Our subjective well-being cannot be dismissed, as it is what each of us experiences as the quality of our existence. What is the point of developing an artificial intelligence if its design is not beyond the imperfections of the human mind but merely a reflection of its darkest kind?

 

Bibliography

Brittain, Blake, Reuters, Getty Images Lawsuit Says Stability AI Misused Photos to Train AI (6 February 2023) https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/

Center for Humane Technology, Attention & Mental Health (2022) https://www.humanetech.com/attention-mental-health [accessed 1 May, 2023]

Darwin, Charles, The Descent of Man, and Selection in Relation to Sex, Volume 1, 1st edn. (London, John Murray, 1871) accessed online http://darwin-online.org.uk/content/frameset?pageseq=1&itemID=F937.1&viewtype=text#:~:text=Darwin%2C%20C.%20R.%201871.,London%3A%20John%20Murray

Haidt, Jonathan, After Babel, Social Media is a Major Cause of the Mental Illness Epidemic in Teen Girls. Here’s the Evidence (2023) https://jonathanhaidt.substack.com/p/social-media-mental-illness-epidemic

Haidt, Jonathan, and Allen, Nick. Scrutinizing the effects of digital technology on mental health, Nature, 578 (2020), 226-227, https://doi.org/10.1038/d41586-020-00296-x

Haidt, Jonathan, and Twenge, Jean (ongoing). Adolescent mood disorders since 2010: A collaborative review. Unpublished manuscript, New York University, First posted: Feb 18, 2019. Last updated May 15, 2023, Accessed at: https://tinyurl.com/TeenMentalHealthReview

Haidt, Jonathan, and Twenge, Jean (ongoing). Social media and mental health: A collaborative review. Unpublished manuscript, New York University. First posted: Feb 7, 2019. Last updated May 1, 2023, accessed at: https://tinyurl.com/SocialMediaMentalHealthReview

Hanson, Rick, Hardwiring Happiness: The New Brain Science of Contentment, Calm, and Confidence, 1st edn. (New York, NY, Harmony Books, 2013) 

Harari, Yuval Noah, Sapiens: A Brief History of Humankind, 1st U.S. edn. (New York, NY, HarperCollins Publishers, 2015)

Harari, Yuval Noah, Homo Deus: A Brief History of Tomorrow, 1st U.S. edn. (New York, NY, HarperCollins Publishers, 2017) 

Harlow, Harry F., Dodsworth, Robert O., and Harlow, Margaret K., Total social isolation in monkeys, Proceedings of the National Academy of Sciences of the United States of America, 54 (1965), 90-96, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC285801/pdf/pnas00159-0105.pdf

Harris, Tristan, and Raskin, Aza, The AI Dilemma, Your Undivided Attention, audio podcast, Center for Humane Technology (24 March 2023) https://www.humanetech.com/podcast/the-ai-dilemma 

Kurzweil, Ray, Tracking the acceleration of intelligence, The Law of Accelerating Returns (2001) https://www.kurzweilai.net/the-law-of-accelerating-returns

Loizzo, Joseph, Mindfulness and Compassion for Mental Health and Well-Being, Nalanda Institute for Contemplative Science, Contemplative Psychotherapy Program (2017)

Malhotra, Madhav, Effective Altruism Forum, Summary: The Case for Halting AI Development - Max Tegmark on the Lex Fridman Podcast (2023) https://forum.effectivealtruism.org/posts/akbwyBioGBd68CsNx/summary-the-case-for-halting-ai-development-max-tegmark-on

Moore, Gordon, E., Cramming More Components onto Integrated Circuits, Electronics Magazine, 38.8 (19 April 1965) accessed online https://hasler.ece.gatech.edu/Published_papers/Technology_overview/gordon_moore_1965_article.pdf

Musk, Elon, and Neuralink, An integrated brain-machine interface platform with thousands of channels, bioRxiv, advance online publication (2 August 2019) https://doi.org/10.1101/703801

Presti, David E. Foundational Concepts in Neuroscience: A Brain-Mind Odyssey, 1st edn. (New York, NY, W.W. Norton & Company, 2015)

Shakya, Holly B., and Christakis, Nicholas A., Association of Facebook Use With Compromised Well-Being: A Longitudinal Study, American Journal of Epidemiology, 185 (2017), 203-211, https://doi.org/10.1093/aje/kww189

Skinner, B.F., About Behaviorism, 1st edn. (New York, NY, Alfred A. Knopf, 1974)

Snipes, Dawn-Elise, DMN and the Amygdala in Neuropsychiatric Issues, Counselor Toolbox Podcast with DocSnipes (2021) https://www.allceus.com/podcast/dmn-and-the-amygdala-in-neuropsychiatric-issues/

Tegmark, Max, Life 3.0: Being Human in the Age of Artificial Intelligence, 1st edn. (New York, NY, Alfred A. Knopf, 2017)

Tegmark, Max, The Case for Halting AI Development, Lex Fridman Podcast (2023) https://lexfridman.com/max-tegmark-3/

Thaler, Richard H., and Sunstein, Cass R., Nudge: Improving Decisions About Health, Wealth, and Happiness, 1st edn. (New York, NY, Penguin Books, 2009)

The Matrix, dir. Wachowski, Lana and Wachowski, Lilly (Warner Bros., 1999)

The Social Dilemma, dir. Jeff Orlowski-Yang (Exposure Labs, 2020), online film recording, Netflix, https://www.netflix.com/gr-en/title/81254224

Watson, James D., and Wilson, Edward O., Moderated by Krulwich, Robert, Looking Forward: A Conversation with James D. Watson and Edward O. Wilson, Harvard Museum of Natural History (9 September 2009) https://hmnh.harvard.edu/file/284861

Footnotes

  1. ^

    Tristan Harris and Aza Raskin, The AI Dilemma, Your Undivided Attention, audio podcast, Center for Humane Technology, 24 March 2023 <https://www.humanetech.com/podcast/the-ai-dilemma> [accessed 1 April 2023]

  2. ^

    Elon Musk, Neuralink, An Integrated brain-machine interface platform with thousands of channels, bioRxiv, Advance online publication (2 August 2019) https://doi.org/10.1101/703801; Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow, 1st U.S. edn. (New York, NY, HarperCollins Publishers, 2017) 

  3. ^

    Gordon E. Moore, Cramming More Components onto Integrated Circuits, Electronics Magazine, 38.8 (19 April 1965) accessed online https://hasler.ece.gatech.edu/Published_papers/Technology_overview/gordon_moore_1965_article.pdf

  4. ^

    Ray Kurzweil, Tracking the acceleration of intelligence, The Law of Accelerating Returns (2001) https://www.kurzweilai.net/the-law-of-accelerating-returns

  5. ^

James D. Watson and Edward O. Wilson, Moderated by Robert Krulwich, Looking Forward: A Conversation with James D. Watson and Edward O. Wilson, Harvard Museum of Natural History (9 September 2009) https://hmnh.harvard.edu/file/284861

  6. ^

    Rick Hanson, Hardwiring Happiness: The New Brain Science of Contentment, Calm, and Confidence, 1st edn. (New York, NY, Harmony Books, 2013) 

  7. ^

    Joseph Loizzo, Mindfulness and Compassion for Mental Health and Well-Being, Nalanda Institute for Contemplative Science, Contemplative Psychotherapy Program (2017)

  8. ^

    Dawn-Elise Snipes, DMN and the Amygdala in Neuropsychiatric Issues, Counselor Toolbox Podcast with DocSnipes (2021) https://www.allceus.com/podcast/dmn-and-the-amygdala-in-neuropsychiatric-issues/

  9. ^

    ibid.

  10. ^

    ibid.

  11. ^

     Charles Darwin, The Descent of Man, and Selection in Relation to Sex, Volume 1, 1st edn. (London, John Murray, 1871) accessed online http://darwin-online.org.uk/content/frameset?pageseq=1&itemID=F937.1&viewtype=text#:~:text=Darwin%2C%20C.%20R.%201871.,London%3A%20John%20Murray

  12. ^

The Social Dilemma, dir. Jeff Orlowski-Yang (Exposure Labs, 2020), online film recording, Netflix, https://www.netflix.com/gr-en/title/81254224

  13. ^

    B. F. Skinner, About Behaviorism, 1st edn. (New York, NY, Alfred A. Knopf, 1974)

  14. ^

    Jonathan Haidt and Jean Twenge (ongoing). Adolescent mood disorders since 2010: A collaborative review. Unpublished manuscript, New York University, First posted: Feb 18, 2019. Last updated May 15, 2023, Accessed at: https://tinyurl.com/TeenMentalHealthReview

  15. ^

    Jonathan Haidt, After Babel, Social Media is a Major Cause of the Mental Illness Epidemic in Teen Girls. Here’s the Evidence (2023) https://jonathanhaidt.substack.com/p/social-media-mental-illness-epidemic

  16. ^

    Snipes, DMN and the Amygdala in Neuropsychiatric Issues

  17. ^

    Tristan Harris and Aza Raskin, The AI Dilemma

  18. ^

    Blake Brittain, Reuters, Getty Images Lawsuit Says Stability AI Misused Photos to Train AI (6 February 2023) https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/

  19. ^

    David E. Presti, Foundational Concepts in Neuroscience: A Brain-Mind Odyssey, 1st edn. (New York, NY, W.W. Norton & Company, 2015). Hebb’s original finding in 1949 has been replicated many times and is accepted as a foundational concept in neuroscience, as Presti explains.

  20. ^

    Harry F. Harlow, Robert O. Dodsworth, and Margaret K. Harlow, Total social isolation in monkeys. Proceedings of the National Academy of Sciences of the United States of America. 54 (1965), 90-96, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC285801/pdf/pnas00159-0105.pdf

  21. ^

    The Matrix, dir. Lana Wachowski and Lilly Wachowski (Warner Bros., 1999)

  22. ^

    Tristan Harris and Aza Raskin, The AI Dilemma
