Hi,
As a disclaimer, this will not be as eloquent or well-informed as most of the other posts on this forum. I’m something of an EA lurker who has a casual interest in philosophy but is wildly out of her intellectual depth on this forum 90% of the time. I’m also somewhat prone to existential anxiety and have a tendency to become hyper-fixated on certain topics - and recently had the misfortune of falling down the AI safety internet rabbit hole.
It all started when I used ChatGPT for the first time and began to worry that I might lose my (content writing) job to a chatbot. My company then convened a meeting where they reassured us all that, despite recent advances in AI, they would continue taking a human-led approach to content creation ‘for now’ (which wasn’t as comforting as they probably intended).
In a move I now somewhat regret, I decided my best bet would be to find out as much about the topic as I could. This was around the time that Geoffrey Hinton stepped down from Google, so the first thing I encountered was one of his media appearances. This quickly updated me from ‘what if AI takes my job’ to ‘what if AI kills me’. I was vaguely familiar with the existential risk from AI scenarios already, but had considered them far enough off in the future to not really worry about.
In looking for less bleak perspectives than Hinton’s, I managed to find the exact opposite (i.e. that Bankless episode with Eliezer Yudkowsky). From there I was introduced to a whole cast of similarly pessimistic AI researchers predicting the imminent extinction of humanity with all the confidence of fundamentalist Christians awaiting the rapture (I’m sure I don’t have to name them here - also, I apologise if any of you reading this are the aforementioned researchers; I don’t mean this to be disparaging in any way - this was just my first impression as one of the uninitiated).
I’ll be honest and say that I initially thought I’d stumbled across some kind of doomsday cult. I assumed there must be some more moderate expert consensus that the more extreme doomers were diverging from. I spent a good month hunting for the well-established body of evidence projecting a more mundane, steady improvement of technology, where everything in 10 years would be kinda like now but with more sophisticated LLMs and an untold amount of AI-generated spam clogging up the internet. Hours spent scanning think-pieces and news reports for the magic words ‘while a minority of researchers expect worst-case scenarios, most experts believe…’. But ‘most experts’ were nowhere to be found.
The closest I could find to a reasonably large sample size was that 2022 (?) survey that gave rise to the much-repeated statistic about half of ML researchers placing a >10% chance on extinction from AI. If anything, that survey seemed reassuring, because the median probability was somewhere around 5%, as opposed to the >50% estimated by the most prominent safety experts. There was also the recent XPT forecasting contest, which, again, produced generally low p(doom) estimates and seemed to leave most people quibbling over the fact that domain experts were assigning single-digit probabilities to AI extinction, while superforecasters thought the odds were below 1%. I couldn’t help but think that these seemed like strange differences of opinion to focus on, when you don’t need to look far to find seasoned experts who are convinced that AI doom is all but inevitable within the next few years.
I now find myself in a place where I spend every free second scouring the internet for the AGI timelines and p(doom) estimates of anyone who sounds vaguely credible. I’m not ashamed to admit that this involves a lot of skim-reading, since I, a humble English lit grad, am simply not smart enough to comprehend most of the technical or philosophical details. I’ve filled my brain with countless long-form podcasts, forum posts and Twitter threads explaining that, for reasons I don’t understand, I and everyone I care about will die in the next 3 years. Or the next 10. Or sometime in the late 2030s. Or that there actually isn’t anything to worry about at all. It’s like having received diagnoses from about 30 different doctors.
At this point, I have no idea what to believe. I don’t know if this is a case of the doomiest voices being the loudest, while the world is actually populated with academics, programmers and researchers who form the silent, unconcerned majority - or whether we genuinely are all screwed. And I don’t know how to cope psychologically with not knowing which world we’re in. Nor can I speak to any of my friends or family about it, because they think the whole thing is ridiculous, and I’ve put myself in something of a boy-who-cried-wolf situation by getting myself worked up over a whole host of worst-case scenarios over the years.
Even if we are all in acute danger, I’m paralysed by the thought that I really can’t do anything about it. I’m pretty sure I’m not going to solve the alignment problem using my GCSE maths and the basic HTML I taught myself so I could customise my Tumblr blog when I was 15. Nor do I have the social capital or media skills to become some kind of everywoman tech Cassandra warning people about the coming apocalypse. Believing that we’re (maybe) all on death’s door is also making it extremely hard to motivate myself to make any longer-term changes in my own life, like saving money, sorting out my less-than-optimal mental health or finding a job I actually like.
So I’m making this appeal to the more intelligent and well-informed - how do you cope with life through the AI looking glass? Just how worried are you? And if you place a significant probability on the death of literally everyone in the near future, how does that impact your everyday life?
Thanks for reading!
I think it's great that you're asking for support rather than facing existential anxiety alone, and I'm sorry that you don't seem to have people in your life who will take your worries seriously and talk through them with you. And I'm sure everyone responding here means well and wants the best for you, but joining the Forum has filtered us—whether for our worldviews, our interests, or our susceptibility to certain arguments. If we're here for reasons other than AI, then we probably don't mind talk of doom or are at least too conflict-averse to continually barge into others' AI discussions.
So I would caution you that asking this question here is at least a bit like walking into a Bible study and asking for help from people more righteous than you in clarifying your thinking about God because you're in doubt and perseverating on thoughts of Hell. You don't have to listen to any of us, and you wouldn't have to even if we really were all smarter than you.
You point out the XPT forecasts. I think that's a great place to start. It's hard to argue that a non-expert ought to defer more to AI-safety researchers than to either the superforecasters or the expert group. Having heard from XPT participants, I don't think the difference between them and people more pessimistic about AI risk comes down to facility with technical or philosophical details. This matches my experience reading deeply into existential risk debates over the years: they don't know anything I don't. They mostly find different lines of argument more or less persuasive than I do.
I don't have first-hand advice to give on living with existential anxiety. I think the most important thing is to take care of yourself, even if you do end up settling on AI safety as your top priority. A good therapist might have helpful ideas regarding rumination and feelings of helplessness, which aren't required responses to any beliefs about existential risk.
I'm glad to respond to comments here, but please feel free to reach out privately as well. (That goes for anyone with similar thoughts who wants to talk to someone familiar with AI discussions but unpersuaded about risk.)
"would caution you that asking this question here is at least a bit like walking into a Bible study and asking for help from people more righteous than you in clarifying your thinking about God because you're in doubt and perseverating on thoughts of Hell. You don't have to listen to any of us, and you wouldn't have to even if we really were all smarter than you."
This is #wisdom. Love it.