
Advanced AI systems can potentially perceive, decide, and act much faster than humans can -- perhaps many orders of magnitude faster. Given that we're used to intelligent agents all operating at about human speed, the effects of this 'speed mismatch' could be quite startling and counter-intuitive. An advanced AI might out-pace human actions and reactions somewhat as a 'speedster' superhero (e.g. the Flash, Quicksilver) out-paces normal humans, or as some fictional characters can 'stop time' and move around as if everyone else were frozen in place (e.g. in Nicholson Baker's 1994 novel 'The Fermata').

Are there any more realistic depictions of this potential AI/human speed mismatch in nonfiction articles or books, or in science fiction stories, movies, or TV series -- especially ones that explore the risks and downsides of the mismatch?



4 Answers

I personally find video game speedrunning a pretty useful intuition pump for what it might look like for an AI to do things in the real world. Seeing the skill ceiling in games has helped me calibrate on how crazy things could get with much faster-thinking and faster-acting artificial intelligence.

Habryka -- nice point. 

Example: a speedrun of 'Ultimate Doom'.

This isn't quite what you're looking for, because it's more a partial analogy for the phenomenon you point to than a realistic depiction, but FWIW I found this old short story by Eliezer Yudkowsky quite memorable.

PS: A few good examples I can think of off the top of my head (although they're not particularly realistic in relation to current AI tech):

  • The space battle scenes in the Culture science fiction novels by Iain M. Banks, in which the ship 'Minds' (super-advanced AIs) fight so fast, mostly using beam weapons, that battles are typically over in a few seconds, long before their human crews have any idea what's happening. https://spacebattles-factions-database.fandom.com/wiki/Minds
  • The scene in Avengers: Age of Ultron in which Ultron wakes up, learns human history, defeats Jarvis, escapes into the Internet, and starts manufacturing robot copies of itself within a few seconds.
  • The scenes in the Mandalorian TV series where the IG-11 combat robot is much faster than the humanoid stormtroopers.

"The Bobiverse" series is lighthearted and generally techno-optimistic, but does portray this in a way that seems accurate to me.

Erin -- thanks; this looks interesting; I hadn't heard of this science fiction series before.

https://bobiverse.fandom.com/wiki/We_Are_Legion_(We_Are_Bob)_Wiki

3 Comments

Thanks for the very useful link. I hadn't read that before. 

I like the intuition pump that if advanced AI systems are running at about 10 million times human cognitive speed, then one year of human history equals 10 million years of AI experience.

Yup! Alternatively: we’re working with silicon chips that are 10,000,000× faster than the brain, so we can get a 100× speedup even if we’re a whopping 100,000× less skillful at parallelizing brain algorithms than the brain itself.
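
For concreteness, here's a minimal Python sketch of both back-of-the-envelope calculations above. The 10,000,000× and 100,000× factors are the commenters' illustrative assumptions, not measured values:

```python
# Toy arithmetic for the AI/human speed mismatch discussed above.
# All factors are illustrative assumptions from the comments, not measurements.

SPEEDUP = 10_000_000  # assumed AI cognitive speed relative to a human

# One year of human (wall-clock) history, as experienced by the AI:
human_years = 1
subjective_ai_years = human_years * SPEEDUP
print(f"{human_years} human year ≈ {subjective_ai_years:,} subjective AI years")
# -> 1 human year ≈ 10,000,000 subjective AI years

# The alternative framing: chips assumed ~10,000,000x faster than biological
# neurons, divided by an assumed 100,000x penalty for being far worse at
# parallelizing brain-like algorithms, still leaves a large net speedup.
chip_speed_ratio = 10_000_000
parallelization_penalty = 100_000
net_speedup = chip_speed_ratio / parallelization_penalty
print(f"Net speedup ≈ {net_speedup:,.0f}x")  # -> Net speedup ≈ 100x
```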
