
If you're forecasting AI progress or asking someone about their timelines, what event should you focus on?

tl;dr it's messy and I don't have answers.

AGI, TAI, etc. are bad forecasting targets, mostly because they are vague or don't capture what we care about.

  • AGI = artificial general intelligence (no canonical operationalization)
    • This is vague/imprecise, and is used vaguely/imprecisely
    • We care about capability-level; we don't directly care about generality
    • Maybe we should be paying attention to specific AI capabilities, AI impacts, or conditions for AI catastrophe
  • TAI = transformative AI, originally defined as "AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution" (see also discussion here)
    • "a transition comparable to . . . the agricultural or industrial revolution" is vague, and I don't know what it looks like
    • "precipitates" is ambiguous. Suppose for illustration that AI in 2025 would take 10 years to cause a transition comparable to the industrial revolution (if there was no more AI progress, or no more AI progress by humans), but AI in 2026 would take 1 year. Then the transition is precipitated by the 2026-AI, but the 2025-AI was capable enough to precipitate a transition. Is the 2025-AI TAI? If so, "TAI" seems to miss what we care about. And regardless, whether a set of AI systems precipitates a transition comparable to the industrial revolution is determined not just by the capabilities and other properties of the systems, but also by other facts about the world, which is weird. Also note that some forecasters believe that current Al would be "eventually transformative" but future Al will be transformative faster, so under some definitions, they believe we already have TAI.
    • This is often used vaguely/imprecisely
  • HLAI = human-level AI (no canonical operationalization)
    • This is kinda vague/imprecise but can be operationalized pretty well, I think
    • This may come after the stuff we should pay attention to
  • HLMI = high-level machine intelligence, defined as "when unaided machines can accomplish every task better and more cheaply than human workers"
    • This will come after the stuff we should pay attention to
  • PONR = (AI-induced) point of no return, vaguely defined as "the day we AI risk reducers lose the ability to significantly reduce AI risk"
    • But we may lose that ability gradually rather than in a binary, threshold-y way
    • And forecasting PONR flows through forecasting narrower events

More: APS-AI, PASTA, prepotent AI, fractional automation of 2020 cognitive tasks, [three levels of transformativeness]; various operationalizations for predictions (e.g., Metaculus, Manifold, Samotsvety); and various definitions of AGI, TAI, and HLAI. Allan Dafoe uses "Advanced AI" to "gesture[] towards systems substantially more capable (and dangerous) than existing (2018) systems, without necessarily invoking specific generality capabilities or otherwise as implied by concepts such as 'Artificial General Intelligence' ('AGI')." Some people talk about particular visions of AI, such as CAIS, tech company singularity, and perhaps PASTA.

Some forecasting methods are well-suited for predicting particular kinds of conditions. For example, biological anchors most directly give information about time to humanlike capabilities. And "Could Advanced AI Drive Explosive Economic Growth?" uses economic considerations to give information about economic variables; it couldn't be adapted well for other kinds of predictions.
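To make that concrete, here is a toy, purely illustrative sketch of how an anchors-style extrapolation produces a date: assume a training-compute requirement for humanlike capabilities, project growth in the largest training runs, and find the crossover year. All numbers below are hypothetical placeholders, not the estimates from the actual biological anchors report.

```python
import math

# Toy biological-anchors-style extrapolation.
# All numbers are hypothetical placeholders for illustration only.
required_flop = 1e30        # assumed training compute needed for humanlike capabilities
current_flop = 1e26         # assumed compute of the largest training run in the base year
base_year = 2024
doubling_time_years = 1.0   # assumed doubling time for the largest training runs

# How many doublings until projected compute reaches the assumed requirement,
# and in what year that happens under the assumed doubling time.
doublings_needed = math.log2(required_flop / current_flop)
crossover_year = base_year + doublings_needed * doubling_time_years

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Projected crossover year: {crossover_year:.0f}")
```

The point is just that this kind of method outputs a date for "humanlike capabilities" under compute assumptions; it says little directly about, say, economic or societal impact.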

Operationalizations of things-like-AGI are ideally

  • useful or tracking something we care about
    • If you knew what specific capabilities would be a big deal, you could focus on predicting those capabilities
  • easy to forecast
  • maybe simple or concrete
  • maybe exclusively determined by the properties/capabilities of the AI, rather than also other facts about the world

If you're eliciting forecasts, e.g. in a survey, make sure respondents interpret your questions as intended. In particular, things you should clarify in timelines surveys (of a forecasting-sophisticated population like longtermist researchers, not the general public) are:

  • Whether the forecast is conditional on no catastrophe occurring before AGI (or whatever event you're asking about)
  • Whether you want respondents' independent impressions or their all-things-considered views
  • (Sometimes) whether you're asking when the event will actually happen or when it will first be feasible
  • (Sometimes) whether to counterfactually condition on progress not slowing due to increased safety concerns, interventions by the AI safety community, or something similar

Forecasting a particular threshold of AI capabilities may be asking the wrong question. To inform at least some interventions, "it may be more useful to know when various 'pre-TAI' capability levels would be reached, in what order, or how far apart from each other, rather than to know when TAI will be reached" (quoting Michael Aird). "We should think about the details of different AI capabilities that will emerge over time [...] and how those details will affect the actions we can profitably take" (quoting Ashwin Acharya).

This post draws on some research by and discussion with Michael Aird, Daniel Kokotajlo, Ashwin Acharya, and Matthijs Maas.

Comments (2)

Helpful post, Zach! I think it's more useful and concrete to focus on asking about specific capabilities instead of asking about AGI/TAI etc., and I'm pushing myself to ask such questions (e.g., when do you expect to have LLMs that can emulate Richard Feynman-level text?). Also, I like the generality vs. capability distinction. We already have a generalist (Gato) but we don't consider it to be an AGI (I think).

Your two main concerns seem to be that the terms are either vague or don't quite capture what we care about.

However, it seems that those issues might be insurmountable, given that we don't know the precise nature of the future AI that has the properties we worry about.
