1. Are you referring to your exchange with David Mathers here?
3. I'm not sure what you're saying here. Just to clarify my point: you argue in the post that the slow scenario actually describes big improvements in AI capabilities. My counterpoint is that respondents don't give this scenario much weight, which suggests they mostly don't agree with you on this.
Thanks for the replies!
I forgot to include the text of the question in my post. I just added it now.
I think it would also be fair to include the disclaimer in the question I quoted above.
There are several capabilities mentioned in the slow progress scenario that seem indicative of AGI or something close, such as the ability of AI systems to largely automate various kinds of labour (e.g. research assistant, software engineer, customer service, novelist, musician, personal assistant, travel agent)
I would read the scenario as AI being able to do some of the tasks required by these jobs, but not to fully replace humans doing them, which I would think is the defining characteristic of slow AI progress scenarios.
I’m confused by this, for a few reasons:
Aside from these methodological points, I'm also surprised that you believe the slow scenario constitutes AI that is "either nearly AGI or AGI outright". Out of curiosity, which capability mentioned in the "slow" scenario do you think is the most implausible by 2030? To me, most of these seem pretty close to what we already have in 2025.
[disclaimer: I recommended a major grant to FRI this year, and I’ve discussed LEAP with them several times]
Thanks, that's a helpful clarification! "Allowed" still feels like a strong choice of words, but I can see that the line between that and "I'm not sure if this will be perceived as annoying" is blurry, and also, the latter feels frustrating enough.
I'm only speaking in personal capacity here, but my strong preference would always be for these questions to be raised!
I'm not sure if I'm allowed to ask this [...].
Maybe you don't mean this literally, but I find this really sad and kind of horrifying. Who do you think wouldn't allow you to ask this question, and why?
Disagree-voted. I think there are issues with the Neglectedness heuristic, but I don’t think the N in ITN is fully captured by I and T.
For example, here is one possible rephrasing of ITN (certainly not covering all the ways in which it is used):
I think this is a great way to decompose some decision problems. For instance, it seems very useful for thinking about prioritizing research, because (3) helps you answer the important question "If I don’t solve P, will someone else?" (even if this is also affected by (2)).
(Edited: originally I put the question "If I don’t solve P, will someone else?" under (3), which was a bit sloppy.)
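For concreteness, the kind of factorization I have in mind is roughly the standard 80,000 Hours one (a sketch only; the (1)/(2)/(3) labels are my shorthand for importance, tractability, and neglectedness, and this isn't the only way to carve things up):

$$
\frac{\text{good done}}{\text{extra resources}}
=
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{(1) importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{(2) tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{(3) neglectedness}}
$$

On this reading, factor (3) depends on how many resources are already going into the problem, which is exactly the part of N that I don't think gets absorbed into I and T.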
I think it’s borderline whether reports of this type are forecasting as commonly understood, but would personally lean no in the specific cases you mention (except maybe the bio anchors report).
I really don’t think that this intuition is driven by the amount of time or effort that went into them, but rather by the percentage of intellectual labor that went into something like “quantifying uncertainty” (rather than, e.g., establishing empirical facts, reviewing the literature, or analyzing the structure of commonly-made arguments).
As for our grantmaking program: I expect we’ll have a more detailed description of what we want to cover later this year, where we might also address points about the boundaries of worldview investigations.
Hi Dan,
Thanks for writing this! Some (weakly-held) points of skepticism:
*OTOH, I think Eli is also hinting at a definition of forecasting that is too narrow. I do think that generating models/rationales is part of forecasting as it is commonly understood (including in EA circles), and certainly don't agree that forecasting by definition means that little effort was put into it!
Maybe the right place to draw the line between forecasting rationales and “just general research” is to ask: "Is the model/rationale for the most part tightly linked to the numerical forecast?" If yes, it's forecasting; if not, it's something else.
I don't think this is an accurate summary of the disagreement, but I've tried to clarify my point twice already, so I'm going to leave it at that.