See Section 5.6, "The Limit of the Predictability of Scaling Behavior", of the "Broken Neural Scaling Laws" paper: https://arxiv.org/abs/2210.14891v4
Zach Stein-Perlman
Nov 17, 2022
(I think this model is nonsense: using an uninformative prior on the number of insights required throws away almost all information about AI progress. And for semi-informative priors, Davidson's report seems like a better starting place.)
AI forecasting research ideas from Epoch
AI Forecasting Research Ideas
Overview
Projects
Extrapolating GPT-N performance (Difficulty: Medium)
Qualitatively analysing language model / image generation improvements since ~2000 (Difficulty: Easy)
Do AI researchers train models using scaling laws? (Difficulty: Medium)
Revisiting ‘Is AI Progress Impossible To Predict?’ (Difficulty: Hard)
Algorithmic breakthroughs in machine learning history (Difficulty: Medium)
Paradigm changes in AI (Difficulty: Medium)
Study training run lengths (Difficulty: Easy)
AI development vignettes (Difficulty: Hard)
Profiler to measure compute (Difficulty: Hard)
Insights-based models of AI timelines (Difficulty: Medium)
Brain emulation development (Difficulty: Medium)
Rethinking the evolutionary anchor (Difficulty: Hard)
Investigate trends in memory bandwidth, latency, and price of memory (Difficulty: Easy)
Investigating the parameter gap (Difficulty: Medium)
Investigate possible paradigm shifts in hardware (Difficulty: Medium)
How much can we scale up production in the compute supply chain? (Difficulty: Hard)
Will the price of compute go down if hardware performance stops improving? (Difficulty: Hard)
What has been the share of any chip in a given year of total available compute performance? (Difficulty: Easy)