Since this shift in focus stems from short AGI timelines, I suggest that 80,000 Hours make a more specific precommitment to return to its original focus if these timelines are falsified.
For example: "If annual AI investment falls below $X billion, or AI progress slows enough that models fail to achieve benchmark Y by 2030, we precommit to shifting out of 'emergency AGI response mode' and returning to a broader EA focus."
(Your last section suggests that 80,000 Hours is already thinking about this, so I'm mostly reinforcing that those plans be specific and shared publicly.)
This would reassure EAs who prioritize non-AI causes that the current urgency around AGI won't lead to AI safety irreversibly taking over the broader EA ecosystem.