
Thanks a lot for this detailed answer. The points where superforecasters have demonstrably been wrong on AI-related questions are especially interesting, and they are certainly a relevant argument against updating too far in their direction. Some kind of track record of predictions by superforecasters, experts, and public figures would be extremely valuable. Do you know whether something like this exists anywhere?

To push back a bit against the claim that it is hard to find a good reference class and that superforecasters have to rely on vibes: yes, it might be hard, but aren't superforecasters precisely the people with a strong track record of finding good methodologies for making predictions, even when it's hard? AI extinction is hardly the only question where forecasting is tricky.

Sure, even a 0.15% probability seems scary by itself, though it might be low enough that you start wondering about trade-offs against delaying technological progress.
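As a rough illustration of how that trade-off could be framed, here is a minimal back-of-the-envelope sketch. Only the 0.15% figure comes from the survey; the delay cost and risk-reduction numbers are hypothetical placeholders, not estimates from any source:

```python
# Back-of-the-envelope sketch of the extinction-risk vs. delayed-progress
# trade-off. Only p_doom is taken from the survey; all other inputs are
# hypothetical placeholders for illustration.

p_doom = 0.0015                    # 0.15% extinction risk by 2100 (survey median)
value_of_future = 1.0              # normalize the value of a non-extinct future to 1
delay_cost = 0.0005                # hypothetical: value lost per year of delayed progress
risk_reduction_per_year = 0.0002   # hypothetical: how much a one-year pause cuts p_doom

# Expected value of pausing development for one year vs. not pausing:
ev_no_pause = (1 - p_doom) * value_of_future
ev_pause = (1 - (p_doom - risk_reduction_per_year)) * value_of_future - delay_cost

print(f"EV without pause: {ev_no_pause:.6f}")
print(f"EV with one-year pause: {ev_pause:.6f}")
print("Pause worthwhile under these assumptions:", ev_pause > ev_no_pause)
```

The point of the sketch is only that the answer flips depending on how the (highly uncertain) delay cost compares with the (equally uncertain) risk reduction, which is why the 0.15% figure alone does not settle the question.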

Apart from that, I would be interested in how people with a much higher P(doom) reconcile their beliefs with these numbers. Are there good reasons to believe that these numbers are not representative of superforecasters' actual beliefs? Or that superforecasters are somehow systematically wrong or untrustworthy on this issue?

Hello everyone,

I have a question for those in the community who focus on AI safety: what do you make of superforecasters seemingly often having a very low P(doom)?

For example, in this survey (https://metr.org/blog/2025-08-20-forecasting-impacts-of-ai-acceleration/), superforecasters give a median P(doom) of 0.15% by 2100. The number appears in the full write-up (https://docs.google.com/document/d/1QPvUlFG6-CrcZeXiv541pdt3oxNd2pTcBOOwEnSStRA/edit?usp=sharing), which is also linked in the blog post.

This is far below pretty much any figure prominent AI safety people cite, which typically range from 10% up to ~90%. Does this give you pause? If not, how do you explain the gap?


Hi everyone! It seems quite plausible to me that EA cannot indefinitely prevent itself from becoming a politically charged topic once it becomes more prominent in public awareness. What are the current ideas about how to handle this?