BraveNematode


Comments

While I agree with many of the object-level criticisms of the various priors, which seem out of touch with the current state of ML, I would instead like to make precise an obvious flaw in the paper's methodology, one that has been pointed out several times and that you seem unjustifiably dismissive of.

tl;dr: when doing Bayesian inference, it is crucial to keep in mind that, regardless of how certain your priors are, the more conditional steps your model involves, the less credence you should give to the overall prediction.

For the case at hand, it is very natural to assign, instead of a single number, a distribution over time for when transformative AGI will be reached.

You will then find that as you dissect the prediction into more individual prior guesses, the mean of the overall prediction tends to go down, whereas its variance tends to go up (the case of normal distributions is very instructive here).
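To spell out what I mean (a back-of-the-envelope sketch of my own, assuming the individual guesses are independent): write the overall estimate as a product of per-step probabilities,

$$P \;=\; \prod_{i=1}^{n} p_i, \qquad \log P \;=\; \sum_{i=1}^{n} \log p_i .$$

If each $\log p_i$ is roughly normal with a fixed variance $s^2$, then $\log P$ is roughly normal with variance $n s^2$: the point estimate $\prod_i \mathbb{E}[p_i]$ shrinks with every extra factor, while the uncertainty around it grows linearly in the number of steps.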

So, in general, when dissecting a probability estimate into atomic guesses as you did, you should be aware that with enough steps you can make the variance of the overall prediction grow without bound while keeping the variance of each of your individual priors fixed.
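Here is a quick numerical illustration of that point (my own sketch with made-up per-step numbers, not anything taken from the paper): draw each per-step probability from a Beta distribution with a fixed mean and spread, multiply, and watch what happens to the product as the number of steps grows.

```python
# Monte Carlo sketch: product of independent, uncertain per-step probabilities.
# Each factor has the same mean and variance regardless of the number of steps.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

def product_estimate(n_steps, mean=0.6, concentration=20.0):
    """Sample the product of n_steps independent per-step probabilities,
    each drawn from Beta(mean * concentration, (1 - mean) * concentration)."""
    a = mean * concentration
    b = (1.0 - mean) * concentration
    factors = rng.beta(a, b, size=(n_samples, n_steps))
    return factors.prod(axis=1)

for n_steps in (2, 5, 10):
    p = product_estimate(n_steps)
    print(
        f"{n_steps:2d} steps: mean={p.mean():.4f}  "
        f"std(log10 p)={np.log10(p).std():.2f}  "
        f"90% interval=({np.quantile(p, 0.05):.4f}, {np.quantile(p, 0.95):.4f})"
    )
```

With these illustrative numbers, the mean of the product drops roughly geometrically with each added factor, while the spread of the product on a log scale keeps widening, because the per-step uncertainties compound rather than cancel.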

Regardless of how confident you are in your priors, you should be quite skeptical of the overall <1% estimate, as it most likely fails to account for this variance.