Here's Nick Bostrom briefly introducing the argument.
From what I've read, the doomsday argument, presented by analogy, goes as follows:
Imagine there are two urns in front of you, one containing 10 balls and the other containing 1 million balls. You don't know which urn is which. The balls are numbered, and upon blindly picking a ball numbered "7", you reason (correctly) that you most likely drew it from the 10-ball urn. The doomsday argument posits this: when thinking about whether the future will be long (e.g. long enough for 10^32 humans to exist) or relatively short (say, long enough for 200 billion humans), we should think of our own birth rank (you're roughly the 100 billionth human ever born) the way we think about drawing ball number 7. In other words, as the 100 billionth human you're more likely to be in a total population of 200 billion humans than in one of 10^32 humans, and this should be treated as evidence for adjusting our prior expectations about how long the future will be.
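For concreteness, here's a minimal sketch of the Bayes update the analogy is pointing at, assuming a 50/50 prior over the two hypotheses and assuming that every ball number (or birth rank) compatible with a hypothesis is equally likely under it (the self-sampling step the argument relies on):

```python
# Sketch of the update behind the urn analogy (assumes a 50/50 prior and that
# each possible ball number / birth rank is equally likely under a hypothesis).

def posterior_small(prior_small, n_small, n_large):
    """Posterior of the 'small' hypothesis after observing a draw (or birth
    rank) that is compatible with both hypotheses."""
    like_small = prior_small * (1 / n_small)        # P(this draw | small)
    like_large = (1 - prior_small) * (1 / n_large)  # P(this draw | large)
    return like_small / (like_small + like_large)

# Urn version: ball number 7 fits both urns, but points strongly to the 10-ball one.
print(posterior_small(0.5, 10, 10**6))            # ~0.99999

# Doomsday version: birth rank ~100 billion fits both futures.
print(posterior_small(0.5, 200 * 10**9, 10**32))  # ~1.0, a huge shift toward the short future
```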
I've found few discussions of this in EA fora, so I'm curious to hear what you all think about the argument. Does it warrant thinking differently about the long-term future?
With low confidence, I think I agree with this framing.
If that's right, then I think the point is that finding ourselves at an 'early point in history' updates us against a big future, while the fact that we exist at all updates us in favour of a big future, and the two updates cancel out.
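A quick numerical sketch of that cancellation, assuming the 'we exist at all' update is modelled as proportional to the number of observers each hypothesis contains (the SIA-style move), and reusing the numbers from the original post:

```python
# Sketch of the 'two updates cancel' point. The existence update is modelled
# SIA-style (weight each hypothesis by its number of observers); the birth-rank
# update is the 1/N self-sampling factor from the doomsday argument.

def normalise(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

prior = {"short (2e11 humans)": 0.5, "long (1e32 humans)": 0.5}
observers = {"short (2e11 humans)": 2e11, "long (1e32 humans)": 1e32}

# Update 1: existing at all favours hypotheses containing more observers.
after_existence = normalise({h: prior[h] * observers[h] for h in prior})

# Update 2: an early birth rank penalises hypotheses containing more observers.
after_birth_rank = normalise({h: after_existence[h] / observers[h] for h in prior})

print(after_existence)   # almost entirely 'long'
print(after_birth_rank)  # back to 50/50: the two updates cancel exactly
```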
In the "voice of God" example, we're guaranteed to minimize error by applying this reasoning; i.e., if God asks this question to every possible human created, and they all answer this way, most of them will be right.
Now, I'm really unsure about the following, but imagine each new human predicts Doomsday through DA reasoning; in that case I'm not sure the reasoning minimizes error in the same way. We often assume the human population will grow exponentially and then suddenly go extinct; but in that scenario it seems like most people will end up mistaken in their predictions. Maybe we're using the wrong priors?
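To make that worry concrete, here's a toy sketch (the growth rate, the number of generations, and both readings of the prediction are invented for illustration): the population grows exponentially and then abruptly goes extinct, and we count how many people's DA-style predictions come out true, once read as a claim about birth ranks and once as a claim about calendar time.

```python
# Toy model: exponential growth followed by sudden extinction. All numbers and
# both readings of the DA prediction below are made-up modelling choices.

GROWTH = 1.1        # births multiply by this factor each generation
GENERATIONS = 100   # extinction happens right after the last generation

births = [round(1000 * GROWTH**g) for g in range(GENERATIONS)]
total = sum(births)  # everyone who ever lives in this toy world

# Reading 1 (birth ranks): "total humans ever < 20 x my birth rank", the usual
# 95% DA bound. It holds for the last ~95% of ranks whatever the growth curve,
# so about 95% of people are right by construction.
rank_correct = total - total // 20

# Reading 2 (calendar time): "extinction comes within two generations of mine".
# Only people born in the last three generations are right under this reading.
time_correct = sum(births[-3:])

print(f"rank reading: {rank_correct / total:.1%} of people correct")
print(f"time reading: {time_correct / total:.1%} of people correct")
```

In this toy setup the rank reading comes out right for about 95% of people regardless of the growth curve, while the calendar-time reading is wrong for roughly three quarters of them; that gap between the two readings may be what the intuition above is picking up on.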