Below are a few polls which I've considered running as debate weeks, but thought better of (for now at least).
Timelines
I didn't run this as a debate week because I figured that the debate slider tool isn't the ideal way to map out a forecast.
However, I still think it's an interesting temperature check to run on the community, especially with the publication of AI 2027. For the purposes of this poll, we can use the criteria from this Metaculus question.
Also, it's no crime to vote based on vibes, leave a comment, and change your mind later.
Bioweapons
Obviously, bioweapons pose a catastrophic risk. But can they be existential? I buy the Parfitian argument that we should disvalue extinction far more than catastrophe (and this extends somewhat to other states close in value to extinction); see the sketch after the definitions below. But I'm unsure how seriously I should take bio-risks compared to other putative existential risks.
Definitions:
- Bioweapons: I'm thinking of engineered pathogens in particular.
- Existential risk: a risk of existential catastrophe, where an existential catastrophe is "an event which causes the loss of a large fraction of expected value".
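To make the Parfitian point concrete, here's a toy comparison in the style of Parfit's three outcomes (peace, a war killing 99% of people, a war killing 100%). The symbols $V_{\text{now}}$ (the value embodied in the present generation) and $V_{\text{future}}$ (the expected value of the long-run future) are my own labels for the illustration, and I assume humanity eventually recovers in the 99% case:

$$
\underbrace{\Delta_{1 \to 2}}_{\text{peace} \,\to\, 99\%\ \text{killed}} \approx 0.99\, V_{\text{now}}, \qquad \underbrace{\Delta_{2 \to 3}}_{99\% \,\to\, 100\%\ \text{killed}} \approx 0.01\, V_{\text{now}} + V_{\text{future}}
$$

If $V_{\text{future}} \gg V_{\text{now}}$, the second step destroys far more expected value despite killing far fewer people, which is why extinction (and states near it in value) gets weighted so heavily under the definition above.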
Strong longtermism
I wonder where people land on this now that we talk about longtermism less. As a reminder, strong longtermism is the view that "the most important feature of our actions today is their impact on the far future".
A summary of Greaves and MacAskill's paper on the view is here.
I think strong longtermism is about 20% likely to be true, based on the model I made.