I'm a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.
It sure doesn't look like Anthropic has lost. See https://polymarket.com/event/anthropic-500b-valuation-in-2026 for evidence.
One good prediction he made, in his 1986 book Engines of Creation, was that a global hypertext system would be available within a decade. Hardly anyone else in 1986 imagined that.
But he has almost entirely stopped trying to predict when technologies will be developed. Read him to imagine what technologies are possible, not when they will arrive.
I haven't given that a lot of thought. AI is likely to have its strongest effects further out. A year ago I was mainly betting, via SOFR futures, on interest rates going up around 2030, because I expected rates to go down in 2025-26. But now I'm guessing there's little difference between durations in how much rates will go up.
These ETFs seem better than leveraged ETFs, largely because of the costs that leveraged ETFs incur through their excessive trading.
I see multiple reasons why bonds are likely to be bad investments over the next few years:
Markets may be efficiently pricing a few of these risks, but I'm pretty sure they're underestimating AI.
I've been shorting T-bond futures (currently 6% of my net worth), and I'm likely to short more soon.
Oysters are significantly more nutrient-dense than beef, partly because we eat the whole oyster while ignoring the most nutritious parts of the cow. So $1 of oyster is roughly as beneficial as $1 of pasture-raised beef. Liver from grass-fed cows is likely better than bivalves, and eating it has almost no effect on how many cows are killed.
> most of the billions of people who know nothing about AI risks have a p(doom) of zero.
This seems pretty false. E.g. see this survey.
If increases in basic AI intelligence were fully halted at 2027 levels, economic growth would still accelerate to something comfortably above 5%/year, due to the continued adaptation of AI to areas such as robotics.
None of the pause proposals look like they could be 100% effective at stopping increases in AI intelligence.
I used to give pretty high priority to economic growth, but now that growth in excess of 10%/year looks close to inevitable, I'm giving much lower priority to it.
I agree with a fair amount of what you wrote in this post, but I don't see much of an argument against slowing AI capability advances.