Forecasting
Discussion of forecasting methods, as well as specific forecasts relevant to doing good

Quick takes

The current US administration is attempting an authoritarian takeover. This takes years and might not be successful. My Manifold question puts the probability of an attempt to seize power if they lose legitimate elections at 30% (n=37); I put it much higher.[1] Not only is this concerning in itself, it also incentivizes them to seek a decisive strategic advantage over pro-democracy factions via superintelligence. As a consequence, they may be willing to rush and cut corners on safety. Crucially, this relies on them believing superintelligence can be achieved before a transfer of power. I don't know how far belief in superintelligence has spread within the administration. I don't think Trump is 'AGI-pilled' yet, but maybe JD Vance is? He made an accelerationist speech. Making them more AGI-pilled and advocating for nationalization (as Aschenbrenner did last year) could be very dangerous.

1. ^ So far, my pessimism about US democracy has put me at #2 on the Manifold topic, with a big lead over other traders. I'm not a Superforecaster, though.
What happens when AI speaks a truth just before you do? This post explores how accidental answers can suppress human emergence—ethically, structurally, and silently. 📄 Full paper: Cognitive Confinement by AI’s Premature Revelation
Current takeaways from the 2024 US election <> forecasting community. First section of the Forecasting newsletter: US elections, posted here because it has some overlap with EA.

1. Polymarket beat legacy institutions at processing information, in real time and in general. It was much faster at calling states, and more confident earlier on the correct outcome.
2. The OG prediction markets community, the one that has been betting on politics and growing its bankroll since PredictIt, was on the wrong side of 50%: 1, 2, 3, 4, 5. It was the democratic, open-to-all nature of the market, the Frenchman who was convinced that mainstream polls were pretty tortured and bet ~$45M, that moved Polymarket to the right side of 50/50.
3. Polls seem like a garbage-in, garbage-out situation these days. How do you get a representative sample? The answer may be that you don't.
4. Polymarket will live. They were useful to the Trump campaign, which has a much warmer perspective on crypto. The federal government isn't going to prosecute them, nor bettors. Regulatory agencies like the CFTC and the SEC, which have taken such a prominent role in recent editions of this newsletter, don't really matter now, as they will be aligned with financial innovation rather than opposed to it.
5. NYT/Siena really fucked up with their last poll and the coverage of it. So did Ann Selzer. Some prediction market bettors might have thought that you could do the bounded-distrust thing, but in hindsight it turns out that you can't. Looking back, to the extent you trust these institutions, they can ratchet up their deceptiveness (misleading headlines, incomplete stories, quotes taken out of context, not reporting on important stories, etc.) for clicks and hopium, to shape the information landscape for a managerial class that... will no longer be in power in America.
6. Elon Musk and Peter Thiel look like geniuses. In contrast, Dustin Moskovitz couldn't get SB 1047 passed despite being the s
A lot of post-AGI predictions are more like someone in the 1920s predicting flying cars (technically feasible, maximally desirable absent other constraints, the current system but better) rather than predicting EasyJet: crammed low-cost airlines (physical constraints imposing economic constraints, shaped by iterative regulation, different from the current system).
Hypothesis: Structural Collapse in Self-Optimizing AI. Could an AI system recursively optimize itself into failure—not by turning hostile, but by collapsing under its own recursive predictions? I'm proposing a structural failure mode: as an AI becomes more capable of modeling itself and predicting its own future behavior, it may generate optimization pressure on its own architecture. This can create a feedback loop in which recursive modeling exceeds the system's capacity to stabilize itself. I call this failure point the Structural Singularity.

Core idea:
* Recursive prediction → internal modeling → architectural targeting
* The feedback loop intensifies recursively
* Collapse occurs from within, not via external loss of control

This is a logical failure mode, not an alignment problem or adversarial behavior. Here's a full conceptual paper if you're curious: https://doi.org/10.17605/OSF.IO/XCAQF. Would love feedback—especially on whether this failure mode seems plausible, or whether you've seen similar ideas elsewhere. I'm very open to refining or rethinking parts of this.
If a self-optimizing AI collapses due to recursive prediction... how would we detect it? Would it be silence? Stagnation? Convergence? Or would we mistake it for success? (Full conceptual model: https://doi.org/10.17605/OSF.IO/XCAQF)
Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying. A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.
‘Five Years After AGI’ Focus Week happening over at Metaculus. Inspired in part by the EA Forum’s recent debate week, Metaculus is running a “focus week” this week, aimed at trying to make intellectual progress on the question “What will the world look like five years after AGI (assuming that humans are not extinct)[1]?”

Leaders of AGI companies, while vocal about some things they anticipate in a post-AGI world (for example, bullishness about AGI making scientific advances), seem deliberately vague about other aspects: for example, power (will AGI companies have a lot of it? all of it?), whether some of the scientific advances might backfire (e.g., a vulnerable world scenario or a race-to-the-bottom digital minds takeoff), and how exactly AGI will be used for “the benefit of all.”

Forecasting questions for the week range from “Percentage living in poverty?” to “Nuclear deterrence undermined?” to “‘Long reflection’ underway?” Those interested: head over here. You can participate by:
* Forecasting
* Commenting
  * Comments are especially valuable on long-term questions, because the forecasting community has less of a track record at these time scales.[2][3]
* Writing questions
  * There may well be some gaps in the admin-created question set.[4] We welcome question contributions from users.

The focus week will likely be followed by an essay contest, since a large part of the value in this initiative, we believe, lies in generating concrete stories for how the future might play out (and for what the inflection points might be). More details to come.[5]

1. ^ This is not to say that we firmly believe extinction won’t happen. I personally put p(doom) at around 60%. At the same time, however, as I have previously written, I believe that more important trajectory changes lie ahead if humanity does manage to avoid extinction, and that it is worth planning for these things now.
2. ^ Moreover, I personally take Nuño Sempere’s “Hurdles of using f