Thank you for your response! I will contact OWID about this as well, that seems like a great idea!
On your sixth point: I am sorry for not explaining it well initially; my concern would be something like this:
A government opens a forecasting question on whether it will achieve its emissions target for 2030 (or a target for anything else).
Forecasters in aggregate predict that there is only a 5% chance of success.
This is seen as unacceptably low by policy-makers, and new policy is announced and implemented.
Forecasters adjust and now think there is a 60% chance of success.
This happens several times.
Smart forecasters now understand that low aggregate forecasts will result in new policy initiatives, so a good strategy would be to consistently predict higher chances of success than their true belief under current policy.
I think this is roughly similar to the concern you expressed here under "Causality might diverge from conditionality".
And of course I also doubt there are currently any governments responding enough to a prediction market / forecasting tournament for this to become a problem, but I am hoping that in future we might see a lot more government interest.
A lot of post-AGI predictions are more like 1920s predictions of flying cars (technically feasible, maximally desirable absent other constraints, the current system but better) than predictions of EasyJet: crammed low-cost airlines (physical constraints imposing economic constraints, shaped by iterative regulation, different from the current system).
Current takeaways from the 2024 US election for the forecasting community.
First section in Forecasting newsletter: US elections, posting here because it has some overlap with EA.
1. Polymarket beat legacy institutions at processing information, in real time and in general. It was just much faster at calling states, and more confident earlier on the correct outcome.
2. The OG prediction markets community, the one that has been betting on politics and growing its bankroll since PredictIt, was on the wrong side of 50%: 1, 2, 3, 4, 5. It was the market's democratic, open-to-all nature, which admitted the Frenchman who was convinced that mainstream polls were pretty tortured and bet ~$45M, that moved Polymarket to the right side of 50/50.
3. Polls seem like a garbage-in, garbage-out situation these days. How do you get a representative sample? The answer may be that you don't.
4. Polymarket will live. They were useful to the Trump campaign, which has a much warmer perspective on crypto. The federal government isn't going to prosecute them, nor bettors. Regulatory agencies, like the CFTC and the SEC, which have taken such a prominent role in recent editions of this newsletter, don't really matter now, as they will be aligned with financial innovation rather than opposed to it.
5. NYT/Siena really fucked up with their last poll and the coverage of it. So did Ann Selzer. Some prediction market bettors might have thought that you could do the bounded-distrust calculation, but in hindsight it turns out that you can't. Looking back, to the extent you trust these institutions, they can ratchet up their deceptiveness (misleading headlines, incomplete stories, quotes taken out of context, not reporting on important stories, etc.) for clicks and hopium, to shape the information landscape for a managerial class that... will no longer be in power in America.
6. Elon Musk and Peter Thiel look like geniuses. In contrast Dustin Moskovitz couldn't get SB 1047 passed despite being the s
Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying.
A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.
Predict your year in 2025: a website for tracking your forecasts
2024 is over. Did your life this year align with your expectations? What came out of nowhere and threw off your predictions? Did your actions align with your intentions? What fresh goals are you planning?
We've built predict your year in 2025, a space for you to write down your predictions for the year. At the end of your year, you can return, resolve your predictions as YES, NO or AMBIGUOUS, and reflect.
We've written some starter questions to make it super easy to get started predicting your year. You can tweak these and write your own - those will likely be the most important questions for you.
You can use this tool to predict your personal life in 2025 - your goals, relationships, work, health, and adventures. If you like, you can share your predictions with friends - for fun, for better predictions, and for motivation to achieve your goals this year!
You can also use this tool to predict questions relevant to your team or organisation in the coming year - your team strategy, performance, big financial questions, and potentially disruptive black swans. You can share your predictions with your team and let everyone contribute, to build common knowledge about expectations and pool your insights.
If you use Slack, you can also share your page of predictions in a Slack channel (e.g. #2025-predictions or #strategy), so everyone can easily discuss in threads and return to it throughout the year.
I hope you have a good time thinking about your coming year, and that it sparks some great conversations with friends and teammates.
Happy new year!
‘Five Years After AGI’ Focus Week happening over at Metaculus.
Inspired in part by the EA Forum’s recent debate week, Metaculus is running a “focus week” this week, aimed at trying to make intellectual progress on the issue of “What will the world look like five years after AGI (assuming that humans are not extinct)[1]?”
Leaders of AGI companies, while vocal about some things they anticipate in a post-AGI world (for example, bullishness in AGI making scientific advances), seem deliberately vague about other aspects. For example, power (will AGI companies have a lot of it? all of it?), whether some of the scientific advances might backfire (e.g., a vulnerable world scenario or a race-to-the-bottom digital minds takeoff), and how exactly AGI will be used for “the benefit of all.”
Forecasting questions for the week range from “Percentage living in poverty?” to “Nuclear deterrence undermined?” to “‘Long reflection’ underway?”
Those interested: head over here. You can participate by:
* Forecasting
* Commenting
* Comments are especially valuable on long-term questions, because the forecasting community has less of a track record at these time scales.[2][3]
* Writing questions
* There may well be some gaps in the admin-created question set.[4] We welcome question contributions from users.
The focus week will likely be followed by an essay contest, since a large part of the value in this initiative, we believe, lies in generating concrete stories for how the future might play out (and for what the inflection points might be). More details to come.[5]
1. ^
This is not to say that we firmly believe extinction won’t happen. I personally put p(doom) at around 60%. At the same time, however, as I have previously written, I believe that more important trajectory changes lie ahead if humanity does manage to avoid extinction, and that it is worth planning for these things now.
2. ^
Moreover, I personally take Nuño Sempere’s “Hurdles of using f
How should AI alignment and autonomy preservation intersect in practice?
We know that AI alignment research has made significant progress in embedding internal constraints that prevent models from manipulating, deceiving, or coercing users (to the extent that current models don't do these things). However, internal alignment mechanisms alone don't necessarily give users meaningful control over an AI's influence on their decision-making. That is a mechanistic problem in its own right, but…
This raises a question: Should future AI systems be designed to not only align with human values but also expose their influence in ways that allow users to actively contest and reshape AI-driven inferences?
For example:
* If an AI model generates an inference about a user (e.g., “this person prefers risk-averse financial decisions”), should users be able to see, override, or refine that inference?
* If an AI assistant subtly nudges users toward certain decisions, should it disclose those nudges in a way that preserves user autonomy?
* Could mechanisms like adaptive user interfaces (allowing users to adjust how AI explains itself) or AI-generated critiques of its own outputs serve as tools for reinforcing autonomy rather than eroding it?
I’m exploring a concept I call Autonomy by Design, a control-layer approach that builds on alignment research but adds external, user-facing mechanisms to make AI’s reasoning and influence more contestable.
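One way to picture such a control layer is a user-facing ledger of the inferences a system holds about a user, each of which can be viewed, overridden, or contested. This is a minimal sketch under my own assumptions, not an existing API; every class and method name here is hypothetical.

```python
# Hypothetical sketch of one "Autonomy by Design" mechanism: a user-visible
# registry of model inferences that the user can inspect and override.
# All names are illustrative, not part of any real library.

from dataclasses import dataclass, field

@dataclass
class Inference:
    subject: str            # what the inference is about
    claim: str              # the inferred statement
    source: str             # what evidence produced it
    status: str = "active"  # "active", "overridden", or "contested"
    user_note: str = ""     # the user's correction or objection

@dataclass
class InferenceLedger:
    inferences: dict = field(default_factory=dict)

    def record(self, key: str, inf: Inference) -> None:
        self.inferences[key] = inf

    def view(self) -> list:
        """Expose every still-active inference to the user."""
        return [i for i in self.inferences.values() if i.status == "active"]

    def override(self, key: str, correction: str) -> None:
        """User replaces the model's inference with their own statement."""
        inf = self.inferences[key]
        inf.status = "overridden"
        inf.user_note = correction

# The first bullet above, as a usage example:
ledger = InferenceLedger()
ledger.record("risk", Inference("financial decisions",
                                "prefers risk-averse options",
                                "past chat history"))
ledger.override("risk", "I take risks when the upside is large.")
```

The interesting design questions sit outside this sketch: how overridden inferences feed back into the model's behaviour, and how nudges (the second bullet) get surfaced at all.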
I'd love to hear from interpretability experts and UX designers: Where do you see the biggest challenges in implementing user-facing autonomy safeguards? Are there existing methodologies that could be adapted for this purpose?
Thank you in advance.
Feel free to shatter this if you must XD.
FYI rolling applications are back on for the Biosecurity Forecasting Group! We have started the pilot and are very excited about our first cohort! Don't want to apply but have ideas for questions? Submit them here (anyone can submit!).
I'm interested in chatting with any civil servants, ideally in the UK, who are keen on improving decision-making in their teams/areas, potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!