Metaculus is an online platform where users make and comment on forecasts; it has recently been particularly notable for its forecasting of various aspects of the pandemic, on a dedicated subdomain. As well as displaying summary statistics of the community prediction, Metaculus also uses a custom algorithm to produce an aggregated "Metaculus prediction". More information on forecasting can be found in this interview with Philip Tetlock on the 80,000 Hours podcast.
Questions on Metaculus are submitted by users, and a thread exists on the platform where people can suggest questions they'd like to see but do not have the time/skill/inclination to construct themselves. Question construction is non-trivial, not least because for forecasting to work, clear criteria need to be set for what counts as positive resolution. A useful intuition pump here is "if two people made a bet on the outcome of this question, would everyone agree who had won?"
Although there is already a significant overlap between the EA community and the Metaculus userbase, I think it is likely that there exist many forecasting questions which would be very useful from an EA perspective, but that have not yet been written. As such, I've written this question as both a request and an offer.
The request:
Have a think about whether there are any forecasts that could have a large impact on decision-making within the EA community.
The offer:
If you do think of one, post it below, and I'll write it up for you and submit it to the site. The closer it is to "fully formed", the more quickly this is likely to happen, but please don't feel the need to spend ages choosing resolution criteria; I'm happy to help with this. I intend to choose questions based on some combination of the number of upvotes a suggestion has and how easy the question is to operationalise.
Examples of my question-writing on Metaculus are here, and I also recently became a moderator on the platform.
Some examples of EA-adjacent questions already on the platform:
How many reviews will Toby Ord's book The Precipice have on Amazon on January 1st 2021?
If you're interested in having someone make a forecast about a question that's more personal to you, and/or something that you wouldn't expect the Metaculus community as a whole to have the right combination of interest in and knowledge of, I'd recommend checking out this offer from amandango.
Thanks for doing this, great idea! I think Metaculus could provide some valuable insight into how society's/EA's/philosophy's values might drift or converge over the coming decades.
For instance, I'm curious about where population ethics will be in 10-25 years. Something like, 'In 2030 will the consensus within effective altruism be that "Total utilitarianism is closer to describing our best moral theories than average utilitarianism and person affecting views"?'
Having your insight on how to operationalize this would be useful, since I'm not very happy with my ideas: 1. Polling FHI and GW 2. A future PhilPapers Survey, if there is one 3. Some sort of citation count / number of papers on total/average/person-affecting views. It would probably also be useful to get the opinion of a population ethicist.
Stepping back from that specific question, I think Metaculus could play a sort of sanity-checking, outside-view role for EA. Questions like 'Will EA see AI risk (climate change/bio-risk/etc.) as less pressing in 2030 than it does now?', or 'Will EA in 2030 believe that EA should've invested more and donated less over the 2020s?'
To lay out my tentative position a bit more:
I think forecasts about what some actor (a person, organisation, community, etc.) will overall believe in future about X can add value compared to just having a large set of forecasts about specific events that are relevant to X. This is because the former type of forecast can also account for:
On the other hand, forecasts about what som