
Metaculus is an online platform where users make and comment on forecasts; it has recently been particularly notable for forecasting various aspects of the pandemic on a dedicated subdomain. As well as displaying summary statistics of the community prediction, Metaculus also uses a custom algorithm to produce an aggregated "Metaculus prediction". More information on forecasting can be found in this interview with Philip Tetlock on the 80,000 Hours podcast.
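The post doesn't go into how that aggregation works, but as a rough illustration of the general idea of combining many individual forecasts into one number, here is a minimal sketch of a common baseline aggregator (a weighted geometric mean of odds). The function name, weights, and example numbers are all illustrative assumptions, not Metaculus's actual method.

```python
import numpy as np

def aggregate_binary_forecasts(probs, weights=None):
    """Combine individual probability forecasts for a binary question using a
    weighted geometric mean of odds -- a common baseline aggregator, and not
    Metaculus's actual algorithm (which is more involved)."""
    probs = np.clip(np.asarray(probs, dtype=float), 0.01, 0.99)  # avoid infinite odds
    if weights is None:
        weights = np.ones_like(probs)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()

    log_odds = np.log(probs / (1 - probs))
    combined = np.sum(weights * log_odds)
    return 1 / (1 + np.exp(-combined))

# Example: five forecasters, with more recent forecasts weighted more heavily.
community = [0.30, 0.45, 0.50, 0.60, 0.55]
recency_weights = [1, 1, 2, 3, 3]
print(round(aggregate_binary_forecasts(community, recency_weights), 3))
```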

Questions on Metaculus are submitted by users, and a thread exists on the platform where people can suggest questions they'd like to see but do not have the time/skill/inclination to construct themselves. Question construction is non-trivial, not least because for forecasting to work, clear criteria need to be set for what counts as positive resolution. A useful intuition pump here is "if two people made a bet on the outcome of this question, would everyone agree who had won?"

Although there is already a significant overlap between the EA community and the Metaculus userbase, I think it is likely that there exist many forecasting questions which would be very useful from an EA perspective, but that have not yet been written. As such, I've written this question as both a request and an offer.

The request:

Have a think about whether there are any forecasts with the potential to have a large impact on decision-making within the EA community.

The offer:

If you do think of one, post it below and I'll write it up for you and submit it to the site. The closer it is to "fully formed", the more quickly this is likely to happen, but please don't feel the need to spend ages choosing resolution criteria; I'm happy to help with this. I intend to choose questions based on some combination of the number of upvotes the suggestion has and how easy the question is to operationalise.

Examples of my question-writing on Metaculus are here, and I also recently became a moderator on the platform.

Some examples of EA-adjacent questions already on the platform:

How much will GiveWell guess it will cost to get an outcome as good as saving a life, at the end of 2021?

On December 1st, 2023, how many companies worldwide will pledge to uphold GAP standards for broiler chickens raised for meat?

How many reviews will Toby Ord's book The Precipice have on Amazon on January 1st 2021?

How many infections of SARS-CoV-2 (novel coronavirus) will be estimated to have occurred worldwide, before 2021?

If you're interested in having someone make a forecast about a question that's more personal to you, and/or something that you wouldn't expect the Metaculus community as a whole to have the right combination of interest in and knowledge of, I'd recommend checking out this offer from amandango.

8 Answers

Thanks for doing this, great idea! I think Metaculus could provide some valuable insight into how society's/EA's/philosophy's values might drift or converge over the coming decades.

For instance, I'm curious about where population ethics will be in 10-25 years. Something like, 'In 2030 will the consensus within effective altruism be that "Total utilitarianism is closer to describing our best moral theories than average utilitarianism and person affecting views"?'

Having your insight on how to operationalize this would be useful, since I'm not very happy with my ideas:

  1. Polling FHI and GW
  2. A future PhilPapers Survey, if there is one
  3. Some sort of citation count / number of papers on total/average/person-affecting utilitarianism

It would probably also be useful to get the opinion of a population ethicist.

Stepping back from that specific question, I think Metaculus could play a sort of sanity-checking, outside-view role for EA. Questions like 'Will EA see AI risk (climate change/bio-risk/etc.) as less pressing in 2030 than they do now?', or 'Will EA in 2030 believe that EA should've invested more and donated less over the 2020s?'

I'd also be interested in forecasts on these topics.

I think Metaculus could play a sort of sanity-checking, outside-view role for EA. Questions like 'Will EA see AI risk (climate change/bio-risk/etc.) as less pressing in 2030 than they do now?', or 'Will EA in 2030 believe that EA should've invested more and donated less over the 2020s?'

It seems to me that there'd be a risk of self-fulfilling prophecies. 

That is, we'd hope that what'd happen is: 

  1. a bunch of forecasters predict what the EA community would end up believing after a great deal of thought, debate, analysis, etc.
  2. then we can update ourselves closer to believing that thing already, which could help us get to better decisions faster.

...But what might instead happen is: 

  1. a relatively small group of forecasters makes relatively unfounded forecasts
  2. then the EA community - which is relatively small, unusually connected to Metaculus, and unusually interested in forecasts - updates overly strongly on those forecasts, thus believing something that they wouldn't otherwise have believed and don't have good reasons to believe

(Perhaps this is like a time-travelling information cascade?)

I'm not saying the latter scenario is more likely than the former, nor that this means we shouldn't solicit these forecasts. But the latter scenario seems likely enough to perhaps be an argument against soliciting these forecasts, and to at least be worth warning readers about clearly and repeatedly if these forecasts are indeed solicited.

Also, this might be especially bad if EAs start noticing that community beliefs are indeed moving towards the forecasted future beliefs, and don't account sufficiently well for the possibility that this is just a self-fulfilling prophecy, and thus increase the weight they assign to these forecasts. (There could perhaps be a feedback loop.)

I imagine there's always some possibility that forecasts will influence reality in a way that makes the forecasts more or less likely to come true than they would've been otherwise. But this seems more-than-usually-likely when forecasting EA community beliefs (compared to e.g. forecasting geopolitical events).
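To make the worry concrete, here is a toy simulation of the feedback loop described above: forecasters partly anchor on the current community belief, and the community partly updates toward the forecast, so an initial weakly-grounded forecast can drag the community's belief toward it even though no new evidence arrives. The update weights are made-up assumptions purely for illustration.

```python
def simulate_feedback(initial_belief, initial_forecast,
                      anchor_on_community=0.7, update_on_forecast=0.4,
                      rounds=10):
    """Toy model: forecasters anchor on the community belief, and the community
    then updates part of the way toward the resulting forecast. Returns the
    community-belief trajectory. All parameters are illustrative assumptions."""
    belief, forecast = initial_belief, initial_forecast
    trajectory = [belief]
    for _ in range(rounds):
        # Forecasters re-forecast, anchored on the current community belief.
        forecast = anchor_on_community * belief + (1 - anchor_on_community) * forecast
        # The community updates toward that forecast without any new evidence.
        belief = (1 - update_on_forecast) * belief + update_on_forecast * forecast
        trajectory.append(round(belief, 3))
    return trajectory

# The community starts at 0.5; an initial, weakly-grounded forecast of 0.8
# pulls the community belief upward round after round.
print(simulate_feedback(initial_belief=0.5, initial_forecast=0.8))
```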

I agree this would be a genuine problem. I think it would be a little less of a problem if the question being forecasted wasn't about the EA community's beliefs, but instead something about the state of AI/climate change/pandemics themselves.

This is really interesting, and potentially worth my abandoning the plan to write some questions on the outcomes of future EA surveys. 
 

The difficulty with "what will people in general think about X" type questions is how to operationalise them, but there's potentially enough danger in doing this for it not to be worth the tradeoff. I'm interested in more thoughts here.

In terms of "how big a deal will X be" questions, there are already several of that form. The Metaculus search function is not amazing, so I'm happy to dig things out if there are areas of particular interest, though several are mentioned elsewhere in this thread.

In terms of "how big a deal will X be" questions, there are already several of that form.

Do you mean questions like "what will the state of AI/climate change/pandemics be"  (as Khorton suggests), or things like "How big a deal will Group A think X is"? I assume the former?

The difficulty with "what will people in general think about X" type questions is how to operationalise them, but there's potentially enough danger in not doing this for it to be worth the tradeoff. 

I'm not sure I know what you mean by this (particularly the part after the comma). 

I assume the former?

Yes.

I'm not sure I know what you mean by this (particularly the part after the comma).


The "not" was in the wrong place; I've fixed it now.

I had briefly got in touch with Rethink about trying to predict survey outcomes, but I'm not going ahead with this for now, as the concerns you raised seem bad even if low-probability. I'm considering, as an alternative, asking about the donation split of EAF in ~5 years, which I think tracks related ideas but seems to have less downside risk of the form you describe.

To lay out my tentative position a bit more:

I think forecasts about what some actor (a person, organisation, community, etc.) will overall believe in future about X can add value compared to just having a large set of forecasts about specific events that are relevant to X. This is because the former type of forecast can also account for: 

  • how the actor will interpret the evidence that those specific events provide regarding X
  • lots of events we might not think to specifically forecast that could be relevant to X

On the other hand, forecasts about what some actor will believe in future about X seem more at risk of causing undesirable feedback loops and distorted beliefs than forecasts about specific events relevant to X do. 

I think forecasting the donation split of the EA Funds[1] would be interesting, and could be useful. This seems to be a forecast of a specific event that's unusually well correlated with an actor's overall beliefs. I think that means it would have more of both the benefits and the risks mentioned above than the typical forecast of a specific event would, but less than a forecast that's directly about an actor's overall belief would.

This also makes me think that another thing potentially worth considering is predicting the beliefs of an actor which: 

  • is a subset of the EA community[2]
  • seems to have a good process of forming beliefs
  • seems likely to avoid updating problematically based on the forecast

Some spitballed examples, to illustrate the basic idea: Paul Christiano, Toby Ord, a survey of CEA staff, a survey of Open Phil staff.

This would still pose a risk of causing the EA community to update too strongly on erroneous forecasts of what this actor will believe. But it seems to at least reduce the risk of self-fulfilling prophecies/feedback loops, which somewhat blunts the effect.

I'm pretty sure this sort of thing has been done before (e.g., sort-of, here). But this is a rationale for doing it that I hadn't thought of before. 

But this is just a list of considerations and options; I don't know how to actually weigh it all up to work out what's best.

[1] I assume you mean EA Funds rather than the EA Forum or the Effective Altruism Foundation - lots of EAFs floating about!

[2] I only give this criterion because of the particular context and goals at hand; there are of course many actors outside the EA community whose beliefs we should attend to.

The best operationalisation here I can see is asking that we are able to attach a few questions of this form to the 2030 EA survey, then asking users to predict what the results will be. If we can get some sort of pre-commitment from whoever runs the survey to include the questions, even better.

One thing to think about (and maybe for people to weigh in on here) is that, as you get further out in time, there's less and less evidence that forecasting performs well. It's worth considering a 2025 date for these sorts of questions too, for that reason.

[anonymous]
Another operationalisation would be to ask to what extent the 80k top career recommendations have changed, e.g. what percentage of the current top recommendations will still be in the top recommendations in 10 years.

alex lawsen
This question is now open: How many of the "priority paths" identified by 80,000 Hours will still be priority paths in 2030?

alex lawsen
I really like this and will work something out to this effect.

alex lawsen
Do you want to have a look at the 2019 EA survey and pick a few things it would be most useful to get predictions on? I'll then write a few up.

jacobpfau
I think the 'Diets of EAs' question could be a decent proxy for the prominence of animal welfare within EA. I think there are similar questions on Metaculus for the general US population: https://www.metaculus.com/questions/?order_by=-activity&search=vegetarian I don't see the ethics question as all that useful, since I think most of population ethics presupposes some form of consequentialism.

alex lawsen
It looks like a different part of the survey asked about cause prioritisation directly, which seems like it could be closer to what you wanted; my current plan (5 questions) for how to use the survey is here.

Somewhat unrelated, but I'll leave this thought here anyway: maybe EA Metaculus users could benefit from posting question drafts as short-form posts on the EA Forum.

alex lawsen
I'm kind of hoping that this thread ends up serving that purpose. There's also a thread on Metaculus where people can post ideas; the difference there is that nobody's promising to write them up, and they aren't necessarily EA ideas, but I thought it was worth mentioning. (I do have some thoughts on the top-level answer here, but don't have time to write them now; will do soon.)

A while ago, Leah Edgerton of Animal Charity Evaluators gave an AMA, and one of the questions I asked was "What are some questions regarding EAA (effective animal advocacy) which are amenable to being forecasted?"

Her answer is in this video. In short:

  • Will corporations stick to their animal welfare commitments?
  • When will specific animal free food technologies become cost-competitive with their traditional animal counterparts?
  • Timelines for cultured meat coming to market?
  • When will technology exist which allows the identification of the sex of a chicken before it hatches? When, if ever, will such a technology be adopted?
  • When, if ever, will the global production and consumption of farmed animals stop growing? When will it stop completely?
  • When will specific countries or states adopt legal protection for animals / farmed animals?
  • When will EAA organizations have a budget of more than $500 million? $1 billion?
  • Questions related to the pandemic.
  • Questions related to the budget of EAA organizations in the immediate future.

Operationalizing these questions, and finding out what the most useful things to forecast are, may involve contacting ACE directly. For example, "corporations" is pretty general, so I imagine ACE has some particular ones in mind.

I post my own questions sometimes, but I have some ideas for questions that I'm not sure how to operationalize:

  • Will there ever be a broadly accepted answer on how to compare possible worlds that have a nonzero probability of containing infinite utility?
  • Are donations to GiveDirectly net positive?
  • How much will the EA movement grow over the next few decades?
  • What is the long-term rate of value drift among EAs?
  • If the Founders Pledge makes a long-term investment fund, how long will it last before shutting down? And why does it shut down? (Or the same question for some other long-term investment fund.)
  • Will a well-diversified long-term investor eventually go bankrupt due to a market crash?
  • What will the funding distribution look like across causes 5/10/20 years from now?
  • What will be the cost per human life saved equivalent according to Animal Charity Evaluators in 2031? (I asked a similar question about GiveWell, but it's not obvious how to determine a cost per human life saved equivalent from ACE's recommendations.)

Lots of good ideas here, and I think I'll be able to help with several; I've sent you a PM.

Some fun, useful questions with shorter time horizons could be stuff like:

  • Will GiveWell add a new Top Charity to its list in 2020 (i.e. a Top Charity they haven't previously recommended)?
  • How much money will the EA Funds grant in 2020? (total or broken down by Fund)
  • How many new charities will Charity Entrepreneurship launch in 2020?
  • How many members will Giving What We Can have at the end of 2020?
  • How many articles in [The Economist/The New York Times/...?] will include the phrase "effective altruism" in 2020?

Stuff on global development and global poverty could also be useful. I don't know if we have data to resolve them, but questions like:

  • What will the global poverty rate be in 2021, as reported by the World Bank?
  • How many malaria deaths will there be in 2021?
  • How many countries will grow their GDP by more than 5% in 2021?

These are great questions. Several similar questions are already up which I've linked below (including one I approved after this post was written). I've also written three new questions based on your ideas, which I'm just waiting for someone else to proofread and will then add to this post.

Will one of GiveWell's 2019 top charities be estimated as the most cost-effective charity in 2031?

How much will GiveWell guess it will cost to get an outcome as good as saving a life, at the end of 2031?

Will the number of people in extreme poverty in 2020 be lower than t

... (read more)

I'm interested in operationalizing two questions raised in this thread. The first seems substantially easier to operationalize than the second, but less interesting.

1. Will the next new constitution for a national government be closer to a presidential or parliamentary system?

2. To what extent would academics think that the work of Gerring, Thacker and Moreno was causally relevant to this choice?

These questions are not directly decision-relevant to me, but I'm generally excited about the idea of adding more quantification/forecasting to EA conversations.

I'm also interested in questions around approval voting in general, and the Center for Election Science in particular.

Some stuff:

  • Conditional on fewer than 5 cities with >=50,000 people having implemented approval voting by Dec 31, 2022, what will the funding for the Center for Election Science be during 2023? Context: according to CES's strategic plan, converting 5 cities with >=50,000 inhabitants is one of their main targets by 2022 (see p. 7). Conditional on them not achieving it, what will their funding look like? This can probably be operationalized with reference to IRS tax reports.
  • How many US cities with more than 50,000 people will have implemented approval voting by [date]?
  • What will CES funding look like in 2021, 2022, etc.?

For EAs who are investing to give, more questions about the market would be great, e.g. my comment here.

Metaculus did briefly experiment with a finance spinoff, but I don't believe it was successful. I can definitely write a couple, and I think they'd get a lot of interest, but I'd be surprised if making investment decisions based on Metaculus was a winning strategy in the long term. I'd be more optimistic, though still cautious, about political betting using Metaculus predictions.


Here are some current questions which seem relevant, you can find more here.

https://www.metaculus.com/questions/2807/will-the-uk-housing-market-crash-before-july... (read more)

Denkenberger🔸
Thanks - very helpful.

In The Precipice, Toby Ord gives estimated chances of various existential risks happening within the next 100 years. It'd be cool if we could get estimates from Metaculus as well, although it may be impossible to implement, as tachyons would only be awarded when the world doesn't blow up.
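One way to see the scoring problem: points can only ever be awarded in the worlds where we survive, so a forecaster's expected score on a "will the world end?" question is maximised by reporting a near-zero probability regardless of their true belief. A minimal sketch of that expected-value calculation, using a log score with illustrative numbers (not Metaculus's actual point formula):

```python
import math

def expected_log_score(reported_p, true_p):
    """Expected log score for reporting probability `reported_p` of catastrophe,
    given a genuine belief `true_p`, when scores are only awarded in the
    no-catastrophe worlds (nobody is around to collect points otherwise)."""
    p_survive = 1 - true_p
    return p_survive * math.log(1 - reported_p)

true_p = 0.10  # the forecaster genuinely believes a 10% chance of catastrophe
for reported in (0.10, 0.05, 0.01):
    print(reported, round(expected_log_score(reported, true_p), 4))
# The expected score keeps improving as the report approaches zero, so honest
# reporting isn't incentivised on questions that only pay out if we survive.
```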

Well, there's the Ragnarök question series, which seems to fit what you're looking for.
