
2022 has almost wrapped up — do you have EA-relevant predictions you want to register for 2023? List some in this thread! 

You’re encouraged to update them if others give you feedback, but we’ll save a copy of the thread on January 6, after which the predictions will be “registered.” 

Note that there's also a forecasting & estimation subforum now — consider joining or exploring it! 

Suggested format

Prediction - chances (optional elaboration)

Examples (with made-up numbers!): 

  • Will WHO declare a new Global Health Emergency in 2023? Yes. 60%
  • WHO declares a new Global Health Emergency in 2023 - 60% (I’m not very resilient on this — if I thought about it/did more research for another hour, I could see myself moving to 10-80%)

Additional notes

These can be low-effort! Here are some examples: a bunch of predictions from 2021 on Astral Codex Ten

Once someone has registered a prediction, feel free to reply to their comment and register your own prediction for that statement or question.

You can also suggest topics for people to predict on, even if you yourself don’t want to register a prediction. 

Other opportunities to forecast what will happen in 2023

  • Astral Codex Ten (ACX) is running a prediction contest, with 50 questions about the state of the world at the end of 2023 (you don’t have to predict on all the questions). There will be at least four $500 prizes. (Enter your predictions by 10 January or 1 February, depending on how you want to participate.)
  • You can also forecast on Metaculus (question categories here), Manifold Markets (here are the questions tagged “effective altruism”), and many other platforms. If some of the predictions you’re listing in this thread correspond to questions on other platforms, you might be able to embed the question to display the current average prediction.

Questions to consider

I think the questions from the ACX tournament are a great place to start (here they are on Manifold). Here are some of them (each about whether these things will be the case by January 1, 2024): 

  • Will Vladimir Putin be President of Russia?
  • Will a nuclear weapon be used in war (i.e. not a test or accident) and kill at least 10 people?
  • Will any new country join NATO?[1]
  • Will OpenAI release GPT-4?[2]
  • Will COVID kill at least 50% as many people in 2023 as it did in 2022?[3]
  • Will a cultured meat product be available in at least one US store or restaurant for less than $30?
  • Will a successful deepfake attempt causing real damage make the front page of a major news source?[4]
  • Will AI win a programming competition?[5]

And here are some other types of questions you might consider: 

Image made with DALL-E's help. 
  1. ^

    Sweden and Finland completing the accession process would count as new countries.

  2. ^

    This resolves as positive if OpenAI publishes a paper or webpage implicitly declaring GPT-4 “released” or “complete”, showcasing some examples of what it can do, and offering some form of use in some reasonable timescale to some outside parties (researchers, corporate partners, people who want to play around with it, etc). A product is “GPT-4” if it is either named GPT-4, or is a clear successor to GPT-3 to a degree similar to how GPT-3 was a successor to GPT-2 (and not branded as a newer version of GPT-3, e.g. ChatGPT3).

  3. ^

    According to https://ourworldindata.org/covid-deaths

  4. ^

    A deepfake is defined as any sophisticated AI-generated image, video, or audio meant to mislead. For this question to resolve positively, it must actually harm someone, not just exist. Valid forms of harm include but are not limited to costing someone money, or making some specific name-able person genuinely upset (not just “for all we know, people could have seen this and been upset by it”). The harm must come directly from the victim believing the deepfake, so somebody seeing the deepfake and being upset because the existence of deepfakes makes them sad does not count.

  5. ^

    This will resolve positively if a major news source reports that an AI entered a programming competition with at least two other good participants, and won the top prize. A good participant will be defined as someone who could be expected to perform above the level of an average big tech company employee; if there are at least twenty-five participants not specifically selected against being skilled, this will be considered true by default. The competition could be sponsored by the AI company as long as it meets the other criteria and is generally considered fair.

Comments (13)



This is really cool! Thank you for sharing. I was slightly surprised by how low these are:

  1. Ukraine-Russia war is over by EOY 2023 — 20%
  2. Putin deposed by EOY 2023 — 5%
  3. Putin leaving power by any means for at least 30 consecutive days (with start date in 2023) — 10%
    1. I think I'm also surprised by the difference between 2 and 3. Unless this is driven primarily by the possibility of a month-long illness (one that doesn't weaken the regime)? (Maybe it also takes a while to depose someone in some cases? So e.g. he might go on a couple-month-long "holiday" while they figure things out?)

And these are really interesting: 

I think I'm also surprised by the difference between 2 and 3

I view deposing as involving something internal, quick, and forceful. I think if Putin retires willingly / goes quietly (even if under duress), this would count as him leaving power without being deposed. Likewise, if he died but not because of assassination, that wouldn't count as being deposed.

This is niche, but I've been wondering how well I can forecast the success of my TikToks. Vertical blue lines are 80% CIs, the horizontal blue line is the median estimate; a green dot means the actual view count was within 10x of the median, a red x otherwise.

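For anyone curious, here's a minimal sketch (with entirely made-up numbers) of how a plot like that could be put together in matplotlib, assuming per-video 10th/50th/90th percentile predictions stand in for the 80% CIs and medians:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: predicted 10th/50th/90th percentile views per video,
# and the views each video actually got. None of these numbers are real.
p10    = np.array([1e2, 5e2, 1e3, 2e3])
median = np.array([1e3, 5e3, 1e4, 2e4])
p90    = np.array([1e4, 5e4, 1e5, 2e5])
actual = np.array([3e3, 4e4, 5e2, 9e5])

x = np.arange(len(median))
# "Within 10x of the median" on a log scale.
within_10x = np.abs(np.log10(actual) - np.log10(median)) <= 1

fig, ax = plt.subplots()
# Vertical lines for the 80% credible intervals, horizontal ticks for the medians.
ax.vlines(x, p10, p90, color="tab:blue")
ax.scatter(x, median, marker="_", s=200, color="tab:blue")
# Green dots where the actual landed within 10x of the median, red x's otherwise.
ax.scatter(x[within_10x], actual[within_10x], color="green", marker="o")
ax.scatter(x[~within_10x], actual[~within_10x], color="red", marker="x")
ax.set_yscale("log")
ax.set_xlabel("video")
ax.set_ylabel("views")
plt.show()
```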

For the next year:

  1. 90% I have at least one video with 100k+ views
  2. 40% I have at least one video with 1M+ views
  3. 20% I make a serious effort to be popular again for at least a month
  4. 85% I have at least one video with 1M+ views conditional on me making a serious effort for at least a month

Thanks for sharing & good luck with the TikToks! I notice I'm curious about e.g. how likely 100M views on a video are, given 1M views (and similar questions).

Disclaimer: Ben is my manager.

Thanks! I previously found that my videos roughly fit a power-law distribution, and the one academic paper I could find on the subject also found that views were Zipf-distributed.

Since power-law distributions are scale-invariant, I think it's relatively easy to answer your question: the chance of a video with 1M+ views going on to reach 100M+ views is the same as the chance of a video with 10k+ views going on to reach 1M+ views, etc. In that original post I estimated a specific power-law fit for my personal views; I haven't looked at that recently though, and expect the coefficients have changed.
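A minimal sketch of that scale-invariance point, assuming a Pareto (power-law) tail with a made-up exponent standing in for the actual fitted coefficients:

```python
# Hypothetical tail exponent for a Pareto (power-law) view-count model;
# the real fitted coefficients from the original post are not reproduced here.
alpha = 1.2

def tail_prob_ratio(multiple: float, alpha: float) -> float:
    """P(views > multiple * x | views > x) for a Pareto tail with exponent alpha.

    Scale invariance means this ratio does not depend on the threshold x.
    """
    return multiple ** (-alpha)

# Chance of a 100x jump in views, whatever the starting threshold:
# P(100M+ | 1M+) == P(1M+ | 10k+) == 100 ** (-alpha)
print(tail_prob_ratio(100, alpha))  # ~0.004 with alpha = 1.2
```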

I'll start off with a couple of predictions — all of these are very quick attempts (so not very resilient): 

  1. Vladimir Putin will be president of Russia — 85% (I made a rough base rate from a quick look at comparable regimes in this data, then also wanted to factor in the current chaos as an input & looked at this Metaculus question) (as mentioned above, I don't think this is resilient!)[1]
  2. Will a cultured meat product be available in at least one US store or restaurant for less than $30? — 50% 
  3. WHO declares a new Global Health Emergency in 2023 — 20% (extremely rough; I had estimated a rate (probably poorly) for pandemics, multiplied it by a rough multiple, and don't have time to do any checks)
  1. ^

    In case anyone's interested, base rates seemed to give a 0.94 chance of him remaining president for another year, 0.89 for another two, 0.85 for three, 0.83 for four, and 0.81 for five. And here's the Metaculus question:

If Global Health Emergency is meant to mean a public health emergency of international concern (PHEIC), then the base rate is roughly 45% = 7 / 15.5: PHEICs have been declared 7 times, while the relevant regulations came into force in mid-2007 (about 15.5 years ago).
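For reference, a quick sketch of that arithmetic; the Poisson conversion at the end is an extra assumption of mine, not something claimed in the comment above:

```python
import math

# PHEIC declarations since the International Health Regulations (2005)
# came into force in mid-2007 (figures from the comment above).
declarations = 7
years = 15.5

rate_per_year = declarations / years          # ~0.45 declarations per year
print(round(rate_per_year, 2))                # 0.45

# Optional refinement (an assumption, not part of the original comment):
# if declarations followed a Poisson process at this rate, the chance of
# at least one new declaration in a given year would be somewhat lower.
p_at_least_one = 1 - math.exp(-rate_per_year)
print(round(p_at_least_one, 2))               # ~0.36
```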

Great, thanks! Really appreciate this; I was really off — I think I had quickly taken my number/base rate for pandemics, and referenced a list of PHEICs I thought was for the 21st century without checking or noticing that this only starts in 2007. I might just go for this base rate, then. 

Hey, thanks for starting this!

Misha beat me to it RE: PHEIC base rates, but I'd also be interested in the 50% figure for cultured meat, given FDA approval last month for UPSIDE Foods (formerly Memphis Meats). How much of the 50% figure is driven by pending USDA approval, vs. time to market, vs. the $30 figure?

I was thinking very loosely about both, without doing any proper homework. I had the sense that USDA approval would take a while (for a sense of how loosely I approached this: I didn't remember which of FDA or USDA had already approved this), and was under the vague impression (from a conversation?) that this wouldn't go straight to stores or chains, but would instead go to fancy restaurants first (just now confirmed that the restaurant listed here is very high-end). But then again, I vaguely expected ~full-enough approval in 2023, and I felt like "available for $30" could happen in lots of ways (e.g. there's some "tasting" option that's tiny and therefore decently cheap), etc. So I went with 50% without thinking much longer about it.

I went and checked just now, however, and am seeing this article (Nov 16), which notes 

Upside has previously said “end of 2022” as a launch date for its cultivated chicken. The company must still secure approvals from the United States Department of Agriculture (USDA) before it can actually sell to consumers. In a statement Upside promised more details on timing and launch to follow.

The linked article is from April, and although companies can be over-optimistic and over-promise, I guess that means this timeline is vaguely feasible, which pushes me towards thinking that I should be more optimistic.

Also, I just checked Metaculus, and it appears that in April, forecasters thought there was a roughly 19% chance that cultured meat might be available for sale in the US by the end of 2022 (the community prediction was 37%), which seems wild, but again makes me think that I was more pessimistic than necessary before: 

Oh, I'm looking through other Metaculus questions, and here's another relevant one: Will [the US] approve a cultivated meat product for consumption before April 2023? (Community prediction: 52%)

The comments are quite interesting, and imply that Upside Foods doesn't have enough meat to "sell soon". They also note:

Meanwhile, FDA says it approved nothing:

"The voluntary pre-market consultation is not an approval process. Instead, it means that after our careful evaluation of the data and information shared by the firm, we have no further questions at this time about the firm’s safety conclusion."

According to the Formal Agreement Between FDA and USDA Regarding Oversight of Human Food Produced Using Animal Cell Technology Derived from Cell Lines of USDA-amenable Species, the next 2 stages in the FDA regulatory phase are:

"Oversee initial cell collection and the development and maintenance of qualified cell banks, including by issuing regulations or guidance and conducting inspections, as appropriate."

&

"Oversee proliferation and differentiation of cells through the time of harvest, including by issuing regulations or guidance and conducting inspections, as appropriate."

After that, it looks to me like there's another phase of regulatory steps that's centered more in the US Department of Agriculture.

The bear market in stocks will continue, and the S&P 500 will decline an additional 30-45%. VC-backed and unprofitable early stage companies will continue to remain at the center of the storm as access to capital further deteriorates. The crypto bear market also accelerates. Open Philanthropy has to cut its spending target again and GiveWell again falls short of its goals. The EA community realizes how linked it was to frothy financial asset valuations.

EAs appeal to Warren Buffett and Mark Zuckerberg to consider filling some of the gaps, and one or both of them commits to doing so.
