Written by Benjamin Tereick
Edit: several comments here question the value of forecasting as a philanthropic cause — see this comment for a reply.
We are happy to announce that we have added forecasting as an official grantmaking focus area. As of January 2024, the forecasting team comprises two full-time employees: myself and Javier Prieto. In August 2023, I joined Open Phil to lead our forecasting grantmaking and internal processes. Prior to that, I worked on forecasts of existential risk and the long-term future at the Global Priorities Institute. Javier recently joined the forecasting team in a full-time capacity from Luke Muehlhauser’s AI governance team, which was previously responsible for our forecasting grantmaking.
While we are just now launching a dedicated cause area, Open Phil has long endorsed forecasting as an important way of improving the epistemic foundations of our decisions and the decisions of others. We have made several grants to support the forecasting community in the last few years, e.g., to Metaculus, the Forecasting Research Institute, and ARLIS. Moreover, since the launch of Open Phil, grantmakers have often made predictions about core outcomes for grants they approve.
Now with increased staff capacity, the forecasting team wants to build on this work. Our main goal is to help realize the promise of forecasting as a way to improve high-stakes decisions, as outlined in our focus area description. We are excited both about projects aiming to increase the adoption rate of forecasting as a tool by relevant decision-makers, and about projects that provide accurate forecasts on questions that could plausibly influence the choices of these decision-makers. We are interested in such work across both of our portfolios: Global Health and Wellbeing and Global Catastrophic Risks. [1]
We are as yet uncertain about the most promising types of projects in the forecasting focus area, and we will likely fund a variety of different approaches. We will also continue our commitment to forecasting research and to the general support of the forecasting community, as we consider both to be prerequisites for high-impact forecasting. Supported by other Open Phil researchers, we plan to continue exploring the most plausible theories of change for forecasting. I aim to regularly update the forecasting community on the development of our thinking.
Besides grantmaking, the forecasting team is also responsible for Open Phil’s internal forecasting processes, and for managing forecasting services for Open Phil staff. This part of our work will be less public, but we will occasionally publish insights from our own processes, like Javier’s 2022 report on the accuracy of our internal forecasts.
[1] It should be noted that administratively, the forecasting team is part of the Global Catastrophic Risks portfolio, and historically, our forecasting work has had closer links to that part of the organization.
All views are my own rather than those of any organizations/groups that I’m affiliated with. Trying to share my current views relatively bluntly. Note that I am often cynical about things I’m involved in. Thanks to Adam Binks for feedback.
Edit: See also child comment for clarifications/updates.
Edit 2: I think the grantmaking program has different scope than I was expecting; see this comment by Benjamin for more.
Following some of the skeptical comments here, I figured it might be useful to quickly write up some personal takes on forecasting’s promise and what subareas I’m most excited about (where “forecasting” (edit: is defined as things in the vein of "Tetlockian superforecasting" or general prediction markets/platforms, in which questions are often answered by lots of people spending a little time on them, without much incentive to provide deep rationales) is defined as things I would expect to be in the scope of OpenPhil’s program to fund). Given my AI timelines (10th (edit: ~15th) percentile ~2027, median ~late 2030s), I’m most excited about forecasting grants that are closely related to AI, though I’m not super confident that no non-AI-related ones are above the bar. I incorporated some snippets of a reflections section from a previous forecasting retrospective above, but there’s a little that I didn’t include if you’re inclined to check it out.
I feel like I need to reply here, as I work in the industry and am more inclined to defend it.
First, to be clear, I generally agree a lot with Eli on this. But I'm more bullish on epistemic infrastructure than he is.
Here are some quick things I'd flag. I might write a longer post on this issue later.
Thanks Ozzie for sharing your thoughts!
A few things I want to clarify up front:
Thoughts on some of your bullet points:
I was trying to compare previous OP forecasting funding to previous AI safety funding. It's not clear to me how different these were; sure, OP didn't have a forecasting program, but AI safety was also very short-staffed. And re: the field maturing, idk, Tetlock has been doing work on this for a long time; my impression is that AI safety also had very little effort going into it until the mid-to-late 2010s. I agree that funding of potentially promising exploratory approaches is good, though.
Seems reasonable. I did like that post!
Perhaps, but I think you gain a ton of info from actually trying to do stuff and iterating. I think prioritization work can sometimes seem more intuitively great than it ends up being, relative to the iteration strategy.
I would love for this to be true! Am open to changing mind based on a compelling analysis.
There might be some difference in perceptions of the direct EV of marginal AI Safety interventions. There might also be differences in beliefs in the value of (a) prioritization research vs. (b) trying things out and iterating, as described above (perhaps we disagree on absolute value of both (a) and (b)).
Seems reasonable, though I'd guess we have different views on which ambitious AI-related software-heavy projects are most promising.
I think you might be understating how fungible OpenPhil's efforts are between AI safety (particularly governance team) and forecasting. Happy to chat in DM if you disagree. Otherwise reasonable point, though you'd ofc still have to do the math to make sure the forecasting program is worth it.
(edit: actually maybe the disagreement is still in the relative value of the work, depending on what you mean by "much" grantmaking capacity)
Seems unclear what should count as internal research for EA; e.g., are you counting the OP worldview investigation team / AI strategy research in general? And re: AI advancements, they both improve the promise of AI for forecasting/epistemics work and shorten timelines, which points toward direct AI safety technical/gov work.
Thanks for the replies! Some quick responses.
First, again, overall, I think we generally agree on most of this stuff.
I agree to an extent. But I think there are some very profound prioritization questions that haven't been researched much, and that I don't expect us to gain much insight from by experimentation in the next few years. I'd still like us to do experimentation (If I were in charge of a $50Mil fund, I'd start spending it soon, just not as quickly as I would otherwise). For example:
We might be disagreeing a bit on what the bar for "valuable for EA decision-making" is. I see a lot of forecasting like accounting: it rarely leads to a clear and large decision, but it's good to do, and steers organizations in better directions. I personally rely heavily on prediction markets for key understandings of EA topics, and people like Scott Alexander and Zvi seem to as well. I know less about the inner workings of OP, but the fact that they continue to pay for predictions on their own questions seems like a sign. All that said, I think that ~95%+ of Manifold and a lot of Metaculus is not useful at all.
I'm not sure how much to focus on OP's narrow choices here. I found it surprising that Javier went from governance to forecasting, and that previously it was the (very small) governance team that did forecasting. It's possible that if I evaluated the situation, and had control of the situation, I'd recommend that OP moved marginal resources to governance from forecasting. But I'm a lot less interested in this question than I am, "is forecasting competitive with some EA activities, and how can we do it well?"
Yep, I'd count these.
Just chatted with @Ozzie Gooen about this and will hopefully release audio soon. I probably overstated a few things / gave a false impression of confidence in the parent in a few places (e.g., my tone was probably a little too harsh on non-AI-specific projects); hopefully the audio convo will give a more nuanced sense of my views. I'm also very interested in criticisms of my views and others sharing competing viewpoints.
Also want to emphasize the clarifications from my reply to Ozzie:
Audio/podcast is here:
https://forum.effectivealtruism.org/posts/fsnMDpLHr78XgfWE8/podcast-is-forecasting-a-promising-ea-cause-area
I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I’ve yet to see a robust case for it being an effective use of charitable funds (if there is one, please point me to it). I’m worried we are not being objective enough, trying to find the facts that support the conclusion rather than the other way round.
I think the fact that forecasting is a popular hobby is probably pretty distorting of priorities.
There are now thousands of EAs whose experience of forecasting is participating in fun competitions which have been optimised for their enjoyment. This mass of opinion and consequent discourse has very little connection to what should be the ultimate end goal of forecasting: providing useful information to decision makers.
For example, I’d love to know how INFER is going. Are the forecasts relevant to decision makers? Who reads their reports? How well do people figuring out what to forecast understand the range of policy options available and prioritise forecasts to inform them? Is there regular contact and a trusting relationship at senior executive level? Would it help more if the forecasting were faster, or broader in scope?
These are all very important questions but are invisible to forecaster participants so end up not being talked about much.
Yeah, it seems similar to other areas where the discussion around the cause area and the cause area itself may be quite different. (see also the disparity in resources vs discussion around global health vs ai)
The interest within the EA community in forecasting long predates the existence of any gamified forecasting platforms, so it seems pretty unlikely that, at a high level, the EA community is primarily interested because it's a fun game. (This doesn't prove more recent interest isn't driven by the gamified platforms, though my sense is that the current level of relative interest is similar to where it was a decade ago, so it doesn't feel like they made a huge shift.)
Also, AI timelines forecasting work has been highly decision-relevant to a large number of people within the EA community. My guess is it's the single research intervention that has caused the largest shift in altruistic capital allocation in the last few years. There also exists a large number of pretty simple arguments in favor of forecasting work being valuable, which have been made in many places (some links here, also a bunch of Robin Hanson's work on prediction markets).
At a higher level, there are also many instances of new types of derivatives markets increasing efficiency of some market, which would probably also apply to prediction markets.
FYI, just wrote a small piece on "Higher-order forecasts", which I see as the equivalent to derivatives. https://forum.effectivealtruism.org/posts/PB57prp5kEMDgwJsm/higher-order-forecasts
I agree they can help with efficiency.
I feel like the prediction-markets themselves are best modeled as derivative markets. And then you are talking about second-order derivative markets here. But IDK, mostly sounds like semantics.
Yea, that's a reasonable way of looking at it. Agreed it is just semantics.
As semantics though, my guess is that "nth-order forecasts" will be more intuitive to most people than something like "n-1th order derivatives".
I'm considering elaborating on this in a full post, but I'll make the point briefly here as well: it appears to me that there's potentially a misunderstanding here, leading to unnecessary disagreement.
I think that the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, PolyMarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but very different from the kind of questions policymakers want to see answered.
I (and I hope this aligns with OP's vision) would want to see a greater emphasis on advancing forecasting specifically tailored for decision-makers. This focus diverges significantly from the casual or hobbyist approach observed on these platforms. The questions you ask should probably not be public, and they are usually far more boring. In practice, it looks more like an advanced Delphi method than it looks like Manifold Markets. I'm somewhat surprised to see interpretations of this post suggesting a need for more funding in the type of forecasting that is more recreational, which, in my view, is not and should not be a priority.
E: One obvious exception to the dichotomy I describe above is that the more fun forecasting platforms can be a good way of identifying good forecasters.

Personally, I think forecasting specifically for drug development could be very impactful: both in the general sense of aligning fields around the probability of success of different approaches (at a range of scales -- very relevant both for scientists and funders) and in the more specific regulatory use case (public predictions of safety/efficacy of medications as part of approvals by FDA/EMA etc.)
More broadly, predicting the future is hugely valuable. Insofar as effective altruism aims to achieve consequentialist goals, the greatest weakness of consequentialism is uncertainty about the effects of our actions. Forecasting targets that problem directly. The financial system creates a robust set of incentives to predict future financial outcomes -- trying to use forecasting to build a tool with broader purpose than finance seems like it could be extremely valuable.
I don't really do forecasting myself so I can't speak to the field's practical ability to achieve its goals (though as an outsider I feel optimistic), so perhaps there are practical reasons it might not be a good investment. But overall to me it definitely feels like the right thing to be aiming at.
Thanks for the comment, Grayden. For context, readers may want to check the question post Why is EA so enthusiastic about forecasting?.
Thanks for sharing, but nobody on that thread seems able to explain it! Most people there, like here, seem very sceptical.
Would you count Holden's take here as a robust case for funding forecasting as an effective use of charitable funds?
This is my own (possibly very naive) interpretation of one motivation behind some of Open Phil's forecasting-related grants.
Actually, maybe it's also useful to just look at the biggest grants from that list:
Thanks for sharing. It’s a start, but it’s certainly not a proven Theory of Change. For example, Tetlock himself said that nebulous long-term forecasts are hard to do because there’s no feedback loop. Hence, a prediction market on an existential risk will be inherently flawed.
I don't think that really works. You can get feedback from 5 years in 5 years. Metaculus already has some suggestions as to people who are good 5 year forecasters.
None of the above are prediction markets.
COI - I work in forecasting.
Whether or not forecasting is a good use of funds, good decision-making is probably correlated with impact.
So I'm open to the idea that forecasting hasn't been a good use of funds, but a priori it seems like it should be. Forecasting, in one sense, is predicting how decisions will go. How could that not be a good idea in theory?
More robust cases in practice:
I’m glad to see the debate on decision relevance in the comments! I think that if we end up considering forecasting a successful focus area in 5-10 years, thinking hard about the value-add to decision-making will likely have played a crucial role in this success.
As for my own view, I do agree that judgmental / subjective probability forecasting hasn’t been as much of a success story as one might have expected about 10 years ago. I also agree that many of the stories people tell about the impact of forecasting naturally raise questions like “so why isn’t this a huge industry now? Why is this project a non-profit?”. We are likely to ask questions of this kind to prospective grantees way more often than grantmakers in other focus areas.
However, I (unsurprisingly) also disagree with the stronger claim that the lack of a large judgmental forecasting industry is conclusive evidence that forecasting doesn’t provide value, and is just an EA hobby horse. While I don’t have capacity to engage in this debate deeply, a few points of rebuttal:
Also, it was pretty widely covered in broader discourse.
I'm in the process of writing up my thoughts on forecasting in general, and particularly EA's reverence for forecasting, but I feel, similar to @Grayden, that forecasting is a game that is nearly perfectly designed to distract EAs from useful things. It's a combination of winning, being right when others are wrong, and seeming useful, all wrapped into a fun game.
I'd like to see tangible benefits from the broader funding of forecasting, which seems to run into the millions and tens of millions of dollars.
I would also be the type of person you would think would be a greater fan of forecasting. I'm the number one forecaster on Manifold and I've made tens of thousands of dollars on Polymarket. But I think we should start to think of forecasting as more of a game that EAs like to play, something like Magic the Gathering that is fun and has some relations to useful things but isn't really useful by itself.
Maybe Open Phil are doing this because they feel like they often attempt to get good forecasts about stuff they care about in the course of trying to make the best grants they can in other areas, and after they have done that enough times, it seemed sensible to just formally declare that forecasting is something they fund. The theory here isn't "developing forecasting as an art is an EA cause because it will improve worldwide epistemics" or whatever, but rather "we, Open Phil, need good forecasts to get funding decisions about other stuff right".
If they mostly care about AI timelines, subsidize some markets on it. Funding platforms and research doesn’t seem particularly useful here (as opposed to much more direct research).
Fair point.
At some point, I kinda just want to say "ok, where has the forecasting money gone?", and it seems to have overwhelmingly gone to community forecasting sites like Manifold and Metaculus. I don't see anything like "paying 3 teams of 3 forecasters to compete against each other on some AI timelines questions".
Just confirming that informing our own decisions was part of the motivation for past grants, and I expect it to play an important role for our forecasting grants in the future.
That’s directionally true, but I think “overwhelmingly” isn’t right.
Most of these are currently not assigned to forecasting as a cause area, but you can find them here (by searching for “forecast” in our grants database); see especially those before August 2021. [Update: we have updated the labels, and these grants are now listed here.] I expect that we’ll make more of these types of grants now that forecasting is a designated area with more capacity.
This will be a total waste of time and money unless OpenPhil actually pushes the people it funds towards achieving real-world impact. The typical pattern in the past has been to launch yet another forecasting tournament to try to find better forecasts and forecasters. No one cares; we've known how to do this since at least 2012!
The unsolved problem is translating the research into real-world impact. Does the Forecasting Research Institute have any actual commercial paying clients? What is Metaculus's revenue from actual clients rather than grants? Who are they working with and where is the evidence that they are helping high-stakes decision makers improve their thought processes?
Incidentally, I note that forecasting is not actually successful even within EA at changing anything: superforecasters are generally far more relaxed about x-risk than the median EA, but has this made any kind of difference to how EA spends its money? It seems very unlikely.
At the risk of damaging my networks in EA, I am inclined to tentatively agree with some of your comment. Disclaimer here that I have very little interaction with forecasting for various reasons, so this is more of a general comment than anything else.
I think one of the major problems I see in EA as a whole is a fairly loose definition of 'impact'. Often I see five or six groups using vast sums of money and talent to produce research or predictions that are shared and reviewed between each other and then hosted on the websites but never seem to actually be implemented anywhere. There's no external (of EA) stakeholder participation, no follow-up to check for changed trends, no update on how this affects the real-world outside of EA circles.
I don't always think paying clients are the best measurement system for impact, but I do think there needs to be a much higher focus on bridging the connection between high-quality forecasting and real-world decision-makers.
Obviously this doesn't apply everywhere in EA, and there are lots and lots of exemptions, but I do think your comment has merit.
I find the statement is more precise if you put "longtermism" where "EA" is. Is that your sense as well?
I think that's a good modification of my initial point, you may well be right.
Obviously this comment is very true and correct, but it doesn't say great things about the culture of EA that people preface the most milquetoast comments like this with disclaimers about not wanting to blow up their friendships!
I don't think there's actually a risk of CAISID damaging their EA networks here, fwiw, and I don’t think CAISID wanted to include their friendships in this statement.
My sense is that most humans are generally worried about disagreeing with what they perceive to be a social group’s opinion, so I spontaneously don’t think there’s much specific to EA to explain here.
You are correct in that I was referring more to the natural risks associated with disagreeing with a major funder in a public space (even though OP have a reputation for taking criticism very well), and wasn't referring to friendships. I could well have been more clear, and that's on me.
Oh really? Because in typical male-dominated social networks, there are usually pretty high levels of internal disagreement, some of it fairly sharp. Go on any other forum that isn't moderated to within an inch of its life by a team that somehow costs 2 million a year, and where everyone isn't chasing one billionaire's money!
I’m confused about why you think forecasting orgs should be trying to acquire commercial clients.[1] How do you see this as being on the necessary path for forecasting initiatives to reduce x-risk, contribute to positive trajectory change, etc.? Perhaps you could elaborate on what you mean by “real-world impact”?
COI note: I work for Metaculus.
The main exception that comes to mind, for me, is AI labs. But I don’t think you’re talking about AI labs in particular as the commercial clients forecasting orgs should be aiming for?
What better test of the claim "we are producing useful/actionable information about the future, and/or developing workable processes for others to do the same" do we have than some of the thousands of organisations whose survival depends on this kind of information being willing to pay for it?
IMO if a forecasting org does manage to make money selling predictions to companies, that's a good positive update, but if they fail, that's only a weak negative update—my prior is that the vast majority of companies don't care about getting good predictions even if those predictions would be valuable. (Execs might be exposed as making bad predictions; good predictions should increase the stock price, but individual execs only capture a small % of the upside to the stock price vs. 100% of the downside of looking stupid.)
I think if you extend this belief outwards it starts to look unwieldy and “proves too much”. Even if you think that executives don’t care about having access to good predictions the way that business owners do, then why not ask why business owners aren’t paying?
MW Story already said what I wanted to say in response to this, but it should be pretty obvious: if people think of something as more than just a cool parlor trick, and instead regard it as useful and actionable, they should be willing to pay hand over fist for it at proper big-boy consultancy rates. If they aren't, that strongly suggests they just don't regard what you're producing as useful.
And to be honest, it often isn't very useful. Tell someone "our forecasters think there's a 26% chance Putin is out of power in 2 years" and the response will often be "so what?" That by itself doesn't tell you anything about what Putin leaving power might mean for Russia or Ukraine, which is almost certainly what we actually care about (or nuclear war risk, if we're thinking x-risk). The same is true, to a degree, for all these forecasts about AI or pandemics or whatever: they often aren't sharp enough and don't cut to the meat of actual impacts in the real world.
But since you're here, perhaps you can answer my question about your clients, or lack thereof? If I were funding Metaculus, I would definitely want it to be more than a cool science project.
It's worth saying also that we already have one commercial forecasting organisation, Good Judgment (I do a little bit of professional forecasting for them, though it's not my main job). It's not clear why we need another. (I don't know who GJ's clients actually are, though, and presumably I wouldn't be allowed to tell you even if I did. EDIT: Actually, in some cases I think client info became public and/or we were internally told who they were, but I have just forgotten who.)
As the program is about forecasting, what is your stance on the broader field of foresight & futures studies? Why is forecasting more promising than some other approaches to foresight?
I'm not OP, obviously, and I am only speaking from experience here, so I have no data to back this up, but:
My feeling is that foresight projects have a tendency to become political very quickly, and they are much more about stakeholder engagement than they are about finding the truth, whereas forecasting can remain relatively objective for longer.
That being said: I am very excited about combining these approaches.
We are open to considering projects in “forecasting-adjacent" areas, and projects that combine forecasting with ideas from related fields are certainly well within the scope of the program.
As for projects that would exclusively rely on other approaches: My worry is that non-probabilistic foresight techniques typically don’t have more to show in terms of evidence for their effectiveness, while being more ad hoc from a theoretical perspective.
Thanks for asking, SanteriK! For context, readers may want to check the (great!) post A practical guide to long-term planning – and suggestions for longtermism.
I‘m really excited about more thinking and grant-making going into forecasting!
Regarding the comments critical of forecasting as a good investment of resources from a world-improving perspective, here some of my quick thoughts:
- Systematic meritocratic forecasting has a track record of outperforming domain experts on important questions. Examples: geopolitics (see Superforecasting), public health (see COVID), and IIRC also outcomes of research studies.
- In all important domains where humans try to affect things, they are implicitly forecasting all the time and acting on those forecasts. Random examples:
  - "If lab-grown meat becomes cheaper than normal meat, XY% of consumers will switch"
  - "A marginal supply of 10,000 bednets will decrease malaria infections by XY%"
  - Models of climate change projections conditional on emissions
- In many domains humans are already explicitly forecasting and acting on those forecasts: insurance (e.g., forecasts on loan payments), finance (e.g., on interest rate changes), recidivism, weather, and climate.
- Increased use of forecasting has the potential to increase societal sanity:
  - Making people more able to appreciate and process uncertainty in important domains
  - Clearer communication (e.g., less talking past one another by anchoring discussion on real-world outcomes)
  - Establishing feedback loops with resolvable forecasts, creating stronger incentives for being correct and the ability to select people who have better world models
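That last mechanism, using resolved forecasts to select people with better world models, can be made concrete with a Brier-score sketch. This is purely illustrative; the forecaster names and numbers below are made up, not taken from any platform's data:

```python
# Brier score: mean squared error between probabilistic forecasts and
# binary outcomes (lower is better). Once questions resolve, it gives a
# simple way to rank forecasters by track record.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 0/1 resolutions."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records on the same five resolved questions.
outcomes = [1, 0, 0, 1, 1]
alice = [0.9, 0.2, 0.1, 0.8, 0.7]   # confident and well calibrated
bob   = [0.6, 0.5, 0.4, 0.5, 0.5]   # hedges everything near 50%

scores = {"alice": brier_score(alice, outcomes),
          "bob": brier_score(bob, outcomes)}
best = min(scores, key=scores.get)
print(scores, best)  # alice's lower score reflects the sharper calibration
```

The point is just that resolvable questions turn "who has a better world model?" into something measurable, which is what the feedback-loop argument relies on.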
That said, I also think that it's often surprisingly difficult to ask actionable questions when forecasting, and often it might be more important to just have a small team of empowered people with expert knowledge combined with closely coupled OODA loops instead. I remember finding this comment from Jan Kulveit pretty informative:
Source: https://ea.greaterwrong.com/posts/by8u954PjM2ctcve7/experimental-longtermism-theory-needs-data#comment-HgbppQzz3G3hLdhBu
Why do you think there is currently little/no market for systematic meritocratic forecasting services (SMFS)? Even under a lower standard of usefulness -- that blending SMFS in with domain-expert forecasts would improve the utility of forecasts over using only domain-expert input -- that should be worth billions of dollars in the financial services industry alone, and billions elsewhere (e.g., the insurance market).
I don't think the drivers of low "societal sanity" are fundamentally about current ability to estimate probabilities. To use a current example, the reason 18% of Americans believe Taylor Swift's love life is part of a conspiracy to re-elect Biden isn't that our society lacks resources to better calibrate the probability that this is true. The desire to believe things that favor your "team" runs deep in human psychology. The incentives to propagate such nonsense are, sadly, often considerable. The technological structures that make disseminating nonsense easier are not going away.
Thanks, I think that's a good question. Some (overlapping) reasons that come to mind that I give some credence to:
a) relevant markets are simply making an error in neglecting quantified forecasts
b) relevant players train the relevant skills sufficiently well into their employees themselves (e.g. that's my fairly uninformed impression from what Jane Street is doing, and maybe also Bridgewater?)
c) quantified forecasts are so uncommon that it still feels unnatural to most people to communicate them, and it feels cumbersome to be nailed down on giving a number if you are not practiced in it
d) forecasting is a nerdy practice, and those practices need bigger wins to be adopted (e.g. maybe similar to learning programming/math/statistics, working with the internet, etc.)
e) maybe more systematically I'm thinking that it's often not in the interest of entrenched powers to have forecasters call bs on whatever they're doing.
f) maybe previous forecast-like practices ("futures studies", "scenario planning") didn't yield many benefits and made companies unexcited about similar practices (I personally have a vague sense of not being impressed by things I've seen associated with these words)
I agree that things like confirmation bias and myside bias are huge drivers impeding "societal sanity". And I also agree that it won't help a lot here to develop tools to refine probabilities slightly more.
That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. Like other commenters say, it's currently mostly attracting a niche of people who strive for higher epistemic ideals, who try to contribute to better forecasts on important topics, etc. I currently feel like it's not intractable for quantitative forecasts to become more common in epistemic spaces filled with reasonable enough people (e.g., journalism, politics, academia), kinda similar to how tracking KPIs was probably once a niche new practice and is now standard.
this is really cool! i'm excited to watch the forecasting community grow, and for a greater number of impactful forecasting projects to be built.
i'm curious what you're currently excited about (specific projects, broad topic areas, etc). what is OP's theory of change for how forecasting can be most impactful? what sorts of things would you be most excited to see happen?
on the flipside, if — 1/5/20 years from now — we look back and realize that forecasting wasn't so impactful, why do you think that would be the case?
Awesome to hear! I'm happy that OpenPhil has promoted forecasting to its own dedicated cause area with its own team; I'm hoping this provides more predictable funding for EA forecasting work, which otherwise has felt a bit like a neglected stepchild compared to GCR/GHD/AW. I've spoken with both Ben and Javier, who are both very dedicated to the cause of forecasting, and am excited to see what their team does this year!
Preventing catastrophic risks, improving global health, and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports other goals.
Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA it had no institutional commitment; outside of random one-off grants, the largest forecasting funding program I'm aware of over the last 2 years was $30k in "minigrants" funded by Scott Alexander out of pocket.
But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the future is paramount. Steering today's world without understanding the future would be like trying to help people in Africa, but without overseas reporting to guide you - you'll obviously do worse if you can't see outcomes of your actions.
You can make a reasonable argument (as some other commenters do!) that the tractability of forecasting to date hasn't been great; I agree that the most common approaches of "tournament setting forecasting" or "superforecaster consulting" haven't produced much of decision-relevance. But there are many other possible approaches (eg FutureSearch.ai is doing interesting things using an LLM to forecast), and I'm again excited to see what Ben and Javier do here.
Your points on helping future people (and non-human animals) are well taken.
3. I didn't make it. It is great though. I was talking about it on a yearly basis over the last couple of years. That said, I made the comment from memory, so I could be wrong.