Our Forecasting team just launched a new RFP focused on using AI to improve human reasoning in a structured and quantified way. The team plans to make grants in the $100k-$1M range for projects lasting between 6 months and 2 years; proposals will be accepted until at least January 30, 2026.
The RFP includes two areas of interest:
AI for forecasting: We are looking for proposals for AI models that help make forecasts more accurate or more relevant. We are primarily interested in probabilistic, judgmental forecasting, i.e., quantitative forecasts that cannot be based fully on large sets of structured data. Beyond models that directly produce forecasts, ideally approaching or exceeding human performance on forecasting tasks, we're also looking to fund work on models that perform one or more of the subtasks involved in using forecasts for decision-making, such as explaining the reasoning behind forecasts or building forecasting models.
AI for sound reasoning: Modern AI models are being adopted at a rapid pace throughout society, including for high-stakes decisions in law, academia, and policy. We expect this trend to continue over the coming years, and possibly accelerate. It seems crucial to us that models used for highly consequential decisions are generally truth-oriented, and that they support such tendencies among their users. We see two main paths to this goal that we're interested in funding:
- Research into understanding when models do and do not support sound reasoning, including evaluations of models with respect to principles of sound reasoning like truthfulness, meta-reasoning, or consistency.
- Developing tools that directly help with specific tasks that disproportionately support clear reasoning, like fact-checkers, fact tracers, arbitrators, or argument analyzers.
See the full post for more detail, and reach out to forecasting@openphilanthropy.org if you have any questions!
