
I'm interested in bringing forecasting techniques to my company, as Danny Hernandez described in HBR. If anyone has experience doing this, how did it play out and what were the key obstacles?




Our venture capital fund (alt protein) recently did training in forecasting and decision-making (based on the approaches in Superforecasting, How to Decide, and The Scout Mindset).

As a result, we're currently revamping our evaluation process to try to reduce bias and explicitly think in terms of scenarios, probabilities, and expected-value return multiples, rather than our old approach of guesstimating the likely outcome and scoring around that single scenario. (We're also participating in some forecasting exercises to help us understand possible paths for the technology in more detail.)
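To make the contrast with single-scenario scoring concrete, here's a minimal sketch of what scenario-based expected-value scoring can look like (the scenario names, probabilities, and return multiples below are entirely made up for illustration):

```python
# Hypothetical sketch: score a deal across scenarios rather than one guess.
# All numbers are illustrative, not from any real evaluation.
scenarios = [
    ("total loss",  0.50,  0.0),   # (name, probability, return multiple)
    ("modest exit", 0.30,  2.0),
    ("strong exit", 0.15, 10.0),
    ("outlier",     0.05, 50.0),
]

# Probabilities over mutually exclusive scenarios must sum to 1.
assert abs(sum(p for _, p, _ in scenarios) - 1.0) < 1e-9

expected_multiple = sum(p * m for _, p, m in scenarios)
print(f"Expected return multiple: {expected_multiple:.1f}x")  # 4.6x
```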

So far the key obstacle has been figuring out how to adopt these new techniques without spending vastly more time on analysis, which would leave us less time for generating dealflow and make founders wait longer for feedback and go/no-go decisions.

Another practical challenge is the nuts and bolts of taking various expert inputs and actually coming up with the predictions (Superforecasting doesn't really go into the details).
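For what it's worth, one simple aggregation rule from the forecasting literature is the geometric mean of odds. A minimal sketch, with hypothetical expert probabilities:

```python
import math

def pool_forecasts(probs):
    """Combine individual probability forecasts via the geometric mean
    of odds, one common aggregation rule from the forecasting literature.
    Assumes each probability is strictly between 0 and 1."""
    odds = [p / (1 - p) for p in probs]
    pooled_odds = math.prod(odds) ** (1 / len(odds))
    return pooled_odds / (1 + pooled_odds)

expert_forecasts = [0.60, 0.75, 0.40]  # hypothetical expert inputs
print(f"Pooled forecast: {pool_forecasts(expert_forecasts):.2f}")  # 0.59
```

Note the pooled result (0.59) sits close to, but not exactly at, the simple average: pooling in odds space tends to give somewhat more weight to confident forecasts than a plain arithmetic mean of probabilities.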

Hi Simon, 

May I ask who provided the forecasting training? My team is also interested in training to reduce bias and think more probabilistically. We've all read Superforecasting and Scout Mindset, etc. Ready to make it practical!

Simon Newstead
Sure, we self-studied from an agenda that the two of us put together:
- those 3 books
- some Clearer Thinking modules (great resource)
- practicing forecasting on Metaculus
- reading some other articles and interviews

Here's a doc with all the details (it also contains book notes, if that's of interest).

You might be interested in this talk from Dan Schwarz (current Metaculus CTO, former Google employee) on "Prediction Markets at Google, and Lessons in Corporate Forecasting": 

 

It gives a reasonably thorough history of prediction markets at Google, and some insight into the use of forecasting at Google in his time there. Probably my main takeaway was the claimed importance of forecasting questions that resolve quickly (the section at 32:17) in producing high-quality forecasters, who can learn from the fast feedback. It sounds very plausible to me, but I'm not aware of anything in the forecasting literature that conclusively shows it to be the case.

Related, so I'll allow myself to interject here: I want to introduce prediction markets at my company. Is there a private solution for prediction markets?

I searched but didn't find any.

Thanks!
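In case it helps while you search: the standard automated market maker behind many internal prediction markets is Hanson's Logarithmic Market Scoring Rule (LMSR). Here's a minimal sketch of the core mechanism, should you end up rolling your own (a real self-hosted tool would of course need accounts, question resolution, and payouts on top):

```python
import math

class LMSRMarket:
    """Minimal sketch of Hanson's Logarithmic Market Scoring Rule (LMSR),
    the automated market maker commonly used in prediction markets.
    Illustrative only, not production code."""

    def __init__(self, outcomes, b=100.0):
        self.b = b                            # liquidity parameter
        self.q = {o: 0.0 for o in outcomes}   # shares outstanding per outcome

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(v / self.b) for v in q.values()))

    def price(self, outcome):
        """Current implied probability of an outcome."""
        z = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        """Charge the cost difference for buying `shares` of `outcome`."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket(["yes", "no"])
print(market.price("yes"))    # 0.5 at launch
print(market.buy("yes", 50))  # ~28.1 cost to buy 50 YES shares
print(market.price("yes"))    # ~0.62, price moves up after the buy
```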
