
In EA, there appears to be an interest in "good judgment," sometimes also called "rationality."

There is also interest in forecasting.

My question is, what are the concrete, operationalized differences between skill at forecasting vs having good judgment?

I'm not asking this question facetiously. For example, the parent company/organization of Superforecasting brands itself as the "Good Judgment Project."

But at the same time, when I think about "being good at forecasting" and "having good judgment," I often think of many different qualities. So how can we cleanly separate the two?


4 Answers

Maybe I've misunderstood, but in my humble opinion and limited experience, forecasting is just a tiny fraction of good judgement (maybe about 1%, depending on how broadly you define forecasting). It can be useful, but it is somewhat overrated by the EA community.

Other aspects of good judgment may include things like:

  • Direction setting
  • Agenda setting
  • Being conscious of when you change direction part way through a judgement
  • Understanding the range of factors that are important to a judgement
  • Knowing how long to spend and how much effort to invest in a judgement
  • Brainstorming
  • Creative thinking
  • Solutions design
  • Research skills
  • Information processing
  • Group dynamics for consensus building or finding challenge
  • Knowing who to trust
  • Drawing analogies to other similar situations
  • Knowing when analogies are likely to be valid
  • Good intuition
  • Methods for digging into intuitions
  • Ability to test and moderate your intuition
  • Scenario planning (distinct from foresight?)
  • Horizon scanning (distinct from foresight?)
  • Foresight and predictions
  • Robust decision-making
  • A range of models of the world which can inform the judgment
  • Good heuristics
  • Systems thinking
  • Self-awareness
  • Ability to adjust for unknown unknowns
  • Seeking evidence that contradicts the way you may want to go
  • Understanding and counteracting other biases
  • Understanding statistics
  • Accounting for statistical issues like regression to the mean or the optimiser's curse (see the sketch after this list)
  • Making quantitative comparisons
  • Weighing up pros and cons
  • Other generic decision-making tools that can be applied, of which there are many
  • Specific decision-making tools applicable to specific situations
  • Knowing which of the above is most relevant to a judgement
  • Ability to bring all of the above together
  • Speed at bringing all the above together
  • Preparing for and understanding the consequences of having made the wrong judgement
  • Ability to relearn and update a judgement later with new evidence
  • Etc
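
To illustrate the optimiser's curse flagged in the list above, here is a minimal simulation sketch (my own illustration with assumed numbers, not something from the original answer): even when every estimate is unbiased, the option you select for having the highest estimate will, on average, look better than it really is.

```python
import random

# Minimal sketch of the optimiser's curse. Assumptions (mine, for
# illustration): ten options with identical true value, and unbiased
# but noisy estimates of each.
random.seed(0)
n_options, n_trials, noise = 10, 10_000, 1.0
true_value = 0.0  # every option is genuinely worth the same

overshoot = 0.0
for _ in range(n_trials):
    estimates = [true_value + random.gauss(0, noise) for _ in range(n_options)]
    overshoot += max(estimates) - true_value  # bias of the chosen option's estimate

# Each individual estimate is unbiased, but the maximum of several noisy
# estimates is biased upward, so naively trusting the estimate of your
# chosen option systematically overstates its value.
print(f"Average overshoot of the chosen option: {overshoot / n_trials:.2f}")
```

With these numbers, the chosen option's estimate overshoots its true value by roughly 1.5 standard deviations on average, which is why the estimate for whichever option "won" your comparison should be regressed toward the mean.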

I think it would be clearer to put many of these under different categories than to lump everything under judgement. In my post I also cover the following, and try to sketch how they're different:

  • Intelligence
  • Decision-making
  • Strategy

I should have maybe mentioned creativity as another category.

I also contrast 'using judgement' with alternatives like statistical analysis, applying best practice, and quantitative models, though you might draw on these in making your judgement.

weeatquince
Thanks Ben, super useful. @Linch I was taking a very, very broad view of judgment. Ben's post is much better and breaks things down in a much nicer way. I also made a (not particularly successful) stab at explaining some aspects of not-foresight-driven judgement here: https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other#Story_1__RAND_and_the_US_military

Thanks a lot for the answer! A lot of the things you put into "other" (which is a very long list, btw!) are things I'd put under "forecasting." I wonder where the crux is?

Linch
Some examples (non-exhaustive) of things I consider to be closer to "forecasting" than "not forecasting."
alex lawsen
I also understand all of these as very important to forecasting.

If you're good at forecasting, it's reasonable to expect you'll be above average at reasoning or decision-making tasks that require making predictions.

But judgment is potentially different. In Prediction Machines, Agrawal et al. separate judgment and prediction as two distinct parts of decision-making, where the former involves weighing trade-offs. That's harder to measure, but it offers a potentially distinct way to think about the difference between judgment and forecasting. They also have a theoretical paper on this decision-making model.

I think I agree with this answer.

To answer my own question, here is my best guess for how "good judgment" is different from "skill at forecasting."

Good judgment can roughly be divided into two mostly distinct clusters:

  • Forming sufficiently good world models given practical constraints.
  • Making good decisions on the basis of such (often limited) models.

Forecasting is only directly related to the former, and not the latter (though presumably there are some general skills that are applicable to both). In addition, within the "forming good world models" angle, good forecasting is somewhat agnostic to important factors like:

  • Group epistemics. There are times when it matters less whether any one individual has the right world models and more whether your group collectively has access to the right range of models.
    • It may be the case that it's practically impossible for a single individual to hold all of them, so specialization is necessary.
  • Asking the right questions. Having the world's lowest Brier score on something useless is in some sense impressive, but it's not very impactful compared to being moderately accurate on more important questions (a worked example follows this list).
  • Correct contrarianism. As a special case of the above two points, in both science and startups, it is often (relatively) more important to be right about things that others are wrong about than it is to be right about everything other people are right about.
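
To make the Brier score point concrete, here is a minimal sketch (my own worked example, with made-up numbers): the Brier score is just the mean squared error between probabilistic forecasts and binary outcomes, and it says nothing about whether the question was worth asking.

```python
# Minimal Brier score sketch for binary forecasts (illustrative numbers only).
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A near-perfect score on a trivial question...
print(brier_score([0.99, 0.01], [1, 0]))  # ~0.0001
# ...versus a merely decent score on, say, a question that actually matters.
print(brier_score([0.7, 0.3], [1, 0]))    # 0.09
```

Nothing in the score itself distinguishes the two cases; question selection has to come from somewhere other than forecasting accuracy.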

___

Note that "better world models" vs "good decisions based on existing models" isn't the only possible ontology to break up "good judgment."

- Owen uses understanding of the world vs heuristics.
- In the past, I've used intelligence vs wisdom.

From the post you refer to:

There are a number of sub-skills, like model-building, having calibrated estimates, and just knowing relevant facts.

Calibrated estimates for future events are the goal of forecasting, and while model-building and knowledge are valuable for this, I think they're valuable in other ways, too. I think another component of good judgement is being able to judge which problems to work on in the first place and how much effort and resources to put into them, which falls under instrumental rationality. You need to decide which problems to apply your forecasting skills to, and I don't think this is a forecasting problem.
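
As a concrete illustration of what "calibrated" means here (a sketch under my own assumptions, with hypothetical data): a forecaster is calibrated if, among the events they assign roughly 70% probability, roughly 70% actually happen.

```python
from collections import defaultdict

# Minimal calibration check (hypothetical forecasts and outcomes): bucket the
# forecasts and compare each bucket's average probability to the observed
# frequency of the events in that bucket.
def calibration_table(forecasts, outcomes, n_buckets=10):
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[min(int(p * n_buckets), n_buckets - 1)].append((p, o))
    for b in sorted(buckets):
        pairs = buckets[b]
        avg_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        print(f"forecast ~{avg_p:.0%}: happened {freq:.0%} of the time (n={len(pairs)})")

calibration_table([0.7, 0.7, 0.7, 0.7, 0.2, 0.2], [1, 1, 1, 0, 0, 1])
```

Note that this only measures epistemic accuracy on the questions you happened to ask; it says nothing about whether those were the right questions or problems to work on, which is the point above.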

Also, my understanding is that forecasting is specific to predicting possible future events, and would not include having reasonable views on fundamental research questions, e.g. about consciousness, in physics, in normative ethics, etc.

(I suppose you could try to forecast the answers of experts, or even hypothetical experts, for fundamental research questions, but experts can be wrong, and this seems like a pretty unusual application and an ad hoc way to get at fundamental research questions.)

Comments

Cambridge Dictionary defines judgement as:

the ability to form valuable opinions and make good decisions

Forecasting isn't (at least not directly) about decision-making (cf. instrumental rationality) but just about knowledge and understanding (epistemic rationality).

A bit tangential, but may still be of interest: a recent paper argued that there are two competing standards of good judgement: rationality and reasonableness.

Normative theories of judgment either focus on rationality (decontextualized preference maximization) or reasonableness (pragmatic balance of preferences and socially conscious norms). ... Lay rationality is reductionist and instrumental, whereas reasonableness integrates preferences with particulars and moral concerns.