
“Good judgement” seems to me a useful term, widely used within EA, that points at an important skill. But I don’t think there’s a standard dictionary definition that matches what EA community members have in mind, and people often use the term without defining it. So I decided to collect and summarise in this post all the definitions of the term I’m aware of.

Please comment below if you know of or would suggest additional definitions or sources!

My summaries of people’s definitions

  • Linch (2020) states: “Good judgment can roughly be divided within 2 mostly distinct clusters:
    • Forming sufficiently good world models given practical constraints.
    • Making good decisions on the basis of such (often limited) models.”
  • Todd (2020) describes good judgement as “The ability to weigh complex information and reach calibrated conclusions”, and says someone with good judgement is able to:
    • “Focus on the right questions
    • When answering those questions, synthesise many forms of weak evidence using good heuristics, and weigh the evidence appropriately
    • Be resistant to common cognitive biases by having good habits of thinking
    • Come to well-calibrated conclusions”
  • Cotton-Barratt (2020) describes good judgement as being about mental processes which tend to lead to good decisions, and highlights three major ingredients: understanding, heuristics, and meta-level judgement. Sub-skills of understanding include model-building, having calibrated estimates, and just knowing relevant facts. Meta-level judgement is about how much weight to put on different perspectives.
  • Shlegeris (2019) describes good judgement as being about “Spotting the important questions”, “making quick guesses for answers to questions they care about”, “think[ing] critically about evidence [and spotting] ways that it’s misleading”, “Having good sense about how the world works and what plans are likely to work”, “Knowing when they’re out of their depth, knowing who to ask for help, knowing who to trust.”

(It’s possible that those summaries somewhat misrepresent these people’s full views.)

Disclaimers

  • This post does not attempt to address how to develop this skill or why it matters, even though those are of course key questions. (That said, some of the linked posts and the quotes from them do discuss this.)
    • You might therefore want to just read the above summary and then the linked posts, rather than reading the excerpts I include below.
  • I only spent an hour writing this post, and only included the definitions I already knew and remembered; I didn’t conduct anything close to a thorough search.
  • It’s possible that it’s weird/bad/copyright infringement for me to include such extensive quotes below; please let me know if you think that’s the case.

The definitions in full & in context

These are in order of recency.

Various people, 2020, How can good generalist judgment be differentiated from skill at forecasting?

See the comments there. (But note that I personally think the top comment isn’t useful; I think it casts much too wide a net and differs too much from how other people use the term “good judgement”, which could then create misunderstandings between people.)

One comment I’d like to highlight is from Linch:

“Good judgment can roughly be divided within 2 mostly distinct clusters:

  • Forming sufficiently good world models given practical constraints.
  • Making good decisions on the basis of such (often limited) models.

Forecasting is only directly related to the former, and not the latter (though presumably there are some general skills that are applicable to both). In addition, within the "forming good world models" angle, good forecasting is somewhat agnostic to important factors like:

  • Group epistemics. There are times where it's less important whether an individual has the right world models but that your group has access to the right plethora of models.
    • It may be the case that it's practically impossible for a single individual to hold all of them, so specialization is necessary.
  • Asking the right questions. Having the world's lowest Brier score on something useless is in some sense impressive, but it's not very impactful compared to being moderately accurate on more important questions.
  • Correct contrarianism. As a special case of the above two points, in both science and startups, it is often (relatively) more important to be right about things that others are wrong about than it is to be right about everything other people are right about.

___

Note that "better world models" vs "good decisions based on existing models" isn't the only possible ontology to break up "good judgment."

- Owen uses understanding of the world vs heuristics.

- In the past, I've used intelligence vs wisdom.”
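As a brief aside on Linch’s mention of Brier scores: the Brier score is simply the mean squared error of probability forecasts against binary outcomes, so “the world’s lowest Brier score on something useless” means near-perfect calibration and resolution on unimportant questions. A minimal sketch (the function and the example numbers are mine, not Linch’s):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against binary outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who says 0.9 and 0.8 for events that happen,
# and 0.2 for one that doesn't:
print(round(brier_score([0.9, 0.2, 0.8], [1, 0, 1]), 2))  # 0.03
```

A perfect forecaster (probability 1 on everything that happens, 0 on everything that doesn’t) scores 0, which is why the score alone says nothing about whether the questions were worth asking.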

Benjamin Todd, 2020, Notes on good judgement and how to develop it

“Judgement, which I roughly define as ‘the ability to weigh complex information and reach calibrated conclusions,’ is clearly a valuable skill.

[...]

Why good judgement is so valuable when aiming to have an impact

One reason is lack of feedback. We can never be fully certain which issues are most pressing, or which interventions are most effective. Even in an area like global health – where we have relatively good data on what works – there has been huge debate over the cost effectiveness of even a straightforward intervention like deworming. Deciding whether to focus on deworming requires judgement.

This lack of feedback becomes even more pressing when we come to efforts to reduce existential risks or help the long-term future, and efforts that take a more ‘hits based’ approach to impact. An existential risk can only happen once, so there’s a limit to how much data we can ever have about what reduces them, and we must mainly rely on judgement.1

Reducing existential risks and some of the other areas we focus on are also new fields of research, so we don’t even have established heuristics or widely accepted knowledge that someone can simply learn and apply in place of using their judgement.

You may not need to make these judgement calls yourself – but you at least need to have good enough judgement to pick someone else with good judgement to listen to.

In contrast, in other domains it’s easier to avoid relying on judgement. For instance, in the world of for-profit startups, it’s possible (somewhat) to try things, gain feedback by seeing what creates revenue, and refine from there. Someone with so-so judgement can use other approaches to pursue a good strategy.

Other fields have other ways of avoiding judgement. In engineering you can use well-established quantitative rules to figure out what works. When you have lots of data, you can use statistical models. Even in more qualitative research like anthropology, there are standard ‘best practice’ research methods that people can use. In other areas you can follow traditions and norms that embody centuries of practical experience.

I get the impression that many in effective altruism agree that judgement is a key trait. In the 2020 EA Leaders Forum survey, respondents were asked which traits they would most like to see in new community members over the next five years, and judgement came out highest by a decent margin. 

[...] 

It’s also notable that two of the other most desired traits – analytical intelligence and independent thinking – both relate to what we might call ‘good thinking’ as well. (Though note that this question was only about ‘traits,’ as opposed to skills/expertise or other characteristics.)

[...]

More on what good judgement is

I introduced a rough definition above, but there’s a lot of disagreement about what exactly good judgement is, so it’s worth saying a little more. Many common definitions seem overly broad, making judgement a central trait almost by definition. For instance, the Cambridge Dictionary defines it as:

‘The ability to form valuable opinions and make good decisions’

While the US Bureau of Labor Statistics defines it as:

‘Considering the relative costs and benefits of potential actions to choose the most appropriate one’

I prefer to focus on the rough narrower definition I introduced at the start (and which was used in the survey I mentioned above), which makes judgement more clearly different from other cognitive traits:

‘The ability to weigh complex information and reach calibrated conclusions’

More practically, I think of someone with good judgement as someone able to:

  1. Focus on the right questions
  2. When answering those questions, synthesise many forms of weak evidence using good heuristics, and weigh the evidence appropriately
  3. Be resistant to common cognitive biases by having good habits of thinking
  4. Come to well-calibrated conclusions

Owen Cotton-Barratt wrote out his understanding of good judgement, breaking it into ‘understanding’ and ‘heuristics.’ His notion is a bit broader than mine.

Here are some closely related concepts:

  • Keith Stanovich’s work on ‘rationality,’ which seems to be something like someone’s ability to avoid cognitive biases, and is ~0.7 correlated with intelligence (so, closely related but not exactly the same)
  • The cluster of traits (listed later) that make someone a good ‘superforecaster’ in Philip Tetlock’s work (Tetlock also claims that intelligence is only modestly correlated with being a superforecaster)

Here are some other concepts in the area, but that seem more different:

  • Intelligence: I think of this as more like ‘processing speed’ – your ability to make connections, have insights, and solve well-defined problems. Intelligence is an aid in good judgement – since it lets you make more connections – but the two seem to come apart. We all know people who are incredibly bright but seem to often make dumb decisions. This could be because they’re overconfident or biased, despite being smart.
  • Strategic thinking: Good strategic thinking involves being able to identify top priorities, develop a good plan for working towards those priorities, and improve the plan over time. Good judgement is a great aid to strategy, but a good strategy can also make judgement less necessary (e.g. by creating a good backup plan, you can minimise the risks of your judgement being wrong).
  • Expertise: Knowledge of the topic is useful all else equal, but Tetlock’s work (covered more below) shows that many experts don’t have particularly accurate judgement.
  • Decision making: Good decision making depends on all of the above: strategy, intelligence, and judgement.

[...]

Forecasting isn’t exactly the same as good judgement, but seems very closely related – it at least requires “weighing up complex information and coming to calibrated conclusions”, though it might require other abilities too. That said, I also take good judgement to include picking the right questions, which forecasting doesn’t cover.

All told, I think there’s enough overlap that if you improve at forecasting, you’re likely going to improve your general judgement as well.”

[Todd then discusses traits and practices of good forecasters and how to improve at forecasting, which is also relevant for good judgement.]

Owen Cotton-Barratt, 2020, "Good judgement" and its components

[What follows is the post in its entirety, since it’s short and entirely relevant here. There’s also some good discussion in the comments which I won’t copy or summarise here.]

Meta: Lots of people interested in EA (including me) think that something like "good judgement" is a key trait for the community, but there isn't a commonly understood definition. I wrote a quick version of these notes in response to a question from Ben Todd, and he suggested posting them here. These represent my personal thinking about judgement and its components.

Good judgement is about mental processes which tend to lead to good decisions. (I think good decision-making is centrally important for longtermist EA, for reasons I won't get into here.) Judgement has two major ingredients: understanding of the world, and heuristics.


Understanding of the world helps you make better predictions about how things are in the world now, what trajectories they are on (so how they will be at future points), and how different actions might have different effects on that. This is important for helping you explicitly think things through. There are a number of sub-skills, like model-building, having calibrated estimates, and just knowing relevant facts. Sometimes understanding is held in terms of implicit predictions (perhaps based on experience). How good someone's understanding of the world is can vary a lot by domain, but some of the sub-skills are transferrable across domains.

You can improve your understanding of the world by learning foundational facts about important domains, and by practicing skills like model-building and forecasting. You can also improve understanding of a domain by importing models from other people, although you may face challenges of being uncertain how much to trust their models. (One way that models can be useful without requiring any trust is giving you clues about where to look in building up your own models.)


Heuristics are rules of thumb that you apply to decisions. They are usually held implicitly rather than in a fully explicit form. They make statements about what properties of decisions are good, without trying to provide a full causal model for why that type of decision is good. Some heuristics are fairly general (e.g. "avoid doing sketchy things"), and some apply to specific domains (e.g. "when hiring programmers, put a lot of weight on the coding tests").

You can improve your heuristics by paying attention to your experience of what worked well or poorly for you. Experience might cause you to generate new candidate heuristics (explicitly or implicitly) and hold them as hypotheses to be tested further. They can also be learned socially, transmitted from other people. (Hopefully they were grounded in experience at some point. Learning can be much more efficient if we allow the transmission of heuristics between people, but if you don't require people to have any grounding in their own experience or cases they've directly heard about, it's possible for heuristics to be propagated without regard for whether they're still useful, or if the underlying circumstances have changed enough that they shouldn't be applied. Navigating this tension is an interesting problem in social epistemology.)

One of the reasons that it's often good to spend time with people with good judgement is that you can make observations of their heuristics in action. Learning heuristics is difficult from writing, since there is a lot of subtlety about the boundaries of when they're applicable, or how much weight to put on them. To learn from other people (rather than your own experience) it's often best to get a chance to interrogate decisions that were a bit surprising or didn't quite make sense to you. It can also be extremely helpful to get feedback on your own decisions, in circumstances where the person giving feedback has high enough context that they can meaningfully bring their heuristics to bear.


Good judgement generally wants a blend of understanding the world and heuristics. Going just with heuristics makes it hard to project out and think about scenarios which are different from ones you've historically faced. But our ability to calculate out consequences is limited, and some forms of knowledge are more efficiently incorporated into decision-making as heuristics rather than understanding about the world.

One kind of judgement which is important is meta-level judgement about how much weight to put on different perspectives. Say you are deciding whether to publish an advert which you think will make a good impression on people and bring users to your product, but contains a minor inaccuracy which would require much more awkward wording to avoid. You might bring to bear the following perspectives:

A) The heuristic "don't lie"

B) The heuristic "have snappy adverts"

C) The implicit model which is your gut prediction of what will happen if you publish

D) The explicit model about what will happen that you drew up in a spreadsheet

E) The advice of your partner

F) The advice of a professional marketer you talked to

Each of these has something legitimate to contribute. The choice of how to reach a decision is a judgement, which I think is usually made by choosing how much weight to put on the different perspectives in this circumstance (including sometimes just letting one perspective dominate). These weights might in turn be informed by your understanding of the world (e.g. "marketers should know about this stuff"), and also by your own experience ("wow, my partner always seems to give good advice on these kinds of tricky situations").

I think that almost always the choice of these weights is a heuristic (and that the weights themselves are generally implicit rather than explicit). You could develop understanding of the world which specifies how much to trust the different perspectives, but as boundedly rational actors, at some point we have to get off the understanding train and use heuristics as shortcuts (to decide when to spend longer thinking about things, when to wrap things up, when to make an explicit model, etc.).


Overall I hope that people can develop good object-level judgement in a number of important domains (strategic questions seem particularly tricky+important, but judgement about technical domains like AI, and procedural domains like how to run organisations also seem very strongly desirable; I suspect there's a long list of domains I'd think are moderately important). I also hope we can develop (and support people to develop) good meta-level judgement. When decision-makers have good meta-level judgement this can act as a force-multiplier on the presence of the best accessible object-level judgement in the epistemic system. It can also add a kind of robustness, making badly damaging mistakes quite a lot less likely.
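Owen’s picture of meta-level judgement as choosing weights on perspectives can be loosely sketched as a weighted average. Everything below — the perspective names, their scores, the weights, and the aggregation rule — is my own illustrative assumption, not something from his post; in practice, he notes, the weights are usually implicit:

```python
# Each perspective gives a recommendation in [0, 1]: a rough probability
# that publishing the advert is a good idea. (Hypothetical numbers.)
perspectives = {
    "heuristic: don't lie": 0.1,
    "heuristic: have snappy adverts": 0.8,
    "gut prediction": 0.6,
    "spreadsheet model": 0.7,
    "partner's advice": 0.4,
    "marketer's advice": 0.75,
}

# Meta-level judgement: how much weight to put on each perspective here.
weights = {
    "heuristic: don't lie": 3.0,
    "heuristic: have snappy adverts": 1.0,
    "gut prediction": 1.5,
    "spreadsheet model": 1.0,
    "partner's advice": 2.0,
    "marketer's advice": 1.5,
}

def aggregate(perspectives, weights):
    """Weighted average of the perspectives' recommendations."""
    total = sum(weights.values())
    return sum(perspectives[k] * weights[k] for k in perspectives) / total

score = aggregate(perspectives, weights)
print(f"publish? score={score:.2f} -> {'yes' if score > 0.5 else 'no'}")
```

Letting one perspective dominate corresponds to giving it nearly all the weight; the interesting (and, per Owen, itself heuristic) step is picking the weights for the circumstance at hand.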

Buck Shlegeris, 2019, Thoughts on doing good through non-standard EA career pathways 

“When I say someone has good judgement, I mean that I think they’re good at the following things:

  • Spotting the important questions. When they start thinking about a topic (How good is leaflet distribution as an intervention to reduce animal suffering? How should we go about reducing animal suffering? How worried should we be about AI x-risk? Should we fund this project?), they come up with key considerations and realize what they need to learn more about in order to come to a good decision.
  • Having good research intuitions. They are good at making quick guesses for answers to questions they care about. They think critically about evidence that they are being presented with, and spot ways that it’s misleading.
  • Having good sense about how the world works and what plans are likely to work. They make good guesses about what people will do, what organizations will do, how the world will change over time. They have good common sense about plans they’re considering executing on; they rarely make choices which seem absurdly foolish in retrospect.
  • Knowing when they’re out of their depth, knowing who to ask for help, knowing who to trust.

These skills allow people to do things like the following:

  • Figure out cause prioritization
  • Figure out if they should hire someone to work on something
  • Spot which topics are going to be valuable to the world for them to research
  • Make plans based on their predictions for how the world will look in five years
  • Spot underexplored topics
  • Spot mistakes that are being made by people in their community; spot subtle holes in widely-believed arguments

I think it’s likely that there exist things you can read and do which make you better at having good judgement about what’s important in a field and strategically pursuing high impact opportunities within it. I suspect that other people have better ideas, but here are some guesses. (As I said, I don’t think that I’m overall great at this, though I think I’m good at some subset of this skill.)

  • Being generally knowledgeable seems helpful.
  • Learning history of science (or other fields which have a clear notion of progress) seems good. I’ve heard people recommend reading contemporaneous accounts of scientific advancements, so that you learn more about what it’s like to be in the middle of shifts.
  • Perhaps this is way too specific, but I have been trying to come up with a general picture of how science advances by talking to scientists about how their field has progressed over the last five years and how they expect it to progress in the next five. For example, maybe the field is changing because computers are cheaper now, or because we can make more finely tuned lasers or smaller cameras, or because we can more cheaply manufacture something. I think that doing this has given me a somewhat clearer picture of how science develops, and what the limiting factors tend to be.
  • I think that you can improve your skills at this by working with people who are good at it. To choose some arbitrary people, I’m very impressed by the judgement of some people at Open Phil, MIRI, and OpenAI, and I think I’ve become stronger from working with them.
  • The Less Wrong sequences try to teach this kind of judgement; many highly-respected EAs say that the Sequences were very helpful for them, so I think it’s worth trying them out. I found them very helpful. (Inconveniently, many people whose judgement that I’m less impressed with are also big fans of the Sequences. And many smart EAs find them offputting or unhelpful.)”


Comments

Thanks Michael, that's a useful collection.

I think I like Linch's basic definition most, maybe because it's so close to the concepts of epistemic and instrumental rationality, which I found useful before. I'll extend his definition from the summary a little with points touched upon by the other definitions:

Good judgment can roughly be divided within 2 mostly distinct clusters:

  • Forming sufficiently good world models given practical constraints.
    • building world models that are useful for you and your community's world model portfolio
    • efficiently seeking and using diverse forms of evidence
    • learning models from people who have shown good judgement
    • being able to derive calibrated forecasts from your models
  • Making good decisions on the basis of such (often limited) models.
    • strategically focussing on highest priority decisions
    • using heuristics that are informed by and selected based on feedback
    • seeking & weighing advice from people with relevant knowledge
    • being reflective about cognitive biases & previous mistakes

(Note: Linch is currently my supervisor & Michael is another senior manager in my department, so take my positive feedback with a grain of salt :P)
