
I’ve argued in my unawareness sequence that when we properly account for our severe epistemic limitations, we are clueless about our impact from an impartial altruistic perspective.

However, this argument and my responses to counterarguments involve a lot of moving parts. And the term “clueless” gets used in several importantly different ways. Given previous EA and academic writings on cluelessness, it can be easy to misunderstand which claims I am (not) making.

So, as a “guide” to these arguments, I’ve compiled this list of questions, along with resources that answer them. Caveats:

  • Most of the resources are my own work — not because I necessarily think I’ve given the best answers, but because the precise claims and framings that other works use might be subtly yet importantly different from mine. I also include references to writings that I have not (co-)authored, for more context. But these authors don’t necessarily endorse my claims.
  • When I link to a reply to someone else’s comment, I don’t mean to claim that the person being replied to endorses the exact statement of the objection I’ve given in this post.

What are unawareness, indeterminacy, and cluelessness?: The basics

  1. What’s the connection between unawareness and cluelessness? Are there arguments for cluelessness besides the argument from unawareness?
    1. Comment by me
    2. Mogensen (2019)
    3. Further reading:
      1. Roussos (2021)
      2. “Motivating example” in “Should you go with your best guess?: Against precise Bayesianism and related views”
  2. What’s the difference between (A) accounting for unawareness, or having imprecise credences, and (B) just being really uncertain, or needing to think more before acting? You say we should use intervals of {probabilities} / {values of outcomes} / {expected values} instead of single numbers. What do these intervals mean? (See the toy code sketch at the end of this list for a concrete illustration.)
    1. “Unawareness vs. uncertainty” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
    2. “The structure of indeterminacy” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    3. Further reading:
      1. “Degrees of imprecision from unawareness” in “Why intuitive comparisons of large-scale impact are unjustified”
      2. Tarsney et al. (2024, Sec. 3)
  3. If you don’t use EV (or heuristics meant to approximate EV), how do you make decisions?
    1. “Unawareness-inclusive expected value (UEV)” in “Why impartial altruists should suspend judgment under unawareness”
    2. “Suspending judgment on total effects, and choosing based on other reasons” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    3. Further reading:
      1. Clifton (2025a)
      2. Bradley (2012, Sec. 5)
  4. What’s the connection between …
    1. … indeterminacy and imprecision / imprecise probabilities?
      1. “Indeterminate Bayesianism” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    2. … indeterminacy/imprecision and incompleteness?
      1. “Appendix: Indeterminacy for ideal agents” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    3. … indeterminacy/imprecision and incomparability?
      1. “Degrees of imprecision from unawareness” in “Why intuitive comparisons of large-scale impact are unjustified”
  5. What’s the positive motivation for having indeterminate/imprecise credences, or assigning indeterminate/imprecise values to outcomes?
    1. “Motivating example” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    2. “Degrees of imprecision from unawareness” in “Why intuitive comparisons of large-scale impact are unjustified”
    3. Further reading:
      1. Bradley (2012, Sec. 4.3)
      2. Bradley (2017, Sec. 11.3-11.4)
  6. You say we should have imprecise credences (etc.) because picking a precise credence is “arbitrary”. Are you saying we need to justify everything from precisely formalizable principles? That seems doomed.
    1. “Why not just do what works?” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
    2. “Non-pragmatic principles” in “Winning isn’t enough”
    3. Further reading:
      1. “Reasons for belief” in Clifton (2025a)
      2. Comment by me
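
To make questions 2 and 3 above concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from any of the cited works, and all numbers are hypothetical. It shows how a set of credence functions (a “representor”) yields an interval of expected values rather than a single number, and how one standard decision rule for imprecise credences, maximality, then compares actions:

```python
# Toy model of imprecise credences. All numbers are hypothetical.

# Two actions, each with stipulated values for three coarse outcomes.
values = {
    "A": [10.0, -5.0, 0.0],
    "B": [4.0, 1.0, 0.0],
}

# The "representor": the set of probability assignments over outcomes
# that the agent's evidence fails to rule out (here, just three).
representor = [
    [0.6, 0.3, 0.1],
    [0.3, 0.6, 0.1],
    [0.45, 0.45, 0.1],
]

def ev(vals, probs):
    """Expected value of one action under one credence function."""
    return sum(v * p for v, p in zip(vals, probs))

# Instead of one EV per action, each action gets a range of EVs.
for action, vals in values.items():
    evs = [ev(vals, p) for p in representor]
    print(action, "EVs across the representor:", evs,
          "-> interval:", (min(evs), max(evs)))

# Maximality: an action is permissible unless some rival has higher EV
# under EVERY credence function in the representor.
def beaten_everywhere(a, b):
    return all(ev(values[b], p) > ev(values[a], p) for p in representor)

for a in values:
    permissible = not any(beaten_everywhere(a, b) for b in values if b != a)
    print(a, "is permissible under maximality:", permissible)
```

In this toy case the admissible credence functions disagree about whether A or B has the higher EV, so maximality deems both permissible. Whether that permissiveness is a bug or a feature is exactly what the resources under “Maximality is too permissive” discuss.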

Why aren’t precise credences and EV the appropriate response to these problems?

  1. Sure, we don’t have an exact probability distribution over possible outcomes with exact values assigned to them. But aren’t we still ultimately aiming for the highest-EV action? And can’t we do that using best-guess proxies for the EV?[1]
    1. “Unawareness vs. uncertainty” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
    2. “Okay, But Shouldn’t We Try to Approximate the Bayesian Ideal?” in Violet Hour (2023)
    3. Further reading:
      1. Comment by Clifton
  2. Why not aggregate our interval of {probabilities} / {values of outcomes} / {expected values} using a meta-distribution? (E.g., just take the midpoint.) Don’t we leave out information otherwise? (See the toy sketch of midpoint aggregation after this list.)
    1. “The “better than chance” argument, and other objections to imprecision” in “Why intuitive comparisons of large-scale impact are unjustified”
    2. “Maximality is too permissive” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    3. Further reading:
      1. “Aggregating our representor with higher-order credences uses more information” in “Should you go with your best guess?: Against precise Bayesianism and related views”
      2. Clifton (2025b)
      3. Mogensen and Thorstad (2020, Sec. 4.4)
      4. Bradley (2017, Sec. 13.2)
  3. Can’t we always say which action is net-better as long as our intuitions are at least somewhat better than chance? Or, as long as there’s some similarity between promoting the impartial good and decision problems we’re much more familiar with?
    1. “The “better than chance” argument, and other objections to imprecision” in “Why intuitive comparisons of large-scale impact are unjustified”
    2. “Meta-extrapolation” in “Why existing approaches to cause prioritization are not robust to unawareness”
    3. Further reading:
      1. “Aggregating our representor with higher-order credences uses more information” in “Should you go with your best guess?: Against precise Bayesianism and related views”
  4. Aren’t your credences just your acceptable betting odds, which are precise?
    1. “Background on degrees of belief and what makes them rational” and “Suspending judgment on total effects, and choosing based on other reasons” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    2. Further reading:
      1. Eriksson and Hájek (2007)
      2. Carlsmith (2021)
  5. You say that picking a precise credence/EV is arbitrary. Isn’t the cutoff between the numbers you include vs. exclude in imprecise credences/intervals of EVs also arbitrary?
    1. “Indeterminate Bayesianism” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    2. Comment by me
    3. Further reading:
      1. Lyon (2017)
      2. Bradley (2012, Sec. 4.3.6)
  6. If you have imprecise credences or incomplete preferences, can’t you get money-pumped or otherwise take a dominated strategy? (And if you apply some patch to avoid dominated strategies, aren’t you just acting like a precise EV maximizer?)
    1. Petersen (2023)
    2. “A money-pump for Completeness” in Thornley (2023)
    3. “Avoiding dominated strategies” in “Winning isn’t enough”
    4. Further reading:
      1. Bradley and Steele (2014)
      2. Bradley (2022)
      3. Hedden (2015)[2]
  7. Sure, you don’t need to have precise probabilities and evaluate actions based on EV to avoid money pumps. Still, don’t coherence/representation theorems collectively suggest that precise EV maximization is normatively correct? (As Yudkowsky puts it, “We have multiple spotlights all shining on the same core mathematical structure [of expected utility]”.)[3]
    1. “Unawareness vs. uncertainty” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
    2. “Avoiding dominated strategies” in “Winning isn’t enough”
    3. Further reading:
      1. Rethink Priorities (2023, Sec. 3.2)
      2. Hájek (2008)
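
As a companion to the earlier sketch, here is a toy illustration of question 2 in this section (again, all numbers are hypothetical). The point it illustrates: collapsing an interval of expected values to a single summary statistic, such as its midpoint, manufactures a determinate verdict that the underlying credal state does not support.

```python
# Toy illustration of midpoint aggregation. All numbers are hypothetical.

# Suppose a representor-style calculation (as in the earlier sketch)
# yields these intervals of expected values for two actions.
ev_intervals = {
    "A": (-1.0, 5.0),   # the representor disagrees even about A's sign
    "B": (1.5, 2.3),    # B is positive under every admissible credence
}

def midpoint(interval):
    lo, hi = interval
    return (lo + hi) / 2

for action, interval in ev_intervals.items():
    print(action, "interval:", interval, "midpoint:", midpoint(interval))

# Midpoints: A -> 2.0, B -> 1.9. Aggregating by midpoint delivers the
# determinate verdict "A is net-better than B", even though under some
# admissible credence functions A is net-negative while B is robustly
# positive. The objection in the cited resources is that the choice of
# aggregator (midpoint, mean under some meta-distribution, ...) simply
# re-introduces the arbitrary precision the interval was meant to avoid.
```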

Aren’t we not clueless (in practice) because…?

  1. We’re surely not entirely clueless in mundane contexts. And it would be arbitrary to posit a sharp discontinuity between those contexts and promoting the impartial good, since the complexity of a decision problem lies on a continuous spectrum. So aren’t we not entirely clueless about promoting the impartial good?
    1. “When is unawareness not a big deal?” and “Why we’re especially unaware of large-scale consequences” in “Why intuitive comparisons of large-scale impact are unjustified”
    2. Further reading:
      1. Comment by Daniel
  2. Sure, there’s some imprecision in our estimates, but aren’t at least some interventions good by a wide enough margin that the imprecision doesn’t matter?
    1. “Reasons to suspend judgment on comparisons of strategies’ UEV” in “Why impartial altruists should suspend judgment under unawareness”
    2. Further reading:
      1. “Case study revisited” in “Why existing approaches to cause prioritization are not robust to unawareness”
  3. Why not just use the strategies (or credence-forming methods) that work best, either empirically or in toy experiments resembling our situation?
    1. “Heuristics” in “Winning isn’t enough”
    2. “Meta-extrapolation” in “Why existing approaches to cause prioritization are not robust to unawareness”
    3. Further reading:
      1. Williamson (2022, Sec. 1.4.2)
  4. Come on, do you really think [obviously good/bad thing] is no better/worse than staying at home watching cat videos? Isn’t this just radical skepticism?
    1. “When is unawareness not a big deal?” and “Why we’re especially unaware of large-scale consequences” in “Why intuitive comparisons of large-scale impact are unjustified”
    2. Further reading:
      1. “Maximality is too permissive” in “Should you go with your best guess?: Against precise Bayesianism and related views”
  5. Why not wager on the possibility that we’re not clueless?
    1. “The “better than chance” argument, and other objections to imprecision” and “Appendix A: The meta-epistemic wager?” in “Why intuitive comparisons of large-scale impact are unjustified”
    2. Further reading:
      1. “Meta-extrapolation” in “Why existing approaches to cause prioritization are not robust to unawareness”
  6. Superforecasters do better than chance at predicting complex outcomes, so aren’t we not clueless?
    1. “Precise forecasts do better than chance” in “Should you go with your best guess?: Against precise Bayesianism and related views”
    2. “Unawareness and superforecasting” in “Why intuitive comparisons of large-scale impact are unjustified”
    3. Further reading:
      1. “Mechanisms, Metaculus, and World-Models” in Violet Hour (2023)
  7. Shouldn’t we treat the unknown unknowns as canceling out in expectation, since we can’t say anything about them either way? Or at least, can’t we extrapolate from what we do know? Even if we’re biased, it would be surprising for our biases to be highly anti-inductive in expectation.
    1. “Symmetry” and “Extrapolation” in “Why existing approaches to cause prioritization are not robust to unawareness”
    2. Further reading:
      1. “Problem 1: Modeling the catch-all, and biased sampling” in “Why impartial altruists should suspend judgment under unawareness”

Who, and which interventions, are these problems relevant to?

  1. Isn’t cluelessness only a problem if you’re trying to directly shape the far future? But I’m not doing that; I’m trying to (e.g.) stop x-risks in the next few years.
    1. “Case study: Severe unawareness in AI safety” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
    2. “Extremely limited understanding of mechanisms” in “Why intuitive comparisons of large-scale impact are unjustified”
    3. Further reading:
      1. “Focus on Lock-in” and “Case study revisited” in “Why existing approaches to cause prioritization are not robust to unawareness”
  2. Isn’t cluelessness only a problem for sequence thinking?
    1. “Appendix E: On cluster thinking” in “Why existing approaches to cause prioritization are not robust to unawareness”
  3. Isn’t it robustly positive to …
    1. … try to prevent bad lock-in events (like AI x-risk)?
      1. “Focus on Lock-in” and “Case study revisited” in “Why existing approaches to cause prioritization are not robust to unawareness”
    2. … do more research, spread better values or decision-making practices, gain more influence on AI, or save money?
      1. “Capacity-Building” in “Why existing approaches to cause prioritization are not robust to unawareness”
    3. … follow strategies whose least conjunctive effects are positive?
      1. “Simple Heuristics” in “Why existing approaches to cause prioritization are not robust to unawareness”
  4. Your case for indeterminacy appeals a lot to “arbitrariness”. I’m fine with some arbitrariness in my beliefs and preferences. Isn’t that enough for me to not be clueless?
    1. “Permissive epistemology doesn’t imply precise credences / completeness / non-cluelessness”
  5. What about extremely small decisions, like helping an old lady cross the street? If we help the old lady, isn’t it reasonable to treat the expected value of the off-target effects as so negligible that the benefit to the old lady dominates?
    1. Yim (2019)
    2. Comment by Aird

What implications do these problems (not) have for our decisions?

  1. What’s decision-relevant about saying it’s indeterminate whether A is net-better or worse than B, if you have to choose something anyway?
    1. “Practical hallmarks of indeterminacy” in “Should you go with your best guess?: Against precise Bayesianism and related views”
  2. What’s decision-relevant about your arguments about unawareness, if they don’t say it’s bad to keep doing what we’re doing?
    1. “Appendix A: The meta-epistemic wager?” in “Why intuitive comparisons of large-scale impact are unjustified”
    2. “Conclusion and taking stock of implications” in “Why existing approaches to cause prioritization are not robust to unawareness”
  3. Are you saying we should default to inaction?
    1. Comment by me
  1. ^

     Note: I’m not sure the references included here fully respond to this question. But it’s not yet clear to me what people mean by this question, so I encourage anyone who finds the included references inadequate to say in the comments what they have in mind.

  2. ^

     This work argues against the view that diachronic (i.e., sequential) money pump / dominated strategy arguments, such as the arguments against incompleteness, are normatively relevant in the first place.

  3. ^

     Note: Again, I’m not entirely sure what the argument for this objection is supposed to be, so it’s hard to say whether these references adequately address it.

Comments (6)



Maybe you can turn this into a FAQ by pulling out quotes or having an LLM summarize the explanations in your citations? I'm not sure if it's worth the effort, though, because people can just go read the citations.

I’d personally find this helpful, and I expect others will, too. If I consider the FAQs I’m familiar with and imagine alternative documents that consist of the questions and the references, but without the answers, I feel that their value decreases by at least 50%. Most of the added value comes from the synthesis, but some comes from removing the trivial inconvenience of having to open multiple links and locate the relevant passage(s).

That’s helpful to know, thanks! I currently don’t have time for this, but might add quotes later.

> Most of the added value comes from the synthesis

Could you please clarify what you mean by this?

> Could you please clarify what you mean by this?

I was referring to the difference in value between a collection of references and a summary of the content of those references (as opposed to a mere collection of representative quotes).

Gotcha, so to be clear, you're saying: it would be better for the current post to have the relevant quotes from the references, but it would be even better to have summaries of the explanations?

(I tend to think this is a topic where summaries are especially likely to lose some important nuance, but not confident.)

> Gotcha, so to be clear, you're saying: it would be better for the current post to have the relevant quotes from the references, but it would be even better to have summaries of the explanations?

Yes, that’s what I’m saying.

> (I tend to think this is a topic where summaries are especially likely to lose some important nuance, but not confident.)

I defer to you, since I am not familiar with this topic. My above assessment was “on priors”.
