Status: early stage, let us know if it's being done better elsewhere
Forum Post Cruxes and Pivotal Questions Explorer
EA Forum and LessWrong posts sometimes contain explicit cruxes, "what would change my mind" statements, "hinge beliefs", and research-blocking open questions.
As part of The Unjournal's Pivotal Questions project we're trying to identify decision-relevant open questions that may connect to rigorous/academic research. We've done some work mapping forum content to candidate questions for evaluation and synthesis.
The link leads to a filterable, searchable table of (so far) ~39 posts from EA Forum and LessWrong (April 2024 – April 2026), each tagged with the crux or change condition, a note on why it looks tractable for Unjournal-style work, and a candidate "Pivotal Question mapping" where relevant. You can filter by signal type (explicit crux, hinge belief, CMM, research demand…), cause area, forum, or Unjournal relevance, and share filtered URLs.
Thought others might find this interesting and useful. It could feed directly into Unjournal work, and might also be worth considering when forming career and research plans.
Caveats: this was roughly 1-2 hours of work (AI-assisted curation + some light engineering). Coverage is patchy, tilted toward AI safety, AI welfare, and cause prioritization, and some entries will need correction. If people find it useful we'd like to maintain and extend it, including adding EA Forum content more systematically.
Feedback welcome, especially via the Hypothes.is sidebar on the page, or the "Suggest entry" button. Happy to hear whether the framing and coverage make sense, whether there are posts you'd obviously include that we missed, or whether this should go in a different direction.
