Intro + Why it matters:
I've been thinking about how we actually figure out what's true when trying to do the most good. Not how ideal utilitarians with perfect information would reason, but what actually works for humans trying to make high-stakes decisions under uncertainty.
The core argument: we already implicitly rank epistemic methods in our daily lives (we trust thermometers over theology, Google over gut feelings), but sometimes people pretend these hierarchies don't exist when theorizing about knowledge.
The point isn't the specific rankings but making our implicit hierarchies explicit. When we do this, we can better understand why some domains have more consensus than others, and why certain types of arguments are more persuasive. Further, I think my overall framework for incorporating different epistemic methods and "ways of knowing" is more adaptive and fluid and less likely to impose artificial top-down order on humanity's many disparate sources of knowledge. As a wise prince once said, "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy."
I think this type of reasoning is in many ways critical to the effective altruism project. Many discussions here implicitly involve weighing different types of evidence (from RCTs in global health and factory farming to theoretical arguments in AI safety and wild animal welfare), and this community overall favors systematization.
Hopefully, by having a detailed article that nudges people away from subtly (or not-so-subtly!) wrong frameworks toward a framework that's more conducive to clearer thinking, we can make fewer mistakes in systematization and become more effective at cause and intervention prioritization.
Questions
Curious what people think about:
- Which methods you'd rank differently and why
- How you navigate domains where high-tier methods aren't available
- Whether certain cause areas systematically rely on different tiers of evidence
- What methods I've missed entirely
- How you'd adapt my framework more explicitly for your preferred projects or cause areas
See full post
I hope the intro was interesting! Anyway, I've spent a lot of effort on writing this post, so I'd love to know what people think! Check out the full post here: https://linch.substack.com/p/which-ways-of-knowing-actually-work
Would be interested in understanding why people downvoted this. I spent nontrivial effort conveying something quite subtle and difficult in a fair amount of detail, and I think it ought to be helpful for improving a lot of people's thinking a little.
I didn't downvote or disagreevote, but I'm not sure the logic of the rankings is well explained. I get the idea that concepts in the lowest tiers are supposed to be of more limited value, but I'm not sure why the very top tiers are literacy/mathematics - it seems like literacy/mathematics by themselves almost never point to any particular conclusions, but are merely prerequisites to using some other method to reach a decision. Is the argument that few people would dispute that literacy and mathematics should play some role in making decisions, whereas the value of 'divine revelation' is hotly disputed and the validity of natural experiments debatable? That makes sense, but it feels like it needs more explanation.
Here are the arguments for each of the tiers, for reference and ease of quoting:
Thanks! Can you be more specific about which of my arguments you disagreed with? Your hypotheses seemed sufficiently far away from my actual arguments that I have trouble understanding where you're coming from.
I also think the actual tiers themselves matter much less than the logic behind the tiers and how to use them, which I emphasized multiple times in the post. Maybe my mistake was pasting the infographic on the forum post so people think that's the bulk of the argument? EDIT: I've deleted the infographic.
I thought I was reasonably clear in my post but I will try again. As far as I understand, your argument is that the items in the tiers are heuristics people might use to determine how to make decisions, and the "tiers" represent how useful/trustworthy they are at doing that (with stuff in lower tiers like "folk wisdom" being not that useful, and stuff in higher tiers like RCTs being more useful).
But I don't really see "literacy" or "math", broadly construed, as methods to reach any specific decision; they're simply things I might need in order to understand actual arguments (and for that matter I am convinced that people can use good heuristics whilst being functionally illiterate or innumerate). The only real reason I can think of for putting them at the top is "many people argue that trusting (F-tier) folk wisdom is bad, there are some good arguments about not overindexing on (B-tier) RCTs, but there are few decent arguments on principle against (S-tier) reading or adding up" - despite the fact that literacy helps genocidal grudges as well as scientific knowledge to spread. I agree with this, but I don't think it illustrates very much that can be used to help me make better decisions as an individual. Because what really matters, if I'm using my literacy to help me make a decision, is what I read and which of the things I read I trust, much more than whether I can trust that I've parsed it correctly. Likewise, I think which thought experiments I'm influenced by is more important than the idea that thought experiments are (possibly) less trustworthy at helping me make decisions than a full-blown philosophical framework, or more trustworthy than folk wisdom.
FWIW I think the infographic was fine and would suggest reinstating it (I don't think the argument is clearer without it, and it's certainly harder for people to suggest methods you might have missed if you don't show methods you included!)
Your linkpost also strips most of the key parts from the article, which I suspect some of the downvoters missed
I think that, as an individual, reading and mathematical modeling are more conducive to learning true things about the world than most other things on the list. Certainly I read much more often than I conduct RCTs! Even working scientists have reading the literature as a major component of their overall process.
I also believe this is true for civilization overall. If we imagine an alternative civilization that is incapable of RCTs but can learn things from direct observation, natural experiments, engineering, etc., I expect substantial progress is still possible. However, if all information can only be relayed via the oral tradition, I think it'd be very hard to build up a substantial civilization. There's a similar argument for math as well, though less so.
Sure, the article discusses this in some detail. Context and discernment definitely matter. I could've definitely spent more effort on it, but I was worried it was already too long, and I'm also unsure whether I could provide anything novel that's relevant to specific people's situations anyway.
I think the infographic probably makes it more likely for people to downvote the post without reading it.
Yeah, the linkpost is just an introduction + explanation of why the post is relevant to EA Forum + link. I strongly suspect, based on substack analytics (which admittedly might be inaccurate), that most people who downvoted the post didn't read or even skim the post. I frankly find this extremely rude.[1]
(Less than 1% of my substack's views came from the EA Forum, so pretty much every single one of the clickers would have had to downvote; I think it's much more likely that people who didn't read the post downvoted. I personally only downvote posts I've read, or at least skimmed carefully enough that I'm confident I'd downvote upon a closer read. I can't imagine having the arrogance to do otherwise.)