
Saicharan Ritwik Chinni

Student, pursuing a graduate degree (Master's)
sites.google.com/view/saicharanritwikchinni/home


I was just discussing this essay with a friend of mine, @Ritwika Nandi, and she raised a thoughtful question:

"I had a query regarding your causal chaining in Step 2. Uncertainty multiplies, so if each link in your causal chain has around 80% confidence, a 5-step chain would have only about 33% confidence. Doesn't this mean that for any genuinely long-term goal, your ladder will almost always conclude that the decay rate 'k' is too high to justify action?"

And I think that is a great point. The 0.8^5 result assumes each link is independent, but in practice links often share underlying drivers or measurements, which introduces correlation. With positive correlation, multiplying the 0.8s overstates the decay (i.e., you're penalizing the chain too much).
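One way to see how correlation softens the 0.8^5 penalty is a toy common-cause mixture (my own sketch, not a model from the original post): with probability rho, all links stand or fall together on one shared driver; otherwise they fail independently.

```python
def chain_confidence(p, n, rho=0.0):
    """P(all n links hold) under a toy common-cause mixture.

    p   -- confidence per link (e.g. 0.8)
    n   -- number of links in the causal chain
    rho -- hypothetical weight on the shared-driver case
    """
    independent = p ** n  # the classic 0.8^5 = ~0.328 case
    shared = p            # one draw decides every link at once
    return rho * shared + (1 - rho) * independent

print(chain_confidence(0.8, 5, rho=0.0))  # fully independent: ~0.328
print(chain_confidence(0.8, 5, rho=0.5))  # positively correlated: ~0.564
```

With rho = 0.5 the chain's survival probability nearly doubles relative to the independence assumption, which is the sense in which naive multiplication "penalizes too much."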

One fix I'd consider in Step 2 is information design: ask more proximal, higher-signal questions that minimize chain length and reuse evidence efficiently. Net effect → a smaller effective chain length and a more realistic, well-calibrated k.

And yeah, if independence does hold and the effective chain length really is long, then structure your portfolio accordingly: pick interventions with the lowest decay rate k.
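That portfolio rule can be sketched as follows, assuming exponential decay exp(-k·h) per the post's framing; the intervention names, k values, and impact numbers below are invented purely for illustration.

```python
import math

# Hypothetical interventions: (name, decay rate k per step, naive impact).
# All names and numbers are made up for illustration.
interventions = [
    ("direct cash transfer", 0.05, 1.0),
    ("policy advocacy",      0.40, 5.0),
    ("field building",       0.20, 3.0),
]

horizon = 5  # causal-chain steps to the intended outcome

def discounted_value(k, impact, h):
    # Probability the causal chain survives h steps, exp(-k*h),
    # times the impact delivered if it does.
    return math.exp(-k * h) * impact

ranked = sorted(interventions,
                key=lambda t: discounted_value(t[1], t[2], horizon),
                reverse=True)
for name, k, impact in ranked:
    print(f"{name}: {discounted_value(k, impact, horizon):.3f}")
```

Note that the lowest-k option doesn't automatically win: in this made-up example the high-k "policy advocacy" loses despite its large naive impact, but the moderate-k "field building" still beats the safest option. Low k matters most as the horizon grows.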

Feedback request...

Thanks for reading! I’d love critique on:

  • The separability assumption.
  • Better decay models you’ve used (links welcome): exp vs. power-law, mixed regimes, or anything that handled domain shift well.
  • Measurement plan: Does the “evidence ladder” (near-term proxies → causal chains → reversible commitments) miss any obvious failure modes?
  • Dependence risk: If higher-stakes domains are also harder to predict, how would you model Cov(s(h),X(h)) in a simple way?

Thanks in advance for any pushback or pointers.