I struggle to see practical cases where it makes sense to spend significant time on WFMs. I would rather improve cost-effectiveness analyses (CEA).
I think that is a reasonable decision. I think WFMs are very useful for certain types of decisions, but not always. I use CEAs much more often. My claim is *not* that more people should be using WFMs. If anything, my post should be seen as a warning to those who do.
My claim is that people should take time to understand their tools and account for their weaknesses. Accounting for weaknesses should happen not just within the tool, but outside of it when making the final decision.
I think GiveWell is a good example of this. If CEAs made up 100% of their decision-making process, their decisions would be heavily influenced by the weaknesses of CEAs as a method. However, GiveWell acknowledges these weaknesses and uses CEAs as a primary deciding factor while also incorporating other considerations.
You are correct that there are ways to mitigate these issues. However, that does not mean that the issues completely disappear or that the method is without weakness.
The fundamental problem remains. Like I mentioned in my original post, any system for decision making is going to be trading away truth for practicality.
A more refined method means that some weaknesses will be less pronounced, though refinements frequently introduce new types of errors (like the WFM example in my post). We still need to factor methodological bias into our final decision.
You cite GiveWell as an example of an organization that takes EV estimates "close to literally". I assume by this you mean the EV estimates they make with respect to cost-effectiveness. However, GiveWell outlines 5 things they keep in mind when considering cost-effectiveness here, including the following:
> Because of the many limitations of cost-effectiveness estimates, we consider other factors when recommending programs or grants. For example, confidence in an organization's track record and the strength of the evidence for an intervention generally also carry significant weight in our investigations.
In other words, GiveWell seems to believe that cost-effectiveness is a useful tool, but it is not perfect. There are methodological biases with that method, so they acknowledge those limitations and incorporate other factors before making a final decision.
I think EV is one valuable (but incomplete) metric for evaluating charities. WFMs can capture EV as well as other variables that are harder to incorporate quantitatively. However, creating BOTECs to estimate EV is a lot faster than making a full WFM. Which one to use is, in my view, a question of whether the importance of your decision justifies that extra effort or whether your time would be better spent on other decisions/work.
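To make the contrast concrete, here is a minimal sketch of a weighted factor model in Python. The factors, weights, and scores are entirely hypothetical, chosen only to illustrate the mechanics: each option gets a score on each factor, and the final score is the weight-normalized sum. Note how a factor like track record, which is hard to fold into a pure EV BOTEC, slots in directly.

```python
def wfm_score(scores, weights):
    """Weighted sum of factor scores, normalized by total weight."""
    total_weight = sum(weights.values())
    return sum(scores[factor] * weights[factor] for factor in weights) / total_weight

# Hypothetical weights: cost-effectiveness alongside harder-to-quantify factors.
weights = {"cost_effectiveness": 5, "evidence_strength": 3, "track_record": 2}

# Hypothetical 0-10 scores for two charities.
charity_a = {"cost_effectiveness": 8, "evidence_strength": 6, "track_record": 9}
charity_b = {"cost_effectiveness": 9, "evidence_strength": 4, "track_record": 5}

print(wfm_score(charity_a, weights))  # 7.6
print(wfm_score(charity_b, weights))  # 6.7
```

Here charity B wins on raw cost-effectiveness, but A comes out ahead once the other factors are weighed in. That is the whole appeal of a WFM, and also its danger: the weights themselves encode judgment calls, which is part of why the extra effort is only worth it for sufficiently important decisions.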
Regardless of which one you choose, you should be careful not to rely on just the one tool. EV reasoning is vulnerable to Pascal's Mugging and the Optimizer's Curse. WFMs are vulnerable to the issues I talked about in my post, and more. The underlying point is that we need to supplement our tools with critical thinking to ensure we're not falling victim to their weaknesses.
I'm glad you found my post insightful! Regarding time, I would probably recommend going through the process with iterative depth. First, outline the points that seem most valuable to investigate based on your goals, uncertainty, and any upcoming moral decisions. Then, work through the project repeatedly, starting at a very low level of depth and increasing the depth with each pass as needed. Between rounds, you could also re-prioritize based on changing uncertainties and decision relevance.
I don't actually think there is much object-level knowledge required to engage with this project. If anything, I imagine that developing object-level knowledge of EA topics would be more fulfilling after developing a more refined moral framework.
I think you've accurately identified a real tension here, and this connects with a fundamental critique of EA as a movement, which is that it is too often focused on measurable outcomes rather than systemic change. I tend to agree that this critique has teeth and applies to the way EA is often practiced.
I do want to highlight that Global Health work is not inherently a temporary fix. Global Health work frequently can (and should) focus on improving existing health systems, not just delivering temporary relief. By addressing the root cause, you can make a more permanent difference (and be more cost-effective while you're at it).
So why are more EAs focused on Global Health instead of Global Development relative to your expectations? In my opinion, two major reasons are