Appreciate you putting this out with humility and curiosity, Nick. (And hi from South Africa!)
You're right that technical assistance (TA) in general lacks the type of rigorous evidence base (e.g. RCTs) that underpins most GiveWell top charity picks. And it is the case that the TSU model has its roots in more linear, project-based infrastructure settings, which may not map easily to health systems, which are ongoing, more transaction-intensive, deeply influenced by human behavior, etc.
Fully agree on the dashboard point: too often, evidence use in government is imagined as "provide better information and better decisions will result," a theory mostly unsupported by actual evidence or lived experience.
That said, I think the how of TA matters as much as the what. Whether support is effective depends on design. Do these units build state capability or hollow it out through parallel structures? Do they align with real incentives faced by actors inside the system? Are they embedded in actual decision-making processes? All these questions are core to our work supporting governments on economic growth.
So I don't know if this is the right bet for GiveWell, but I do think there's value in experimenting with more adaptive, politically aware, learning-oriented embedded support to governments. It'll require thoughtful measurement and iteration (and not necessarily via a six-country RCT, which would fail to yield meaningful insights), but it's a space worth exploring if done intentionally.