
TLDR: This $6 million Technical Support Unit grant doesn’t seem to fit GiveWell’s ethos and mission, and I don’t think the grant has high expected value.

Disclaimer: Despite my concerns, I still think this grant is likely better than 80% of global health grants out there. GiveWell are my favourite donor, and given how much thought, research, and passion goes into every grant they give, I’m quite likely to be wrong here!
 

What makes GiveWell special?

I love to tell people what makes GiveWell special. I giddily share how they rigorously select the most cost-effective charities with the best evidence base. GiveWell charities almost certainly save lives at low cost – you can bank on it. There’s almost no other org in the world where you can be pretty sure every few thousand dollars donated is savin’ dem lives.

So GiveWell gives you certainty – at least as much as possible.

However, this grant supports a high-risk intervention with a poor evidence base. There are decent arguments for moonshot grants which try to shift the needle high up in a health system, but this “meta-level”, “weak evidence”, “hits-based” approach feels more OpenPhil than GiveWell[1]. If a friend asks me to justify the last 10 grants GiveWell made based on their mission and process, I’ll grin and gladly explain. I couldn’t explain this one.

Although I prefer GiveWell’s “nearly sure” approach[2], it could be healthy to have two organisations with different roles in the EA global health ecosystem: GiveWell backing sure things, and OpenPhil making bets.
 

GiveWell vs. OpenPhil Funding Approach


What is the grant?

The grant is a joint venture with OpenPhil[3] which gives $6 million to two generalist “BINGOs”[4] (CHAI and PATH) to provide technical support to low-income African countries. This might help them shift their health budgets from less effective causes to more effective ones, and find efficient ways to cut costs without losing impact in these leaner times. Teams of 3-5 local experts will be embedded in government ministries for 12-18 months.

In rough order of importance, I list 6 reasons why I’m dubious both that this grant fits GiveWell’s mission and that it makes sense at all.
 

1. The TSU evidence base is minimal

Before this grant, I wouldn’t have thought “technical support units” which support government health ministries would have been on GiveWell’s radar, because the evidence base is scanty at best. Technical support to health ministries has been a go-to development approach for a long time. Since the 1950s, richer governments and NGOs have spent billions on technical support for low-income governments. One review described a whopping 13 different TSU models, including those of CHAI, Save the Children, and the World Bank. All try to improve health policy and fund allocation by embedding TSUs within government ministries, yet we have little evidence that they make a difference. There are a handful of before-and-after studies with positive results which focus on specific healthcare priorities such as malnutrition, data management, and maternal and child health. This meagre data, however, is very low quality, especially in proportion to the large amounts of money spent.

A 2001 review stated, “In general, there is scant evidence on the effectiveness of TA (Technical Assistance) and how it may contribute to improve health outcomes”. One meta-analysis of different technical support units found very little peer-reviewed research on the practice in low-income countries. They correctly stated, “Considering its important role in global health, more rigorous evaluations of TA efforts should be given high priority.” Despite perhaps billions of dollars spent, and thousands of technical support programs, there isn’t clear evidence that TSUs work, nor even a wealth of stories and reports indicating that TSUs might be a cost-effective way of improving healthcare delivery.

Backing a low-evidence approach with $6 million doesn’t seem to fit GiveWell’s evidence-based funding model.


2. Dubious Theory of Change

From the GiveWell podcast on the topic: “it is in the region of a few million dollars of government expenditure would have to be shifted to be 20, 30% more cost effective, 20, 30% cheaper for this to look like a good use of funding. And so it became apparent quite quickly that there's a lot of upside here and that this does seem like a reasonably sensible use of GiveWell funding.”

When it comes to advocacy/technical assistance at the highest level, there is always an enticing carrot of huge potential cost-effectiveness. If we can shift policy, or change fund allocations even a tiny bit, then almost any grant will probably be worth it. Of course, if we can move a few tens of millions of dollars from bad uses to good uses, the expected value looks great.
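To make that carrot concrete, here’s a toy break-even sketch. Every number is an illustrative assumption of mine, not GiveWell’s:

```python
# Toy break-even sketch: all figures are my assumptions, for illustration only.
grant_cost = 6_000_000       # the TSU grant
budget_shifted = 30_000_000  # assume "a few tens of millions" actually gets moved
improvement = 0.25           # assume shifted spending becomes ~25% more effective

value_created = budget_shifted * improvement  # ~= $7.5M of extra impact
print(f"Value created: ${value_created:,.0f} vs grant cost: ${grant_cost:,.0f}")
# Value created: $7,500,000 vs grant cost: $6,000,000
# The expected value looks great *if* the shift actually happens.
```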

The real question is whether changing anything at all is realistic, because as with any attempt to influence power at the highest level, tractability is everything.

The Charity Entrepreneurship-incubated “Centre for Effective Aid Policy” shut down last year after judging that moving developed countries’ aid budgets to more effective areas was more difficult than expected. An organisation isn’t a good bet just because making a tiny shift in government spending would have huge impact. We need specific and clear reasons why huge government machineries with low budgets and too many priorities might concretely change what they are doing.

I wasn’t convinced by the podcast examples such as data consolidation and dashboards. I don’t believe most government officials in low-income health ministries care much about evidence and data – certainly not in Uganda where I live. Enormous political negotiation and complex machinery are needed – data and evidence rarely come into it. I might be more convinced by a TSU theory of change that leaned into political wrangling, or building elite coalitions for change, rather than generating and presenting better data.

Comments like this seem naive: “it could be the case that the TSUs help the ministry to develop a tool to track real time expenditure across programs and enable them to better shift resources to reduce the underspend of existing budgets.”


3. High Project Budget

I don’t understand how 24 people doing 18 months of work here costs $6 million. Even with a very high annual salary of $36,000 in these low-income countries (5x what our OneDay Health management team gets)[5], I can’t explain why this program could cost more than, say, $2.5 million.

I could well be missing something important which explains the higher budget, but here’s my quick high-end budget BOTEC, based on some of the highest numbers I can imagine.
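A minimal version of that BOTEC in code (all figures below are my assumptions, not GiveWell’s or CHAI/PATH’s):

```python
# High-end BOTEC sketch: every figure here is my own assumption.
staff = 24                 # ~4 experts per country across 6 countries
annual_salary = 36_000     # deliberately generous local-expert salary (USD)
duration_years = 1.5       # 18 months, the top of the stated range
overhead_multiplier = 2.0  # travel, benefits, management, HQ technical support

salary_cost = staff * annual_salary * duration_years  # = $1,296,000
total_cost = salary_cost * overhead_multiplier        # = $2,592,000
print(f"Salaries: ${salary_cost:,.0f}; Total: ${total_cost:,.0f}")
# Even after doubling salary costs for overheads, I struggle to get past ~$2.6M,
# well short of the $6M granted.
```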

 

4. Missed RCT Opportunity 

Edited to: “Missed rigorous study opportunity” instead? I was wrong about an RCT being possible here – thanks for the corrections below by a couple of commenters.

I was disappointed that the possibility of an RCT or other rigorous prospective analysis wasn’t considered in such a poorly researched area. Why not randomise the 6 proposed countries and provide TSUs to 3 of them? A good prospective study here could be cheap to follow up, as you could monitor outcomes through routinely collected government data. Although you might only be able to capture perhaps 30-50% (rough guess) of the benefit of the program this way, I think it would be possible.

Possible outcome measures (based on the podcast):
1. Change in allocation of Grants/Government budgets to poorer districts from richer ones
2. Change in allocation of Donor money to cost-effective programming
3. Changes in supply chain management over the study period


5. No CEA or grant write-up

From GiveWell: “All of the research supporting our funding recommendations is free and publicly available.” Although I understand there is perceived urgency here, so this might be an exception, I would have appreciated at least some of GiveWell’s normal grant process, with an attempted CEA and a short write-up. Of course, through the podcast and forum post, they have still been more transparent than most orgs in their position.


6. Why is this considered urgent post USAID cuts?

From GiveWell's podcast: “But we know there are lots of other organizations, as I mentioned before, who are providing technical support like this to governments. We didn't do a really comprehensive investigation into other organizations as part of this work because of the need to move quickly…”

 “I think what the TSUs hopefully will do is help intensify that support at a time when it's really needed.”

This seems the kind of intervention where waiting another 3-6 months to do deeper analysis and publish a CEA wouldn’t make a big difference. This doesn’t involve kids whose malnutrition treatment has been cut, or an important RCT which will be stopped, with millions of dollars of work wasted, unless emergency funding comes in.

There’s even a reasonable argument that post-USAID cuts, government budgets might be harder to shift, because the remaining money will support line items that cannot be moved, like highly paid ministry staff, medications, and primary through tertiary services. Technical assistance might be more beneficial when more money is sloshing around. See “decreasing budgets limit manoeuvrability” by Matthias.

Again, I’d like to stress that this is an isolated criticism based on limited information, and there’s a high chance I’m way off the mark. I have huge respect for the ability of both GiveWell’s and OpenPhil’s staff to make good decisions in allocating funds. I did share this with GiveWell, but only gave them 24 hours to comment, which isn’t really enough time. They responded extremely graciously and I hope they will find the time to comment.

Super keen to hear your thoughts and criticisms of the criticism as well ;).
 

  1. ^

     To be fair, GiveWell and OpenPhil are working together on this grant, but the money seems to be coming from GiveWell? (Perhaps semantics)

  2. ^

     I’m a risk averse normie

  3. ^

     Although I don’t know what that means exactly...

  4. ^

     Big International Non Governmental Organisations

  5. ^

     And more than some Charity Entrepreneurship founders who live in London


Comments

Thanks for sharing your critique of our recent grants with Open Philanthropy for technical support units (TSUs). We really appreciate this thoughtful pushback! We've recommended (and are considering) a number of grants to help respond to the current situation with cuts in US foreign health assistance. So, getting critiques like yours is helpful since it encourages us to pause and consider whether we’re making the right tradeoffs in these grants. While we share some of your perspectives on the uncertainties of this work, we're still excited about our decision in this case.

While this grant’s impact is particularly uncertain, we see this as a difference in degree, not kind, compared to other grants we recommend. Most of our funding still goes to Top Charities - proven programs backed by strong evidence and our cost-effectiveness analysis. But we also recommend opportunities through the All Grants Fund. The goal of this fund is to find and fund what we believe are the highest-impact uses of marginal dollars, even when those opportunities are riskier or harder to model. This grant fits squarely in that approach. We’ve funded technical assistance from the All Grants Fund before, alongside grants that are uncertain for other reasons. For example, sometimes we're trying to generate new evidence, while at other times we're recommending high expected value bets even when we know we’re unlikely to get a definitive answer on their impact.

We agree that the evidence base for TSUs is thin. In general, we think it’s challenging to evaluate technical assistance programs because 

  • particular programs might not have very convincing controls or comparators and
  • technical assistance packages can vary widely so aggregating results from many programs might not be that informative, even if they’re targeting the same outcome

So even if the review that Nick cites had found good evidence for past TA programs, we still might not feel sure that it would generalize to the TSUs we recommended funding. 

But as discussed above, we don’t consider high uncertainty to be a dealbreaker in grants funded from the All Grants Fund. We still think it can be worth funding TA (see, e.g., our maternal syphilis grants) and we’re very interested in building up our ability to learn about programs like this over time. (We’re working on a project looking back on a subset of technical assistance grants we’ve funded, but don’t have a publication date for that yet.) 

While we don't have detailed theories of change, we still think it's plausible that TSUs could be impactful. We are excited about this grant because we think it could help governments to make difficult prioritization and program adaptation decisions in countries affected by US government funding freezes and cuts. We expect that the details of how this could look will vary by country and so we don’t feel confident that any particular mechanism will cash out in impact. But for example, we think TSUs could help governments to: 

  • Adapt programs away from more expensive models implemented with USG support to more cost-efficient models (e.g., by integrating disease-specific verticals or moving from international to local implementing organizations)
  • Target programs (e.g., to certain regions or populations) based on cost-effectiveness analyses
  • Crowd in philanthropic or other funding by identifying funding gaps 

While we think the above examples are plausible, we agree that the theory of change for these programs is not tightly specified. However, we spoke with senior Ministry of Health officials in each country about this grant, and overall governments voiced support and demand for the proposed TSUs and were eager to have CHAI and PATH's support on this work. We also think both organizations are well-placed to support this work, as both have supported on malaria-specific TSUs in the past, have teams with specific focus on health systems and health financing, and have established relationships with the governments they’re supporting. 

Budget - Thank you for sharing your estimates - this is helpful for us as we continue to update how we review budgets. We’ll share a high level budget breakdown for this grant with our public grant write-ups (which are coming - see below!). One quick clarification (which wasn’t clear in the podcast) is that costs for Nigeria reflect support in seven states as well as national support. 

Outside of that, we think that the higher budget reflects both higher salaries and a higher non-salary budget share (to account for travel, coordinating stakeholder engagement, and support from global technical teams). Our understanding is that salaries are set based on globally-benchmarked salary ranges and localized equity adjustments to account for organizational equitable pay standards and differential cost of living across different geographies. A portion of the compensation costs is also due to benefits (such as health insurance) that may be standard to each location.

Learning - We agree that we should try to learn about the impact of these grants and also agree with commenters and Nick’s revision that an RCT isn’t an appropriate strategy. We’ve asked CHAI and PATH to track and report out on the following. 

  • Process indicators (staff hiring, government engagement, TSU recommendations made and implemented);
  • Case studies of how government prioritization decisions are made;
  • Promising funding opportunities identified through the TSU work;
  • Contexts where cost-effectiveness considerations are or aren't helpful for government decision-making. 

We’ll attempt to triangulate these reports through speaking with other stakeholders, though we expect we’ll still have substantial uncertainty about impact given the lack of counterfactuals. 

No CEA and grant write up. These are coming! We typically have a lag between making grants and publishing our write-ups, but wanted to share about this grant sooner because we’ve received a lot of interest in our response to the funding cuts. We expect to publish pages for CHAI and PATH (including a rough BOTEC) by the end of June. 

Urgency. We see the urgency here as being specifically related to governments’ needs to adapt to frozen or cut US health assistance: we heard when investigating this grant that governments were already beginning this planning process and that lighter touch versions of the support offered by TSUs were already being provided on nights and weekends by CHAI staff in certain countries. We also think this kind of grant is inherently uncertain and it didn’t seem likely that we’d reduce that uncertainty by spending additional time investigating. So, with apparent demand for support at the time and since we didn't think waiting would lead to a better decision, we chose to recommend funding relatively quickly.

For context, GiveWell's relationship with CHAI dates to 2022, when GiveWell Managing Director Neil Buddy Shah departed to become CEO of CHAI. According to GiveWell's announcement, "this transition does not mark the end of Buddy’s relationship with GiveWell. It is important that GiveWell maintain strong connections with leading organizations in the global health sector." (Incidentally, Shah is also a member of Anthropic's long-term benefit trust.)

GiveWell announced Shah's departure in April 2022; Shah apparently started at CHAI in June; and in August GiveWell announced its first grant recommendation to CHAI, $10m for a new incubator program "to identify, scope, pilot, and ultimately scale cost-effective programs that GiveWell might fund". As planned, the incubator led to later GiveWell grant recommendations to CHAI, like CHAI's tuberculosis contact management program, and multiple grants to CHAI's oral rehydration and zinc distribution program.

Assuming you're correct that this grant is atypical for GiveWell, I would presume it's a result of their special relationship with Shah.

Thanks for that outline of the relationship, yes that connection looks to be significant. I can see obvious benefits of having close relationships with super capable, aligned people like this, but downside risks as well.

My biggest questions with CHAI are around how good they really are operationally at doing such a wide range of largely unrelated activities. There's a lot to be said for doing one thing well, gaining institutional knowledge and efficiencies through doing it over a decent period of time. Most GiveWell-supported charities are in that boat.

Traditionally BINGOs have been inefficient: jack of all trades and master of none. Having said that, maybe CHAI are unusually good at setting up multiple, unrelated, efficient programs. I don't have deep knowledge of the org.

In this case, however (unlike some of the other grants), I think CHAI has quite a lot of experience implementing TSUs, which is great.

A quick drive-by comment on "4. Missed RCT Opportunity": the sample size seems way too small for an RCT to be worth it. There's not much statistical power to work with when researchers are studying a messy intervention across only 6 countries. And I imagine they'd struggle to attribute changes to the Technical Support Units unless it was something truly transformative (at least within the framework of the RCT).[1]

More broadly, I'm not aware of any commonly accepted way to do "small n" impact evaluation yet, especially with something as customized as Technical Assistance. This blog post from 3ie, an NGO promoting evidence-based policymaking, talked about the issue 13 years ago, and I think it's still broadly true. The impact evaluation toolkit works best with (1) a precisely defined intervention, (2) a decent sample size (say n > 100), and (3) a very homogeneous sample. This grant, so far, looks to be the opposite of all 3.

  1. ^

    I also recall the math for statistical inference gets strange when using very small sample sizes (say n<15) and may require assumptions that most people consider unrealistic. But I could be wrong here.
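To make the power problem concrete, here's a quick sketch (a toy example I put together, assuming a 3-treated/3-control split):

```python
from itertools import combinations

countries = ["A", "B", "C", "D", "E", "F"]      # 6 hypothetical countries
assignments = list(combinations(countries, 3))  # every way to treat 3 of the 6
print(len(assignments))                         # 20 possible randomisations

# In a permutation (randomisation) test, the smallest achievable one-sided
# p-value is 1/20 = 0.05: even a maximally extreme result is borderline,
# and a two-sided test can never reach p < 0.05 (its minimum is 2/20 = 0.10).
print(1 / len(assignments))                     # 0.05
```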

Drive by hit job successful 🤣

Thanks, I'm going to edit that; I think you are right.

To make an RCT work sample-size-wise, you would probably need district-level randomisation, and that wouldn't make sense here when it's only a central-government-level intervention.

Thanks for this Nick! I always appreciate your posts on the forum! I haven't listened to the full podcast episode and don't have expertise in TSUs, but here are a few things that stood out to me, plus additional context on the timing of the USAID cuts:

  • GiveWell mentioned that governments were specifically requesting this kind of support, which makes me more inclined to believe these TSUs will fill temporary gaps that were "outsourced", in a way, from MoHs to USAID/other funders over the years, or address vulnerabilities that are rapidly appearing... so it could be more of a triage situation than setting even a short/medium-term strategy for governments.[1]
    • I share the sentiment of "not another dashboard!!" but I'm wondering if the way that USAID pulled the plug so quickly has made it so that governments have little visibility into their own programs and need to rapidly (re)build this capacity so they can reprogram funds accordingly - but also so they can go to other bilateral donors and apply for emergency funds that are/may become available through multilateral funders like the Global Fund or World Bank/IDA.
    • Despite claims to the contrary by Secretary of State Marco Rubio and others, the US government is moving to significantly reduce USAID's global health work.[2] In addition, nearly all USAID staff will be terminated on July 1 (some foreign service officers at post and administrative staff in DC will remain until September 1 to wind down other operations).

I hope others with more knowledge about this will comment and thanks again for keeping global health on the front page :)

  1. ^

    One really bad example that comes to mind is when the Kenya MoH temporarily lost access to health data stored in the Electronic Medical Records (Kenya EMR) system, because it was physically hosted on the servers of a USAID contractor under a stop-work order. https://www.data4sdgs.org/blog/data-crisis-following-usaids-withdrawal-opportunities-reimagine-data-systems

  2. ^

    I have a draft post about the FY26 President's Budget Request, the proposed $900m rescissions to FY25 funds, and the impending "pocket rescission" of FY24 funds (~$10 billion), and how these all interact, but I haven't gotten around to finalizing it.

What’s unique about these grants?: These grants are a good illustration of how GiveWell is applying increased flexibility, speed, and risk tolerance to respond to urgent needs caused by recent cuts to US foreign assistance. Funded by our All Grants Fund, the grants also demonstrate how GiveWell has broadened its research scope beyond its Top Charities while maintaining its disciplined approach—comparing each new opportunity to established interventions, like malaria prevention or vitamin A supplementation, as part of its grantmaking decisions.

The grants were explicitly made from the All Grants Fund, which is the place people donate when they are happy for GiveWell to make riskier decisions and hold themselves to lower standards than for top charities. I personally donate to the All Grants Fund over the Top Charities Fund, am a fan of a more risk-tolerant approach, and I'm happy to defer to GiveWell's judgement. I think your post is holding this grant to the standard of a top charity, which I think is unreasonable and would not be worth the effort and expense of GiveWell staff time.

I don't have too much context on the actual object-level details of the grant, so I don't have strong takes on most of your criticisms (you definitely know more about this domain than me!). But I find it pretty plausible that lots of high-importance decisions get made after a disaster like the USAID cuts, and that this was urgent. And I also expect that there are, in general, a bunch of grants that are time-sensitive in response to the USAID cuts, and I endorse GiveWell moving fast here and maximising expected value.

Related to this point, I was surprised to see this

 

Given that GiveWell's All Grants Fund has basically the same graph

[Graph comparing impacts of Top Charities Fund and All Grants Fund]

 

Many other grants from the All Grants Fund don't have a ton of evidence behind them and are exploratory. As an example, they funded part of an RCT on building trailbridges in Rwanda, with reasoning "While our best guess is that bridges are below the range of cost-effectiveness of programs we would recommend funding, we think there’s a reasonable chance the findings of the RCT update us toward believing this program is above our bar. [...]" and an RCT on providing eyeglasses for similar reasons.

Hey Lorenzo, thanks for this! That's a good point, and I had somehow missed that graph, which you're right is hilariously similar to mine!

I didn't get into the nuance in the article, but I think the All Grants Fund still provides a high amount of certainty for most of its grants. This is how GiveWell puts it:

"The All Grants Fund provides grants to our Top Charities as well as grants to incubate newer programs, promote policy change, fund relevant research, or support other potentially high-impact, cost-effective initiatives that don’t fit neat categorization.

The All Grants Fund supports the highest-impact opportunities we can identify in global health and well-being. This may include some grants with high expected value that carry a higher risk of not achieving their potential impact."

Funding through the All Grants Fund mostly (not all) falls into 2 categories.

  1. Grants that still have a high level of evidence behind them, just not as much as the top charities. Most even have RCT-level evidence behind them.
  2. Evidence generation for promising interventions. The trailbridge intervention you mentioned falls largely into this category.

I see this particular TSU grant as moving somewhat away from even the All Grants Fund philosophy, as there is really not much evidence at all that TSUs work, and this grant is also not evidence-generating, at least as it seems at the moment.

When I read that description I infer "make the best decision we can under uncertainty", not "only make decisions with a decent standard of evidence or to gather more evidence". It's a reasonable position to think that the TSU grant is a bad idea, or that it would be unreasonable to expect it to be a good idea without further evidence, but I feel like GiveWell are pretty clear that they're fine with making high-risk grants, and in this case they seem to think these TSUs will be high expected value.

Yeah, based on the evidence of what GiveWell have actually given most grants to in the past, I would have gone with this as what I think GiveWell meant, and what I would personally like the most:

"only make decisions with a decent standard of evidence or to gather more evidence"

I think it makes sense to have separation, and have OpenPhil doing higher-risk bets under your heuristic of "make the best decision we can under uncertainty". Why have 2 different bodies doing the same thing with largely the same pool of money?

But yes, you might be right that, at least now, both GiveWell and OpenPhil are meaning and doing that.

Fair enough, I guess my take from all this is that you mainly just want the all grants fund to have a different philosophy than the one GiveWell is following in practice? Or do you also think they're making a mistake by their own lights?

I just originally thought that the All Grants Fund has stuff with a decent evidence base, but less certainty than the top charities. So still more certainty than most other funders in the world.

Nearly all of the charities there would fit that description, so I think they were following that practice. So yes, I thought they were making a mistake somewhat by their own lights, or maybe taking the fund in a bit of a different direction.

Or maybe I was just wrong about what they were trying to do.

Why have 2 different bodies doing the same thing with largely the same pool of money?

It doesn't apply to the TSU grant, but note that a high percentage of GiveWell-directed donations don't come from Open Philanthropy:

 

And I expect this to increasingly be the case in the future, as GiveWell finds new donors and OpenPhil finds other things to donate to. So I wouldn't say it's "largely the same pool of money".

Agreed. GiveWell also takes outside donors and OpenPhil doesn't. I've donated to the All Grants Fund because I wanted to help with risk-tolerant and fast giving after the aid cuts, and am glad the opportunity exists.

GiveWell also takes outside donors and OpenPhil doesn't.

 

I don't think that's true anymore: https://www.openphilanthropy.org/partner-with-us/ but I imagine OpenPhil only takes donors above a certain size (here they say >$1M/year), while GiveWell takes donations of all sizes.

Appreciate you putting this out with humility and curiosity, Nick. (And hi from South Africa!)

You're right that technical assistance (TA) in general lacks the type of rigorous evidence base (e.g. RCTs) that underpins most GiveWell top charity picks. And it is the case that the TSU model has its roots in more linear, project-based infrastructure settings, which may not map easily to health systems, which are ongoing, more transaction-intensive, deeply influenced by human behavior, etc.

Fully agree on the dashboard point–too often, evidence use in government is imagined as “provide better information and better decisions will result,” which is mostly unsupported by actual evidence or lived experience.

That said, I think the how of TA matters as much as the what. Whether support is effective depends on design. Do these units build state capability or hollow it out through parallel structures? Do they align with real incentives faced by actors inside the system? Are they embedded in actual decision-making processes? All these questions are core to our work supporting governments on economic growth.

So I don’t know if this is the right bet for GiveWell, but I do think there's value in experimenting with more adaptive, politically aware, learning-oriented embedded support to governments. It’ll require thoughtful measurement and iteration (and not necessarily via a six-country RCT, which would fail to yield meaningful insights), but it’s a space worth exploring if done intentionally.

Good post Nick. I think the question mark over the timing of the experiment, considering cuts to many robustly good programmes, is a particularly good one.

I don't think the Centre for Effective Aid Policy is a particularly accurate comparison, as I think there's a significant difference between the likely effectiveness of a new org lobbying Western governments to give money to different causes (against sophisticated lobbyists for the status quo and government-defined "soft power" priorities) and orgs with established relationships providing technical recommendations to improve healthcare outcomes to LEDC governments that actually express interest in using them. I think the lack of positive findings in the wider literature you link to is more interesting, although I suspect the outcomes are highly variable depending on the level of government engagement, the competence of the organizations, the magnitude of the problems they purport to solve, and whether the shifts they are promoting are even in the right direction. It would be interesting in that respect to see how GiveWell evaluated the individual organizations. I do agree that budgeting dashboards don't necessarily seem like an area relatively highly paid outsiders are best placed to optimise.

I suspect the high cost reflects use of non-local staff, which of course has a mixture of advantages and disadvantages beyond the higher cost.

I'm sceptical of the value of RCTs between nations that have different healthcare policies, standards, and bureaucracies to start with (particularly as I don't think there's a secular global trend in the sort of outcomes TSUs are supposed to achieve, and collecting data on some of them feels like it would involve nearly as much effort as actually providing the recommendations). A lot of policy and government optimization work - effective or otherwise - is hard to RCT, especially at the national level. Which doesn't mean there can't be more transparency and non-RCT metrics.

Thanks for this fantastic comment! Yes, I agree my comparison with the Centre for Effective Aid Policy was fairly weak; I was trying to find a real-life example of moving governments being very difficult, and I could have found a more analogous one. I'm not sure in this case that countries "asking" is necessarily a signal that shifts are more likely. I think there are lots of motives for governments asking for help here, including employing local friends with lucrative salaries, and hoping these relationships might bring in more donor money. But maybe I'm too cynical!

I agree the outcomes will vary based on a huge variety of things, including the factors you mention. I think we need better indications, though, of which of these might lead to effective technical support. It's tricky and needs more decent research.

If there were more non-local staff you would be right, but from the podcast it did seem they were planning on hiring mostly local people?

You're right on RCTs (I have edited the post), I got that wrong, but I still think we can use routinely collected data on health outcomes to see if health metrics have improved, at least on a before-and-after basis. I don't think it needs to be too expensive to assess.
