Barry Grimes

Communications & Membership Lead @ International Alliance of Mental Health Research Funders
1086 karma · Joined Nov 2017 · Working (6-15 years) · Witney, UK

Bio

Participation
5

I foster collaboration between funders and researchers in order to identify and implement the most cost-effective ways to improve mental health and wellbeing.

From 2021 to 2023 I was Communications Manager at the Happier Lives Institute. From 2018 to 2021 I worked on CEA's events team, helping to produce EA Global and EAGx conferences.

I practice Vipassana meditation for two hours on most days, and Buddhist philosophy drives my commitment to reducing the suffering of others.

Sequences
3

StrongMinds: the debate continues...
EA Charity recommendations 2022
Happier Lives Institute: the story behind our 2022 charity recommendation

Comments
88

Topic contributions
1

Welcome to the forum! Thanks so much for taking the time to dig into this question and for sharing your findings. The treatment gap in global mental health is enormous, and apps are an essential tool for addressing this challenge.

I didn't have time to review your calculations in detail, but I have a few general reflections that may be useful.

1) There is a brand new meta-analysis on the efficacy of mental health apps which includes 176 RCTs (Linardon et al., 2024). They conclude that "apps have overall small but significant effects on symptoms of depression and generalized anxiety, and that specific features of apps – such as CBT or mood monitoring features and chatbot technology – are associated with larger effect sizes."

2) There is a substantial difference in efficacy between self-help and guided self-help apps. Kaya Guides (incubated by Charity Entrepreneurship) is using WhatsApp to pilot the WHO's guided self-help intervention in India. Their founder wrote an excellent summary of their work here.

3) Be careful with using a single WELLBY number for AMF. The wellbeing effects of life-extending interventions vary widely depending on philosophical choices, so it is better to use the range of possible outcomes rather than a single figure (see The Elephant in the Bednet).
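The point about ranges can be made concrete with a toy sensitivity analysis. All the numbers below are hypothetical placeholders (they are not HLI's or GiveWell's figures); the point is simply that the headline cost-effectiveness figure for a life-extending intervention spans a wide range depending on which account of the badness of death you adopt:

```python
# Illustrative sketch only: hypothetical numbers, NOT actual AMF/HLI figures.
# The WELLBYs credited to averting a death depend on the philosophical
# account of the badness of death, so report a range, not a point estimate.

def wellbys_per_1000_usd(wellbys_per_death, cost_per_death_averted, donation=1000):
    """Cost-effectiveness in WELLBYs per $1,000 donated."""
    return donation * wellbys_per_death / cost_per_death_averted

# Hypothetical WELLBYs gained per death averted under different views
views = {
    "deprivationism (full future wellbeing counts)": 50.0,
    "time-relative interest account (discounted for the very young)": 20.0,
    "Epicureanism (death is not bad for the person who dies)": 0.0,
}

cost_per_death_averted = 5000  # hypothetical cost, in USD

results = {view: wellbys_per_1000_usd(w, cost_per_death_averted)
           for view, w in views.items()}

low, high = min(results.values()), max(results.values())
print(f"Cost-effectiveness range: {low:.1f} to {high:.1f} WELLBYs per $1,000")
```

A single point estimate would hide that spread entirely, which is why comparisons against life-extending charities are so sensitive to these philosophical choices.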

4) John Torous is a leading researcher and thought leader in digital mental health. If you'd like to spend more time learning about this topic, I recommend looking at his recent publications.

5) The current market for mental health apps is completely unregulated and there are major concerns about privacy and data protection. Wellcome recently awarded £1.8m to MHRA and NICE to explore how the market could be regulated more effectively to protect patient safety.

You may be interested in this recent meta-analysis on the efficacy of mental health apps. The authors conclude that: "apps have overall small but significant effects on symptoms of depression and generalized anxiety, and that specific features of apps – such as CBT or mood monitoring features and chatbot technology – are associated with larger effect sizes."

I recommend The Elephant in the Bednet as an accessible introduction to the different philosophical theories for the badness of death.

This comment helps to highlight the importance of language when discussing this topic. Happiness and wellbeing are not the same thing, and using the two terms interchangeably can lead to confusion.

This post explains the three main theories of wellbeing: hedonism, desire-based views, and objective list views. If you're a hedonist, then failing to optimise for happiness would be a mistake. However, as Owen points out, people often trade off happiness for other things they value, which is more consistent with objective list theories.

Over recent decades, the field of wellbeing science has settled on 'life satisfaction' as the primary metric for subjective wellbeing. It's still important to track other measures too (e.g., positive/negative affect, sense of meaning/purpose), but I share the view that life satisfaction should be the goal of society. 

That's because life satisfaction is the common unit that people use when they make trade-offs between happiness, purpose, duty etc. It's the 'all things considered' assessment of a person's life, according to what they value. Many attempts to measure wellbeing rely on a dashboard of indicators, but in all those cases, the relative weightings of the indicators are decided by the researchers rather than the subjects of the research and, in my view, that misses the whole point. Having said that, I've read some compelling arguments against the life satisfaction approach from Plant (2023) and Thoma (2021) which readers may find insightful.

I'm feeling confused by these two statements:

Although there are other problems, those I have repeated here make the recommendations of the report unsafe.

 

Even if one still believes the bulk of (appropriate) analysis paths still support a recommendation, this sensitivity should be made transparent.

The first statement says HLI's recommendation is unsafe, but the second implies it is reasonable as long as the sensitivity is clearly explained. I'm grateful to Greg for presenting the analysis paths which lead to SM < GD, but it's unclear to me how much those paths should be weighted compared to all the other paths which lead to SM > GD.

It's notable that Cuijpers (who has done more than anyone in the field to account for publication bias and risk of bias) is still confident that psychotherapy is effective.

I was also surprised by the use of 'unsafe'. Less cost-effective maybe, but 'unsafe' implies harm and I haven't seen any evidence to support that claim.

I recently discovered that GiveWell decided to exclude an outlier in their water chlorination meta-analysis. I'm not qualified to judge their reasoning, but maybe others with sufficient expertise will weigh in?

We excluded one RCT that meets our other criteria because we think the results are implausibly high such that we don't believe they represent the true effect of chlorination interventions (more in footnote).[4] It's unorthodox to exclude studies for this reason when conducting a meta-analysis, but we chose to do so because we think it gives us an overall estimate that is more likely to represent the true effect size.

[This comment is no longer endorsed by its author]

[Disclaimer: I worked at HLI until March 2023. I now work at the International Alliance of Mental Health Research Funders]

Gregory says

these problems are sufficiently major I think potential donors are ill-advised to follow the recommendations and analysis in this report.

That is a strong claim to make and it requires him to present a convincing case that GiveDirectly is more cost-effective than StrongMinds. I've found his previous methodological critiques to be constructive and well-explained. To their credit, HLI has incorporated many of them in the updated analysis. However, in my opinion, the critiques he presents here do not make a convincing case.

Taking his summary points in turn...

1. The literature on PT in LMICs is a complete mess. Insofar as more sense can be made from it, the most important factors appear to belong to the studies investigating it (e.g. their size) rather than qualities of the PT interventions themselves.

I think this is much too strong. The three meta-analyses (and Gregory's own calculations) give me confidence that psychotherapy in LMICs is effective, although the effects are likely to be small.

2. Trying to correct the results of a compromised literature is known to be a nightmare. Here, the qualitative evidence for publication bias is compelling. But quantifying what particular value of 'a lot?' the correction should be is fraught: numerically, methods here disagree with one another dramatically, and prove highly sensitive to choices on data exclusion.

There is no consensus on the appropriate methodology for adjusting for publication bias. I don't have an informed opinion on this, but HLI's approach seems reasonable to me, and I think it's reasonable for Greg to take a different view. From my limited understanding, neither approach makes GiveDirectly more cost-effective.

3. Regardless of how PT looks in general, StrongMinds, in particular, is looking less and less promising. Although initial studies looked good, they had various methodological weaknesses, and a forthcoming RCT with much higher methodological quality is expected to deliver disappointing results. 

We don't have any new data on StrongMinds, so I'm confused about why Greg thinks it's "less and less promising". HLI's Bayesian approach is a big improvement on the subjective weightings they used in the first cost-effectiveness analysis. As with publication bias, it's reasonable to hold different views on how to construct the prior, but personally, I believe that any psychotherapy intervention in LMICs costing less than $100 per patient is a near-certain bet to beat cash transfers. No specific model of psychotherapy consistently outperforms the others, so I don't find it surprising that training people to talk to other people about their problems is a more cost-effective way to improve wellbeing in LMICs than cash transfers. Cash transfers are much more expensive, and their effects on subjective wellbeing are also small.

4. The evidential trajectory here is all too common, and the outlook typically bleak. It is dubious StrongMinds is a good pick even among psychotherapy interventions (picking one at random which doesn't have a likely-bad-news RCT imminent seems a better bet). Although pricing different interventions is hard, it is even more dubious SM is close to the frontier of "very well evidenced" vs. "has very promising results" plotted out by things like AMF, GD, etc. HLI's choice to nonetheless recommend SM again this giving season is very surprising. I doubt it will weather hindsight well.

HLI had to start somewhere, and I think we should give credit to StrongMinds for being brave enough to open themselves up to the scrutiny they've faced. The three meta-analyses and the tentative analysis of Friendship Bench suggest there is 'altruistic gold' to be found here, and HLI has only just started to dig. The field is growing quickly and I'm optimistic about the trajectories of CE-incubated charities like Vida Plena and Kaya Guides.

In the meantime, although the gap between GiveDirectly and StrongMinds has clearly narrowed, I remain unconvinced that cash is the better option (but I'm open-minded and welcome pushback).

Thanks for providing such a thoughtful response. These value judgments are extremely difficult and it looks like you did the best you could with the evidence available. I haven't looked into the subjective wellbeing of suicide survivors but, if there's enough data, this could provide a helpful sense-check to your original discount rate.

Although means restriction is very successful at reducing suicide rates, I'm curious how it compares to social determinants (or psychotherapy) if the goal is DALYs/QALYs/WELLBYs. It seems plausible that public health interventions focused on improving quality of life could produce a larger overall benefit (for a larger population) than ones focused solely on reducing suicides (depending on philosophical views, of course!).
