Thank you for taking the time to reply. Your responses to 1a and 1b make sense to me. On 2, I'm still exploring and turning these ideas around in my mind - thank you for the paper. I wonder whether some of this could be tested by asking people about their number of desires, their general life satisfaction, and the percentage of their desires that have been fulfilled.
If I may, I'd like to expand a bit on number 1.
Thank you for presenting these views! This was very interesting.
I have some questions - apologies if I've missed something and if these are slightly out of scope.
Excited to see more work on mental health charities! Thank you for this. I will need a bit of time to read before I can comment in more detail.
What's stopping me from having a good overview of your results is that the cost-effectiveness of each proposed intervention is reported on a different mental health outcome. If I'm not mistaken, these outcomes use scales of different sizes. Do you have the results converted into effect sizes (Cohen's d)? That would put all the outcomes in the same unit, standard deviations, making them easier to compare with one another and with other interventions that are also evaluated in terms of affect/wellbeing (e.g., McGuire et al., 2022).
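For readers unfamiliar with the conversion, here is a minimal sketch of the standard formula (generic notation with placeholder subscripts, not values taken from the report):

```latex
% Standardised mean difference (Cohen's d): the raw difference in scale points
% between treatment (t) and control (c), divided by the pooled standard deviation.
d = \frac{\bar{X}_{t} - \bar{X}_{c}}{SD_{\text{pooled}}},
\qquad
SD_{\text{pooled}} = \sqrt{\frac{(n_{t}-1)\,SD_{t}^{2} + (n_{c}-1)\,SD_{c}^{2}}{n_{t}+n_{c}-2}}
```

Dividing by the pooled standard deviation is what removes the dependence on each instrument's particular range, which is why effect sizes on different scales become comparable.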
As my colleagues have mentioned in their responses (Michael's general response, Joel's technical response), the WELLBYs per $1,000 that GiveWell put forward for AMF depend on philosophical choices about the badness of death and the location of the neutral point. There is a range of plausible choices, and they can affect the results. HLI does not hold a view on which is correct.
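To make the sensitivity concrete, here is an illustrative back-of-the-envelope calculation under a deprivationist account of the badness of death; the life satisfaction score and life years below are invented for illustration and are not HLI's or GiveWell's figures:

```latex
% Deprivationist sketch: value of averting a death, in WELLBYs.
\text{WELLBYs gained} = (\overline{LS} - \text{neutral point}) \times \text{years of life gained}
% Illustrative inputs only: \overline{LS} = 4.5 on a 0-10 scale, 30 years of life gained.
% Neutral point at 0: (4.5 - 0) \times 30 = 135 WELLBYs
% Neutral point at 2: (4.5 - 2) \times 30 = 75 WELLBYs
```

Holding everything else fixed, moving the neutral point from 0 to 2 nearly halves the estimated value of saving a life, which is why this choice matters so much for the comparison between StrongMinds and AMF.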
We've whipped up an R Shiny app so that you, the reader, can play around with these choices and see how your views affect the comparison between StrongMinds and AMF.
Please note that this is a work in progress and was put together very quickly. Also, I'm using the free plan for hosting the app, so it might be a bit slow and limited in monthly bandwidth.
Hi Nick,
Thanks for pointing out both kinds of bias. These biases can cause a failure of comparability. Concretely, if an intervention causes you to give counterfactually higher scores as a matter of 'courtesy' to the researcher, then the intervention has changed the meaning of each response category.
I therefore take it that you don't think our particular tests of comparability will cover the two biases you mention. If so, I agree. However, my colleague has given reasons why we might be less worried about these sorts of biases.
I don’t think this can be tested in our current survey format, but it might be testable in a different design. We are open to suggestions!
Hello Henry,
Thank you for presenting this thought experiment.
The core question here is whether groups like the Sentinelese, who do not have the same levels of development as others, would report similar levels of SWB. I think the other comments here have done a great job of pointing out possible explanations.
Some brief answers and pointers follow. Many of these points have been discussed in more detail elsewhere.
Hi Nick, A quick comment to thank you for engaging with our work and for your insights. This is super interesting.
Arthritis is treated much like most other pain: large amounts of paracetamol, ibuprofen (and other NSAIDs), and diclofenac gel are what we use for arthritis.
This suggests the intervention could be really cost-effective, considering the price of NSAIDs! However, wouldn't side effects also be an issue here? Or is this less of a concern because the gains would be larger?
This is a cool tool!
Maybe it's my French/workers' rights bias, but I feel weird about the framing being directed at the workers. Shouldn't this be more about incentivising bosses to retain their workforce? "If you don't treat your employees well enough and they leave, it will cost you."