huw

Co-Founder & CTO @ Kaya Guides
2271 karma · Working (6-15 years) · Sydney NSW, Australia
huw.cool

Bio

I live for a high disagree-to-upvote ratio

Comments (321)

5% of Americans identify as being on the far left

However, I would strongly wager that the majority of this sample does not believe the three ideological points you outlined around authoritarianism, terrorist attacks, and Stalin & Mao (and I think it is also quite unlikely that the people viewing the TikTok in question believe them either). Those latter beliefs are extremely fringe.

My hobby horse around these parts has been that EA should be less scared of reaching out to the left (where I’m politically rooted) and of thinking about what commonalities we have. I have already seen this in the animal welfare movement, where EAs are unafraid to work with existing vegan activism and have done a good job of selling philanthropic funding to them, despite large differences of opinion at the margins.

As you note, it’s not unreasonable that EA looks very far left from some perspectives. GiveDirectly is about direct empowerment, and I would argue that a lot of global development work, especially economic development, can be anti-imperialist and generally accords with Marxist ideas of the Internationale. Better outreach and PR management in these communities would go a long way, in the same way it has for the political centre-left, who seem to get far more attention from EA.

I strongly agree, and would add that this is a big concern of mine in direct intervention delivery as well. Kaya Guides is fortunate enough to have a team that understands digital marketing, and we decided early on to recruit participants for our intervention using Meta ads. When I joined, I implemented direct conversion tracking from our intervention. Combined, these have reduced our recruitment costs to around US$1, which is substantially cheaper than forming partnerships in an organisation’s early years, and much more flexible.
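
For anyone unfamiliar with growth marketing: the figure above is just a blended cost per acquisition, i.e. total ad spend divided by tracked sign-ups, which is what direct conversion tracking lets you measure. A minimal sketch with entirely hypothetical numbers (this is not Kaya Guides’ real data or code):

```python
def cost_per_recruit(ad_spend_usd: float, recruits: int) -> float:
    """Blended acquisition cost: total ad spend divided by tracked sign-ups."""
    if recruits <= 0:
        raise ValueError("need at least one tracked recruit")
    return ad_spend_usd / recruits

# Hypothetical month: US$1,050 of Meta ad spend yielding 1,050 tracked recruits
print(cost_per_recruit(1050, 1050))  # 1.0 — i.e. around US$1 per recruit
```

The point of tracking conversions directly (rather than clicks or impressions) is that the denominator counts people who actually entered the intervention, so the ratio is a real unit cost you can compare against the cost of recruiting through partnerships.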

I have been advocating for this approach where I can within AIM, for interventions that suit it (and sometimes go a step further and advocate for selecting interventions that are well suited to digital delivery).

Let me know if there are ways I can help advocate for better growth marketing within EA interventions, I am very passionate about this!

A few points:

  1. There is still a lot of progress to be made in low-income country psychotherapy, which I think many EAs find counterintuitive. StrongMinds and Friendship Bench could both be about 5× cheaper, and have found ways to get substantially cheaper every year for the past half decade or so. At Kaya Guides, we’re exploring further improvements and should share more soon.
  2. Plausibly, you could double cost-effectiveness again if it were possible to replace human counsellors with AI in a way that maintained retention (the jury is still out here).
  3. The Happier Lives Institute has been looking at these kinds of interventions; their Promising Charities, Pure Earth and Taimaka, both appear to sustainably improve long-run mental health by treating lead poisoning and malnutrition respectively.
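
One way to see why points 1 and 2 matter together: independent cost reductions multiply. A sketch, taking only the 5× and 2× figures from the points above (the baseline is hypothetical):

```python
from math import prod

def combined_multiplier(*multipliers: float) -> float:
    """Cost-effectiveness gains from independent cost reductions multiply."""
    return prod(multipliers)

# 5x from cheaper delivery, then 2x again if AI counsellors hold retention
print(combined_multiplier(5, 2))  # 10
```

So if both improvements panned out, the same budget could treat on the order of ten times as many people, which is why the retention question in point 2 is worth resolving.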

I am surprised at this, if only because I remember the Gulf states were quite keen on bringing production into their countries, and I would have thought they’d have declared it halal sooner!


(Yep, I’m not having a go at the mission here, more at the nuances of measurement)

Small drive-by question for you: in your opinion, if C. elegans is conscious and has some moral significance, and suppose we could hypothetically train artificial neural networks to simulate a C. elegans, would the resulting simulation have moral significance?

If so, what other consequences flow from this? Do image recognition networks running on my phone have moral significance? Do LLMs? Are we already torturing billions of digital minds?

If not, what special sauce does C. elegans have that an artificial neural network does not? (If you’re not sure, where do you think it might lie?)

(Asking out of genuine curiosity; I haven’t had a lot of time to engage with this stuff)

I guess I don’t find your conclusion intuitive. I’m sure there is a range of preference questions you could ask these extreme sufferers. For example: whether they, at a 5/10 life satisfaction, would trade places with someone in a low-income country who does not have their condition but reports a life satisfaction of 2/10.

  • If you believe that they would make this trade, then surely there is something that their life satisfaction score is simply failing to capture
  • If you believe that they wouldn’t make this trade, then either the preference game isn’t eliciting some true measure of their suffering, or else: why should we allocate hypothetical marginal dollars to their suffering rather than to that of people with lower life satisfaction?

My hunch is that the former is true: there is something you can elicit from these people that isn’t being captured in the Cantril Ladder. (In my work, we’ve found the Cantril Ladder to be unreliable in other ways.) But on the other side of this, I do worry about rejecting people’s own accounts of their experiences. It may literally be true that these people are somewhat happy with their lives, and that we should focus our resources on those who report that they aren’t!


I take this as an indicator that we need to work harder to demonstrate that global mental health is a cause area worth investing in :)
