huw

Co-Founder & CTO @ Kaya Guides
1929 karma · Working (6–15 years) · Sydney, NSW, Australia
huw.cool

Bio: I live for a high disagree-to-upvote ratio

Participation: 2
Comments: 267

How are you thinking about trade as a deterrent? The typical defence here brings up TSMC, but IMHO it should also bring up Foxconn, a gigantic employer in China, and the point that if the Chinese consumer economy collapsed, it could cause enough headaches to make an attack unreasonable.

In this case, the two main courses of action would look like:

  1. Use policy and lobbying within the US to reduce tariffs and chip controls (neglected on the margins, ex. chip controls, but probably not broadly)
  2. Find other means of increasing consumer spending on foreign goods within China (seems not neglected as corporations have a direct incentive to pursue it)

I haven’t used it extensively for research tasks yet, but I do really worry about that. There is something I feel viscerally when I ‘get’ a paper, and it often requires a deep look into the mechanics of how a study was run (i.e. reading the whole thing); that’s just not going to come from a skim-read. There’s lots of nuance in the literature my intervention is based on that, if I didn’t understand it, would lead me to inappropriately embellish my results.

I think if I was using research tools, they’d save me a lot of time in the googling phase, but then I’d still skim papers for value, and hand-read the most important ones. Anecdotally, this seems to be what most full-time researchers do.

(I also find talking confidently about the details of papers impresses people I talk to, which can be valuable in and of itself)


How robust is your assumption that the value of life events stays constant? If it weren’t true, then there may not be any rescaling to explain. Intuitively, if wellbeing saturates at the top end, a really positive event genuinely might not move the needle as much. In other words, if my life is already a 9, is it realistic to expect getting married will take me to a 10, a perfect life?

HLI took a good, but very preliminary, look at the linearity/compression of wellbeing here, and it seems like it is actually very under-studied. This seems odd to me, considering that it would probably dramatically shift where you allocate resources if linearity were true vs. if there were bigger gains to be made in the middle of the spectrum.
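To make the saturation worry concrete, here’s a toy compression model (my own illustrative assumption, not anything from HLI’s analysis): if reported scores are a concave transform of some unbounded latent wellbeing, the same life event moves the reported score less for someone already near the top of the scale.

```python
import math

def reported(latent: float, k: float = 0.3) -> float:
    """Map unbounded latent wellbeing onto a bounded 0-10 reported
    scale via a concave (saturating) transform. Toy model only."""
    return 10 * (1 - math.exp(-k * latent))

EVENT = 2.0  # the same life event, as a boost on the latent scale

mid_gain = reported(3.0 + EVENT) - reported(3.0)  # someone mid-scale
top_gain = reported(9.0 + EVENT) - reported(9.0)  # someone near the top

# The identical event produces a much smaller reported-score gain
# for the person whose life is already near the top of the scale.
print(round(mid_gain, 2), round(top_gain, 2))  # → 1.83 0.3
```

The shape of the transform (and `k`) is arbitrary here; the point is only that under any saturating mapping, “my life is a 9” leaves little reported headroom even for genuinely large latent gains.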

(Apologies if you have addressed this somewhere)

All of the headlines are trying to run with the narrative that this is due to Trump pressure, but I can’t see a clear mechanism for this. Does anyone have a good read on why he’s changed his mind? (Recent events feel like: Buffett moving his money to his kids’ foundations and retiring from BH, divorce)

I liked Bob Jacobs’ essay ‘Is Effective Altruism neocolonial?’.

Aid dependency is a really interesting problem, where charities can become victims of their own success. I think we should be very thoughtful about counterfactual government funding—even when, due to natural government inefficiencies, it might be less cost-effective.

One place I think EAs can do a lot of good is in charity entrepreneurship. There are often good emerging ideas that need a strong evidence base before governments will adopt them, but a shortage of ambitious people willing to take these risks. At Kaya Guides, we see our role as pioneering a novel treatment method, and then working with governments to implement it. Even if we don’t do this ourselves, our counterfactual impact will still have been to create an evidence base that encourages others to do so!


Thanks for the feedback! I’m not really smart enough to figure something like that out tbh, and by the point I’d seen that my realistic options were within an order of magnitude of each other (and both high-risk with high overlap) I was pretty satisfied that my decision was likely gonna hinge on something else.

Maybe your adjustment would take it outside that range, but I think at the point of extreme success these charities would be selling impact at competitive rates (so funders would be getting marginal value out of them), and funding would more than likely be coming from counterfactual funders (ex. government). Maybe this is truer in GHD, and especially mental health, than in animal welfare, which seems more concentrated. But yeah, at this point I was pretty satisfied that Kaya Guides had minimal risk of substantial funding displacement in a success scenario (I can’t be too specific about this in public), so I picked it.

(Again, I’m just highlighting my specific scenario; there’s definitely an attempt to generalise here, but I didn’t think too hard through it)

No, but I think that’s reasonable in most cases (although hard to figure out exactly how to allocate it).

I didn’t. It evidently works (as do cooperatives, which I was also excited to found), but I think the big worry is at the top end. It’s very hard to imagine a FAANG company structured this way. And some of the average-case calculations above are skewed upwards by a handful of top success stories.


I looked at a handful of mental health startups to inform my guesses on impact. I looked most deeply into BetterHelp, and you can clearly see from their numbers that their prices have almost doubled in 5 years (and steadily, too; this wasn’t a COVID thing). From the research I did, my sense was that the increase wasn’t getting passed back onto their counsellors, nor fuelling an increase in growth spending. There’s no way it got twice as expensive to deliver therapy.
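As a quick sanity check on that figure, “almost doubled in 5 years” works out to roughly 15% compound annual price growth, which is hard to explain with cost increases alone:

```python
# Implied compound annual growth rate (CAGR) for a price
# that "almost doubles" over 5 years.
ratio = 2.0   # final price / starting price
years = 5
cagr = ratio ** (1 / years) - 1
print(f"{cagr:.1%}")  # → 14.9%
```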

I think if we had to point to a single mechanism: once you run out of user growth, as BetterHelp have, your investors push you to find an equilibrium price. That price is necessarily going to be higher than the price that guarantees the broadest impact, and likely higher than the price for the most (breadth × depth) impact.

My best guess is that regional pricing can act as a crude form of means-testing, but it probably comes with a perverse incentive to ignore the cheaper regions (as BetterHelp have; almost all of their users are in the U.S.).

(All of that goes out the window if you don’t go direct to consumers—I think deeper forms of healthtech might be quite value-aligned!)

What do these numbers represent?

How much money a CE charity might be able to raise on average. (This makes the assumption that deployed cash from a charity is roughly equivalent to donated cash from Founding to Give, which is what the other numbers represent.)
