Karthik Tadepalli

Economics PhD @ UC Berkeley
4461 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · blog.karthiktadepalli.com

Bio

I think about technological progress, global development, and AI's economic impacts. I write about these topics on my blog, Beyond Imitation.

Sequences: 1

What we know about economic growth in LMICs

Comments: 508

Here is an argument for why that might be better:

  • The evidence base behind EA's preferred global health interventions (e.g. malaria nets) is much stronger than the evidence base behind virtually any global development intervention. There are more papers and more high-quality evidence (RCTs)
  • Health interventions are cheap because they focus on delivering commodities (nets, pills). Development interventions (that work) tend to have higher cost-per-person because they're more high-touch and tailored (e.g. graduation programs)

But if you want the descriptive story of why it ended up this way, it goes more like this:

  • GiveWell focused on global health because they were searching for the cheapest and most evidence-backed interventions; other people followed suit
  • Early EA philosophers used global health interventions as evocative examples of how stark cost-effectiveness differences could be, and how that created a moral imperative (e.g. Toby Ord)
  • Global health had a much more established culture of considering cost-effectiveness than global development (which has since caught up somewhat), so it was much easier to find evidence of cost-effectiveness for health interventions

Online: much of the content on the EA forum is quite specialized. I, in principle, absolutely love that people are writing 10,000-word reports on shrimp sentience and posting them on the forum. That is what actually doing the work looks like: rather than speculating at a high level about whether shrimp could suffer, and if so what that would mean for us, you go out and actually try to push our knowledge forward in detail. However, I have absolutely no desire to read it.

I have long had the opposite criticism: that almost everything that gets high engagement on the Forum is lowest-common-denominator content, usually community-related posts or something about current events, rather than technical writing that has high signal and helps us make progress on a topic. So in a funny way, I have come to the same conclusion as you:

I won’t sugar-coat it: the main reason I don’t engage so much with EA these days is that I find it boring.

but for the opposite reason.

I don't think of altruism as being completely selfless. Altruism is a drive to help other people. It exists within all of us to a greater or lesser extent, and it coexists with all of our other desires. Wanting things for yourself or for your loved ones is not opposed to altruism.

When you accept that - and the point Henry makes that it isn't zero-sum - there doesn't seem to be any conflict.

Yes, nothing in this post seems less likely than an EA trying to convince socialists to become EAs and subsequently being convinced of socialism.

This is a valid statement but non-responsive to the actual post. The argument is that there is intuitive appeal in having a utility function with a discontinuity at zero (i.e. a jump in disutility from causing harm), and ~standard EV maximisation does not accommodate that intuition. That is a totally separate normative claim from arguing that we should encode diminishing marginal utility.
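To make the distinction concrete, here is a minimal sketch (my own illustration, not from the post) of what a discontinuity at zero looks like next to a standard linear utility function. The function names and the `penalty` parameter are hypothetical:

```python
def u_standard(x):
    # Standard risk-neutral utility: causing harm is just negative value,
    # treated symmetrically with benefit.
    return x

def u_harm_averse(x, penalty=2.0):
    # Hypothetical utility with a jump at zero: any net harm incurs a
    # fixed extra disutility `penalty` on top of its magnitude.
    return x if x >= 0 else x - penalty

# A coin-flip gamble of +1 or -1:
# under u_standard, expected utility is 0.5*1 + 0.5*(-1) = 0;
# under u_harm_averse, it is 0.5*1 + 0.5*(-3) = -1.
```

The point of the sketch is that no amount of curvature (diminishing marginal utility) reproduces the jump: the harm-averse agent rejects fair gambles that any smooth utility function with the same local slope would accept.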

I'm obviously missing something trivial here, but also I find it hard to buy "limited org capacity"-type explanations for GW in particular, given total funding moved, how long they've worked, their leading role in the grantmaking ecosystem, etc.

This should be very easy for you to buy! The opportunity cost of lookbacks is investigating new grants. It's not obvious that lookbacks are the right way to spend limited research capacity. Worth remembering that GW only has around 30 researchers and makes grants in a lot of areas. And while they are a leading EA grantmaker, it's only recently that their giving has scaled up enough to make them a notable player in the broader development ecosystem.

Without this assumption, recursive self-improvement is a total non-starter. RSI relies on an improved AI being able to design future AIs ("we want Claude N to build Claude N+1").

Skeptic says "longtermism is false because premises X don't hold in case Y." Defender says "maybe X doesn't hold for Y, but it holds for case Z, so longtermism is true. And also Y is better than Z so we prioritize Y."

What is being proven here? The prevailing practice of longtermism (AI risk reduction) is being defended by a case whose premises are meaningfully different from the prevailing practice. It feels like a motte and bailey.

It's clearly not the case that asteroid monitoring is the only or even a highly prioritised intervention among longtermists. That makes it uncompelling to defend longtermism with an argument in which the specific case of asteroid monitoring is a crux.

If your argument is true, why don't longtermists actually give a dollar to asteroid monitoring efforts in every decision situation involving where to give a dollar?

I certainly agree with your description of why people diversify, but I think the interesting challenge is to understand under what conditions this behavior is optimal.

You're hinting at a bargaining microfoundation, where diversification can be justified as the solution arrived at by a group of agents bargaining over how to spend a shared pot of money. I think that's fascinating and I would explore that more.
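A minimal sketch of that microfoundation (my own illustration, with hypothetical names and a simple grid search): two donors with different preferred causes bargain over one shared dollar. Each alone would give everything to their own cause, but the Nash bargaining solution over the shared pot splits it, so diversification falls out of the bargaining:

```python
def nash_split(steps=1000):
    # Two agents bargain over a pot of 1 unit. Agent A's utility is the
    # share x going to cause A; agent B's utility is 1 - x. The
    # disagreement point is no agreement, so nothing is given and both
    # get 0. The Nash bargaining solution maximises the product of
    # gains over disagreement: x * (1 - x).
    best_x, best_product = 0.0, -1.0
    for k in range(steps + 1):
        x = k / steps            # candidate share to cause A
        product = x * (1 - x)    # Nash product over the disagreement point
        if product > best_product:
            best_x, best_product = x, product
    return best_x

# With symmetric agents the solution is an even split, i.e. the pot is
# diversified across both causes even though neither agent, alone,
# would diversify at all.
```

Asymmetric bargaining weights (raising each agent's gain to a different power in the product) would tilt the split, which is one way the framework could rationalise unequal but still diversified portfolios.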
