
Karthik Tadepalli

Economics PhD @ UC Berkeley
3149 karma · Pursuing a doctoral degree (e.g. PhD) · karthiktadepalli.com

Bio

I research a wide variety of issues relevant to global health and development. I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!

Sequences (1)

What we know about economic growth in LMICs

Comments (397)

One of the most interesting posts I've seen on the forum, ever. Thanks for writing this up!

What is Beast Philanthropy's approach to finding giving opportunities?

Effective altruism in the garden of ends is a great reflection from someone who experienced this dilemma.

This essay is a reconciliation of moral commitment and the good life. Here is its essence in two paragraphs:

Totalized by an ought, I sought its source outside myself. I found nothing. The ought came from me, an internal whip toward a thing which, confusingly, I already wanted – to see others flourish. I dropped the whip. My want now rested, commensurate, amidst others of its kind – terminal wants for ends-in-themselves: loving, dancing, and the other spiritual requirements of my particular life. To say that these were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.

Once, the material requirements of life were in competition: If we spent time building shelter it might jeopardize daylight that could have been spent hunting. We built communities to take the material requirements of life out of competition. For many of us, the task remains to do the same for our spirits. Particularly so for those working outside of organized religion on huge, consuming causes. I suggest such a community might practice something like “fractal altruism,” taking the good life at the scale of its individuals out of competition with impact at the scale of the world.

I'm agnostic on the right functional form for the VSLY, just as I'm agnostic on the right $\eta$. My point was just that you cannot have it be independent of $c$.

You need to impose some structure to get an exact identification of $\eta$, but that should not be interpreted as meaning that we can be fully agnostic about how $\eta$ affects valuations, the way you describe. So I don't think that puts us at the point you stated. Specifically, I think the framework you describe, where the VSLY relative to income doublings is constant while you shift $\eta$, is still inconsistent with utility maximization, and still not a valid way to interpret how $\eta$ affects the value of health vs income.

On a similarly simple intellectual level, I see "people should not suppress doubts about the critical shift in direction that EA has taken over the past 10 years" as a no-brainer. I do not see it as intellectual wank in an environment where every other person assumes p(doom) approaches 1 and timelines get shorter by a year every time you blink. EA may feature criticism circle-jerking overall, but I think this kind of criticism is actually important and not actually super well received (I perceive a frosty response whenever Matthew Barnett criticizes AI doomerism).

Yup, that's an accurate summary of my beliefs (with the caveat that $v$ is non-critical and can be replaced with a constant or whatever else you want; only $c^{-\eta}$ is essential). Put another way, $\eta$ is a single preference parameter that determines the marginal utility of income, and that affects how we value both income and health. I think any other assumption leads to internal inconsistency, or doesn't represent utility maximization.

Does that sound right? If so, my view would be that valuing an extra life year according to $v\,c^{\eta}$ for some constant $v$ is a functional form assumption on how people value an extra life-year. In some way, I see the data on $\eta$ from (2) as a test of that assumption. Whereas in your view, which I think is also reasonable, the assumption and the data on (3) are a test/verification of our estimate of $\eta$ from (2).
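For concreteness, the objects this exchange keeps referring to can be written out. This is a sketch assuming the standard isoelastic parameterization, which is what the discussion's symbols presuppose, not a quote from either commenter:

```latex
% Marginal utility of income c, with preference parameter \eta:
u'(c) = c^{-\eta}
% A life year worth v in utils, monetized at individual willingness to pay:
\mathrm{VSLY}(c) = \frac{v}{u'(c)} = v\,c^{\eta}
```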

What is an individual willing to pay for anything? Suppose buying a health good (e.g. an air purifier) gives you utility $v$, and it costs $p$. Then for every dollar you spend, you are getting $v/p$ utility. Is that optimal? It is optimal only if the marginal utility of spending a dollar on any other good is at most $v/p$. If you could get more than $v/p$ utility from spending a dollar elsewhere, then you would optimally refuse to buy the air purifier and spend your money elsewhere. That's why $u'(c)$ is always in the denominator; it represents the opportunity cost of money. If you didn't spend your money on a health good, you would spend it somewhere else. And the opportunity cost of money is obviously higher for poor people (trading off daily food for an air purifier is a hell of a lot less appealing than trading off a Chanel bag for an air purifier). So that's why I don't think there can be any consistent model of utility maximization where the VSLY[1] doesn't depend on $c$. Its dependence on $v$ is irrelevant and can be replaced with some constant if you want, but I am reasonably sure that $u'(c)$ can't be banished from the VSLY without rejecting individual utility maximization.
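A minimal numerical sketch of this decision rule, writing $v$ for the good's utility, $p$ for its price, and assuming the isoelastic form $u'(c) = c^{-\eta}$ for the marginal utility of income. All specific numbers here are invented for illustration:

```python
# Sketch: whether to buy a health good depends on the opportunity cost
# of money, u'(c), which is higher for the poor.
# Assumes isoelastic marginal utility u'(c) = c**(-eta); numbers invented.

def marginal_utility(c, eta=1.0):
    """Marginal utility of a dollar at consumption level c (isoelastic)."""
    return c ** (-eta)

def buys_good(v, p, c, eta=1.0):
    """Buy iff utility per dollar of the good, v/p, beats the
    opportunity cost of a dollar spent elsewhere, u'(c)."""
    return v / p >= marginal_utility(c, eta)

v, p = 0.002, 40.0                 # good's utility and price (invented)
print(buys_good(v, p, c=50_000))   # rich person: low u'(c), buys
print(buys_good(v, p, c=500))      # poor person: high u'(c), declines
```

The same comparison with "an air purifier" replaced by "a year of life" is exactly the revealed-preference VSLY of the footnote: the denominator $u'(c)$ never drops out.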

So I think the crux between us is whether you see your position as consistent with VSLYs being derived from individual utility maximization. If it is, then please help me understand, because that would be a major update for me. But if it's not, then I think we are at this point:

Now, the key assumption is that we monetize VSLs using individual willingness-to-pay. Maybe you think social willingness-to-pay should be determined by the marginal utility of money to the social planner, which is common across people, rather than by the WTP of individuals who vary in their income levels. This is a defensible normative position. I would just note that the marginal utility of money from a donor's perspective is the value you could otherwise get by spending that money. For us, that benchmark use of money is GiveDirectly cash transfers. If you think that way, you will end up with a marginal utility of money that is close to a poor person's marginal utility of money, so the original framework is still representative of how valuable health interventions are among poor people.


  1. If you substitute "buying an air purifier" with "buying a year of life", then my argument goes from estimating "willingness to pay for an air purifier" to estimating "willingness to pay for an extra year of life". This is exactly what the VSLY represents, when it is estimated from individual revealed preferences.

I think EAs who consume economics research are accustomed to the challenges of interpreting applied microeconomic research: causal inference challenges and such. But I don't think they are accustomed to interpreting structural models critically, which is going to become more of a problem as structural models of AI and economic growth become more common. The most common failure mode for interpreting structural research is to not recognize model-concept mismatch. It looks something like this:

  1. Write a paper about <concept> that requires a structural model, e.g. thinking about how <concept> affects welfare in equilibrium.
  2. Write a model in which <concept> is mathematized in a way that represents only <narrow interpretation>.
  3. Derive conclusions that only follow from <narrow interpretation>, and then conclude that they apply to <concept>.

which is exactly what you've identified with Acemoglu's paper.

Model-concept mismatch is endemic to both good and bad structural research. Models require specificity, but concepts are general, so they have to be shrunk in particular ways to be represented in a model, and some of those ways of representing them will be mutually exclusive and lead to different conclusions. But it means that whenever you read an abstract of a paper that says "we propose a general equilibrium model of <complicated concept>", never take it at face value. You will almost always find that its interpretation of <complicated concept> is extremely narrow.

Good research a) picks reasonable ways to do that narrowing, and b) owns what it represents and what it does not represent. I think Acemoglu's focus on automation is reasonable, because Acemoglu lives, breathes and dreams automation. It is his research agenda. It's important. But he does not own what it represents and what it does not represent, and that's bad.

Likewise, great discussion!

I don't understand using $\eta$ and the revealed preferences independently of each other. $\eta$ only makes sense if it is consistent with the revealed preferences that people place on health vs income. If revealed preferences show that people have a constant valuation of income doublings vs life, then that is only consistent with $\eta = 1$, and I see no justification for using $\eta \neq 1$. How would you justify it?
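That consistency claim can be checked numerically under the standard isoelastic assumption $u'(c) = c^{-\eta}$: the value of a life year, measured in units of income doublings, is independent of income only when $\eta = 1$. The life-year utility $v$ and the income levels below are illustrative assumptions:

```python
# Sketch: a life year's value measured in "income doublings" is constant
# across incomes iff eta = 1 (log utility). Assumes isoelastic utility;
# v and the income levels are invented for illustration.
import math

def doubling_utility(c, eta):
    """Utility gain from doubling income: integral of x**(-eta) from c to 2c."""
    if eta == 1.0:
        return math.log(2)
    return ((2 * c) ** (1 - eta) - c ** (1 - eta)) / (1 - eta)

def life_year_in_doublings(v, c, eta):
    """A life year worth v utils, expressed in income-doubling units."""
    return v / doubling_utility(c, eta)

v = 5.0
for eta in (1.0, 1.5):
    ratios = [life_year_in_doublings(v, c, eta) for c in (500, 5_000, 50_000)]
    print(eta, ratios)
# With eta = 1 the three ratios are identical; with eta = 1.5 they differ.
```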

My earlier statement did not rely explicitly on the VSLY being $v\,c^{\eta}$. However, what it does rely on is the VSLY-income ratio being increasing in $c$. If we assume the value of health is constant and that $\eta = 1$, then the VSLY is $v\,c$, so the VSLY-income ratio is constant. I'm down to assume the value of health is constant, and I don't feel strongly about $\eta$ even though I think it's probably close to $1$. But my loose reading of the VSLY-income literature is that the ratio is increasing in $c$.
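Under the same isoelastic sketch (again an assumption: $u'(c) = c^{-\eta}$ with a constant life-year utility $v$), the claim about the ratio follows in one line:

```latex
\frac{\mathrm{VSLY}(c)}{c} = \frac{v\,c^{\eta}}{c} = v\,c^{\eta-1},
\qquad \text{constant in } c \iff \eta = 1,
\qquad \text{increasing in } c \iff \eta > 1.
```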

Um... Why did you copy paste Ben Millwood's comment?
