Karthik Tadepalli

Economics PhD @ UC Berkeley // Consulting Researcher @ GiveWell
4233 karma // Pursuing a doctoral degree (e.g. PhD) // karthiktadepalli.com

Bio

I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell (but nothing I say on the Forum is ever representative of GiveWell). I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!

Sequences (1)

What we know about economic growth in LMICs

Comments (499)

I certainly agree with your description of why people diversify, but I think the interesting challenge is to understand under what conditions this behavior is optimal.

You're hinting at a bargaining microfoundation, where diversification is justified as the solution a group of agents arrives at when bargaining over how to spend a shared pot of money. I think that's fascinating and worth exploring further.
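To sketch one version of that (my own illustration, not anything from the post — assuming a zero disagreement point and Nash bargaining between two agents who each care about a different cause):

$$
\max_{x_1 + x_2 = B}\; x_1^{\theta}\, x_2^{\,1-\theta}
\quad\Longrightarrow\quad x_1 = \theta B,\;\; x_2 = (1-\theta)B,
$$

where $B$ is the shared pot and $\theta$ is agent 1's bargaining weight. Each agent's own payoff is linear in spending on their cause, yet the bargained allocation is interior, because the Nash product itself is nonlinear.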

Maximizing a linear objective always leads to a corner solution, so to get an optimal interior allocation you need to introduce nonlinearity somehow. Different approaches to this problem differ mainly in how they introduce and justify nonlinear utility functions. I can't see where the nonlinearity enters your framework, which makes me suspect that the credence-weighted allocation you derive is not actually optimal, even under model uncertainty. Am I missing something?
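To make the corner-solution point concrete (a minimal example of my own, with assumed notation: credences $p_i$ summing to one and a budget $B$):

$$
\max_{x \ge 0,\; \sum_i x_i = B}\; \sum_i p_i x_i
\quad\Longrightarrow\quad x_{i^*} = B \text{ for } i^* = \arg\max_i p_i,\;\; x_j = 0 \text{ otherwise},
$$

a corner solution. By contrast, a concave objective such as $\sum_i p_i \log x_i$ has first-order conditions $p_i/x_i = \lambda$, which yield the interior, credence-weighted allocation $x_i = p_i B$. So the credence-weighted split is optimal only under a particular nonlinearity, not under linear expected value.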

Apropos of nothing, it will be funny to see SummaryBot summarizing an AI summary.

I think the phrasing is probably a joke, but the substance is the same as the post's.

For what it's worth, "not consistently candid" is definitely a joke about the OpenAI board saying that Sam Altman was "not consistently candid" with them, rather than a statement of context.

Thanks for the link to your thoughts on why you think a crash is likely. I think you underestimate the likelihood of the US government propping up AI companies. Just because it didn't invest money in the Stargate expansion doesn't mean it isn't reserving the option to do so later if necessary. It seems clear that Elon Musk is personally very invested in AI, and even aside from his personal involvement, the fact that China/DeepSeek is in the mix points towards even a normal government offering strong support to American companies in this race.

If you believe that the US government will prop up AI companies to virtually any level they might realistically need by 2029, then I don't see a crash happening.

The author of this post must be over the moon right now.

IQ grew over the entire 20th century (the Flynn effect). Even if it's declining now, it is credulous to take a trend from a few decades and extrapolate it millennia into the future, especially when that few-decade trend is itself a reversal of an even longer one.

Compare this to other trends that we extrapolate out for millennia: increases in life expectancy and income. These are much more robust. Income has been rising steadily since the Industrial Revolution, and life expectancy possibly for even longer. That doesn't make extrapolation watertight by any means, but it's a far stronger foundation.

Also, I don't know much about the social context of this article that you say is controversial, but it strikes me as really weird to say "here's an empirical fact that might have moral implications, but EAs won't acknowledge it because it's taboo and they're not truthseeking enough". That's putting the cart a few miles before the horse.

The True Believer by Eric Hoffer is a book about the psychology of mass movements. I think it contains important cautions for EAs thinking about their own relationship to the movement.

There is a fundamental difference between the appeal of a mass movement and the appeal of a practical organization. The practical organization offers opportunities for self-advancement, and its appeal is mainly to self-interest. On the other hand, a mass movement, particularly in its active, revivalist phase, appeals not to those intent on bolstering and advancing a cherished self, but to those who crave to be rid of an unwanted self. A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation.

I wanted to write a draft amnesty post about this, but I couldn't write anything better than this Lou Keep essay about the book, so I'll just recommend you read that.

Something I personally would find super valuable is to see you work through a forecasting problem "live" (in text). Take an AI question that you would like to forecast, and then describe how you actually go about making that forecast: the information you seek out, how you analyze it, and especially how you make it quantitative. That would

  1. make the forecast process more transparent for someone who wanted to apply skepticism to your bottom line
  2. help me "compare notes", i.e. work through the same forecasting question you pose, come to my own conclusion, and then see how my reasoning compares to yours.

This exercise does double duty as "substantive take about the world for readers who want an answer" and "guide to forecasting for readers who want to do the same".
