
Karthik Tadepalli

Economics PhD @ UC Berkeley // Consulting Researcher @ GiveWell
4261 karma · Pursuing a doctoral degree (e.g. PhD) · karthiktadepalli.com

Bio

I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell (but nothing I say on the Forum is ever representative of GiveWell). I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!

Sequences (1)

What we know about economic growth in LMICs

Comments (502)

Without this assumption, recursive self-improvement is a total non-starter. RSI relies on an improved AI being able to design future AIs ("we want Claude N to build Claude N+1").

Skeptic says "longtermism is false because premises X don't hold in case Y." Defender says "maybe X doesn't hold for Y, but it holds for case Z, so longtermism is true. And also Y is better than Z so we prioritize Y."

What is being proven here? The prevailing practice of longtermism (AI risk reduction) is being defended by a case whose premises are meaningfully different from the prevailing practice. It feels like a motte and bailey.

It's clearly not the case that asteroid monitoring is the only or even a highly prioritised intervention among longtermists. That makes it uncompelling to defend longtermism with an argument in which the specific case of asteroid monitoring is a crux.

If your argument is true, why don't longtermists actually give a dollar to asteroid monitoring efforts in every decision situation involving where to give a dollar?

I certainly agree that you're right about why people diversify, but I think the interesting challenge is to understand under what conditions this behavior is optimal.

You're hinting at a bargaining microfoundation, where diversification can be justified as the solution arrived at by a group of agents bargaining over how to spend a shared pot of money. I think that's fascinating and I would explore that more.
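
To illustrate what such a microfoundation could look like (this is my own sketch with made-up numbers, not anything from the thread): suppose each of several agents backs a different cause and they bargain over one shared pot. The symmetric Nash bargaining solution maximizes the product of their gains, and that alone delivers an interior, diversified split even though each agent's own optimum is a corner.

```python
# Minimal sketch (my illustration, not from the comment): a Nash-bargaining
# microfoundation for diversification. Three agents each back a different
# cause and bargain over one shared pot of money. The symmetric Nash solution
# maximizes the product of the agents' gains (disagreement point 0), which
# gives an interior split even though each agent's own optimum is a corner.
import numpy as np
from scipy.optimize import minimize

n = 3                                   # hypothetical: three agents, three causes
x0 = np.full(n, 1.0 / n)                # start from an even split
budget = {"type": "eq", "fun": lambda x: x.sum() - 1.0}

def neg_log_nash_product(x):
    # Agent i's payoff is the share x_i spent on their cause; maximize prod(x_i)
    # by minimizing -sum(log(x_i)). The small epsilon avoids log(0).
    return -np.sum(np.log(x + 1e-9))

res = minimize(neg_log_nash_product, x0,
               bounds=[(0.0, 1.0)] * n, constraints=[budget])
print(res.x.round(3))                   # ~[0.333, 0.333, 0.333]: fully diversified
```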

Maximizing a linear objective always leads to a corner solution. So to get an optimal interior allocation, you need to introduce nonlinearity somehow. Different approaches to this problem differ mainly in how they introduce and justify nonlinear utility functions. I can't see where the nonlinearity is introduced in your framework, which makes me suspect the credence-weighted allocation you derive is not actually the optimal allocation even under model uncertainty. Am I missing something?
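
As a concrete illustration of the corner-versus-interior point (my own sketch with made-up numbers, not the post's framework): a linear objective over a budget simplex sends the whole budget to the single best option, while a concave log objective yields an interior allocation roughly proportional to the values.

```python
# Minimal sketch (made-up numbers, not the post's framework): maximizing over a
# budget simplex. A linear objective puts the whole budget on the single
# highest-value option (corner solution); a concave log objective gives an
# interior allocation, roughly proportional to the values.
import numpy as np
from scipy.optimize import minimize

values = np.array([1.0, 1.5, 2.0])      # hypothetical per-dollar value of 3 options
n = len(values)
budget = {"type": "eq", "fun": lambda x: x.sum() - 1.0}
bounds = [(0.0, 1.0)] * n
x0 = np.full(n, 1.0 / n)

# Linear objective: maximize values @ x  ->  corner solution
linear = minimize(lambda x: -(values @ x), x0, bounds=bounds, constraints=[budget])
print("linear: ", linear.x.round(3))    # ~[0, 0, 1]

# Concave objective: maximize sum(values * log(x))  ->  interior solution
concave = minimize(lambda x: -(values @ np.log(x + 1e-9)), x0,
                   bounds=bounds, constraints=[budget])
print("concave:", concave.x.round(3))   # ~[0.222, 0.333, 0.444] = values / values.sum()
```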

Apropos of nothing, it will be funny to see SummaryBot summarizing an AI summary.

I think the phrasing is probably a joke, but the substance is the same as the post's.

For what it's worth, "not consistently candid" is definitely a joke about the OpenAI board saying that Sam Altman was "not consistently candid" with them, rather than a statement of context.

Thanks for the link to your thoughts on why a crash is likely. I think you underestimate the likelihood of the US government propping up AI companies. Just because they didn't invest money in the Stargate expansion doesn't mean they aren't reserving the option to do so later if necessary. It seems clear that Elon Musk is personally very invested in AI. Even aside from his personal involvement, the fact that China/DeepSeek is in the mix points towards even a normal government offering strong support to American companies in this race.

If you believe that the US government will prop up AI companies to virtually any level they might realistically need by 2029, then I don't see a crash happening.

The author of this post must be over the moon right now.
