I research a wide variety of issues relevant to global health and development. I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
This is a valid statement, but it's non-responsive to the actual post. The argument is that there is intuitive appeal in having a utility function with a discontinuity at zero (i.e. a jump in disutility from causing harm), and ~standard EV maximisation does not accommodate that intuition. That is a totally separate normative claim from arguing that we should encode diminishing marginal utility.
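To make the contrast concrete, here is a minimal sketch in my own notation (not from the original post) of the two separate claims: a fixed jump in disutility at zero for causing harm, versus smoothly diminishing marginal utility.

```latex
% Toy illustration; notation is mine, not from the original post.
% Harm aversion as a discontinuity at zero: a fixed penalty c > 0 whenever harm is caused.
\[
U(x) =
\begin{cases}
x     & x \ge 0 \\
x - c & x < 0
\end{cases}
\]
% Diminishing marginal utility: a smooth, concave function with no jump anywhere.
\[
V(x) = \log(1 + x), \qquad x \ge 0
\]
```

One can endorse the jump term c without endorsing concavity, and vice versa; that is the sense in which the two claims are separate.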
I'm obviously missing something trivial here, but I also find it hard to buy "limited org capacity"-type explanations for GW in particular, given the total funding they've moved, how long they've been operating, their leading role in the grantmaking ecosystem, etc.
This should be very easy for you to buy! The opportunity cost of lookbacks is investigating new grants, and it's not obvious that lookbacks are the right way to spend limited research capacity. It's worth remembering that GW only has around 30 researchers and makes grants in a lot of areas. And while they are a leading EA grantmaker, it's only recently that their giving has scaled up enough to make them a notable player in the broader development ecosystem.
Skeptic says "longtermism is false because premises X don't hold in case Y." Defender says "maybe X doesn't hold for Y, but it holds for case Z, so longtermism is true. And also, Y is better than Z, so we prioritize Y."
What is being proven here? The prevailing practice of longtermism (AI risk reduction) is being defended by a case whose premises are meaningfully different from those of the prevailing practice. It feels like a motte and bailey.
It's clearly not the case that asteroid monitoring is the only or even a highly prioritised intervention among longtermists. That makes it uncompelling to defend longtermism with an argument in which the specific case of asteroid monitoring is a crux.
If your argument is true, why don't longtermists actually give a dollar to asteroid monitoring efforts in every decision situation involving where to give a dollar?
I certainly agree that you're right descriptively about why people diversify, but I think the interesting challenge is to understand under what conditions this behavior is optimal.
You're hinting at a bargaining microfoundation, where diversification can be justified as the solution arrived at by a group of agents bargaining over how to spend a shared pot of money. I think that's fascinating, and I would explore it further.
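As a toy version of that microfoundation (my own construction, purely illustrative): two agents share a pot B, each caring linearly about a different cause, and the generalised Nash bargaining solution splits the pot in proportion to their bargaining weights.

```latex
% Toy illustration; construction is mine, not from the original comment.
% Agent i's payoff is the amount x_i spent on its preferred cause; disagreement point (0, 0).
% Generalised Nash bargaining with weights w_1, w_2 > 0:
\[
\max_{x_1, x_2 \ge 0} \; x_1^{w_1} x_2^{w_2}
\quad \text{s.t.} \quad x_1 + x_2 = B
\]
% The first-order conditions give an interior split proportional to bargaining weights:
\[
x_i^{*} = \frac{w_i}{w_1 + w_2}\, B
\]
```

Each agent alone would spend the whole pot on its own cause; the interior split, i.e. diversification, comes entirely from the bargaining frame.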
Maximizing a linear objective always leads to a corner solution, so to get an optimal interior allocation you need to introduce nonlinearity somehow. Different approaches to this problem differ mainly in how they introduce and justify nonlinear utility functions. I can't see where the nonlinearity is introduced in your framework, which makes me suspect the credence-weighted allocation you derive is not actually the optimal allocation even under model uncertainty. Am I missing something?
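To spell out where the corner solution comes from, and what kind of assumption would rationalise a credence-weighted split, here is a standard textbook sketch (in my notation, not your framework's): with a linear objective the whole budget goes to one option, whereas a log objective delivers the credence-weighted allocation exactly.

```latex
% Standard textbook sketch; notation is mine, not from the post being discussed.
% Linear objective over allocations x_m, with credences p_m (summing to 1),
% per-dollar values v_m, and budget B:
\[
\max_{x \ge 0} \; \sum_m p_m v_m x_m
\quad \text{s.t.} \quad \sum_m x_m = B
\]
% Optimum: the entire budget goes to the single m with the largest p_m v_m
% (a corner solution); a credence-weighted split is not optimal here.
%
% A concave (here logarithmic) objective instead yields an interior optimum:
\[
\max_{x > 0} \; \sum_m p_m \log x_m
\quad \text{s.t.} \quad \sum_m x_m = B
\qquad \Longrightarrow \qquad
x_m^{*} = p_m B
\]
% The credence-weighted allocation is optimal only once the log has introduced the nonlinearity.
```

So, at least in this toy setup, the credence-weighted allocation needs something like the log step to be optimal; with a purely linear objective it isn't.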
I don't think of altruism as being completely selfless. Altruism is a drive to help other people. It exists within all of us to a greater or lesser extent, and it coexists with all of our other desires. Wanting things for yourself or for your loved ones is not opposed to altruism.
Once you accept that, and the point Henry makes that it isn't zero-sum, there doesn't seem to be any conflict.