I agree that we don't need to (and usually don't) play those zero-sum games. The problem is that those zero-sum games are the mechanism for price discovery, and we don't have market price signals in the charity world.
I agree with your point about diversification reducing risk. This holds for empirical uncertainty, and sometimes for value uncertainty. If your utility function is concave (i.e. you're risk-averse), reducing risk increases expected utility; if it isn't, it doesn't.
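To make that concrete, here's a toy sketch of my own (not from the original comment) using Jensen's inequality: with a concave utility function like log, a certain outcome yields higher expected utility than a gamble with the same expected value, so risk reduction helps.

```python
import math

def expected_utility(outcomes, probs, u):
    # Expected utility E[u(X)] over a discrete set of outcomes.
    return sum(p * u(x) for x, p in zip(outcomes, probs))

u = math.log  # concave: diminishing marginal utility, i.e. risk aversion

# Two prospects with the same mean (100), different variance.
risky = expected_utility([50.0, 150.0], [0.5, 0.5], u)
safe = expected_utility([100.0], [1.0], u)

# Jensen's inequality for concave u: E[u(X)] <= u(E[X]),
# so the diversified/certain prospect has higher expected utility.
```

With a convex utility function the inequality flips and the gamble would be preferred, which is the "if not, then no" case.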
I don't see how this could work.
Investing in an index benefits from prices being good proxies for expected returns, because bringing information to the market is rewarded.
In a liquid market, buying pushes prices up, and selling pushes them down, so if something is mispriced it can be arbitraged away for a profit.
In charity, this does not happen. If research shows that charity A is 10x as effective as charity B (even with error bars), donors don't switch until the "prices" (i.e. impact per unit of funding) equalize, so the price signal that makes index investing work is absent.
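Here's a toy model of my own (not from the original comment) of what that equalization would look like if charitable "arbitrage" existed. It assumes diminishing marginal returns to funding and models donors repeatedly moving small grants from the lower-marginal-impact charity to the higher one, until impact per marginal dollar equalizes:

```python
# Toy model: two charities with diminishing marginal returns to funding.
# All numbers are illustrative assumptions, not real charity data.

def marginal_impact(base, funding):
    # Impact of the next dollar donated; falls as funding grows.
    return base / (1.0 + funding)

base = {"A": 10.0, "B": 1.0}        # A starts out ~10x as effective as B
funding = {"A": 100.0, "B": 100.0}  # equal funding to begin with

for _ in range(100_000):
    mi = {k: marginal_impact(base[k], funding[k]) for k in funding}
    hi = max(mi, key=mi.get)
    lo = min(mi, key=mi.get)
    if mi[hi] - mi[lo] < 1e-4 or funding[lo] <= 0:
        break
    # "Arbitrage": a donor moves a small grant to the better opportunity.
    funding[lo] -= 0.01
    funding[hi] += 0.01

# At the end, marginal impact per dollar is (nearly) equal across charities --
# the price signal that liquid markets produce and charity currently lacks.
```

Without that reallocation loop (the real-world case), the 10x gap in marginal impact just persists, which is the point of the comment above.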
Hi, welcome to the EA Forum. It's nice to see philosophical ideas that don't come from the dominant tradition here.
Your argument rests on the premise that every human has liangzhi (innate moral knowing) but large models don't.
I'm skeptical of that, because the innate sense of right and wrong can be culture-dependent, and there are people with neurological and psychological conditions who don't share that experience.
How does that fit into your worldview?
Hey, I like your progressive pledge tool. How hard would it be to include places outside the US? And more currencies?
I sometimes check this site for cost-of-living comparisons around the world. It's not perfect, but it gives you some idea, at least for big cities:
https://www.numbeo.com/cost-of-living/
At the same time, the good thing about 10% is that it's a much stronger Schelling point than a progressive scale, so I suppose it's better for signaling.
For me it's even more than what you say. Even for most people working on AI or bio risk, the threats usually feel quite real on a scale of decades, and they could be personally affected. The numbers may change, but I think for most people working in EA cause areas, their work is well justified without appealing to impartiality (radical empathy would be enough, and it's less demanding) or to longtermism.
Strongly agree.
For me, the discussion of impartiality (first day of the intro program) and longtermism (which isn't necessary for many of the suggested action points) were moments of doubt. So was 80k narrowing in on transformative AI and alienating people who don't share that worldview.
Somehow I stuck around anyway.
But I think many of the things EA proposes don't require people to buy the whole package, and we are missing out on impact by leading with the heavy philosophy.
Non-American here.
I read that sentence as rhetorical, like "doing whatever is necessary", and I don't see it implying that "defending America" is necessarily even good.
However, if your read is the right one, then I find it off-putting as well.
I would appreciate @Mjreard clarifying what the intent behind that was.
Actively managed mutual funds underperform in an environment where arbitrage exists and prices are at least close to efficient.
Indices don't select the "best-performing" companies; they usually select the "biggest" ones. Here the analogy to the charity world breaks down.