I think about technological progress, global development, and AI's economic impacts. I write about these topics on my blog, Beyond Imitation.
Online: much of the content on the EA forum is quite specialized. In principle, I absolutely love that people are writing 10,000-word reports on shrimp sentience and posting them on the forum. That is what actually doing the work looks like - rather than speculating at a high level about whether shrimp could suffer and what that would mean for us if so, you go out and actually try to push our knowledge forward in detail. However, I have absolutely no desire to read it.
I have long had the opposite criticism: that almost everything that gets high engagement on the Forum is lowest-common-denominator content, usually community-related posts or something about current events, rather than technical writing that has high signal and helps us make progress on a topic. So in a funny way, I have come to the same conclusion as you:
I won’t sugar-coat it: the main reason I don’t engage so much with EA these days is that I find it boring.
but for the opposite reason.
I don't think of altruism as being completely selfless. Altruism is a drive to help other people. It exists within all of us to a greater or lesser extent, and it coexists with all of our other desires. Wanting things for yourself or for your loved ones is not opposed to altruism.
When you accept that, and the point Henry makes that it isn't zero-sum, there doesn't seem to be any conflict.
This is a valid statement, but non-responsive to the actual post. The argument is that there is intuitive appeal in having a utility function with a discontinuity at zero (i.e. a jump in disutility from causing harm), and ~standard EV maximisation does not accommodate that intuition. That is a totally separate normative claim from arguing that we should encode diminishing marginal utility.
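One way to write down the intuition being discussed (the specific functional form here is my own illustration, not something from the post) is a utility function over an outcome $o$, where $o < 0$ means the agent has caused harm, with a fixed penalty $c$ for crossing zero:

```latex
U(o) =
\begin{cases}
  u(o)       & \text{if } o \ge 0 \\
  u(o) - c   & \text{if } o < 0
\end{cases}
\qquad c > 0
```

The jump of size $c$ at $o = 0$ is exactly the "discontinuity at zero" in question, and it is distinct from making $u$ concave, which is how diminishing marginal utility would be encoded.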
I'm obviously missing something trivial here, but also I find it hard to buy "limited org capacity"-type explanations for GW in particular, given total funding moved, how long they've worked, their leading role in the grantmaking ecosystem, etc.
This should be very easy for you to buy! The opportunity cost of lookbacks is investigating new grants, and it's not obvious that lookbacks are the right way to spend limited research capacity. It's worth remembering that GW only has around 30 researchers and makes grants in a lot of areas. And while they are a leading EA grantmaker, it's only recently that their giving has scaled up enough to make them a notable player in the total development ecosystem.
Skeptic says "longtermism is false because premises X don't hold in case Y." Defender says "maybe X doesn't hold for Y, but it holds for case Z, so longtermism is true. And also Y is better than Z so we prioritize Y."
What is being proven here? The prevailing practice of longtermism (AI risk reduction) is being defended by a case whose premises are meaningfully different from the prevailing practice. It feels like a motte-and-bailey.
It's clearly not the case that asteroid monitoring is the only or even a highly prioritised intervention among longtermists. That makes it uncompelling to defend longtermism with an argument in which the specific case of asteroid monitoring is a crux.
If your argument is true, why don't longtermists actually give a dollar to asteroid monitoring efforts in every decision situation involving where to give a dollar?
I certainly agree that you're right about why people diversify, but I think the interesting challenge is to try to understand under what conditions this behavior is optimal.
You're hinting at a bargaining microfoundation, where diversification can be justified as the solution arrived at by a group of agents bargaining over how to spend a shared pot of money. I think that's fascinating and I would explore that more.
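To make that concrete, here is a minimal sketch of the idea, entirely my own construction rather than anything from the thread: two agents with different preferred charities bargain over a shared pot of 1, and the Nash bargaining solution (maximizing the product of gains over the disagreement point of donating nothing) is found by grid search.

```python
# Sketch of a bargaining microfoundation for diversification (illustrative
# assumption: each agent only values giving to their own preferred charity,
# with diminishing returns).
import math

def nash_split(u1, u2, steps=10_000):
    """Grid-search the share x of the pot given to charity 1 that
    maximizes the Nash product u1(x) * u2(1 - x), with disagreement
    utilities of zero."""
    best_x, best_p = 0.0, -1.0
    for i in range(steps + 1):
        x = i / steps
        p = u1(x) * u2(1 - x)
        if p > best_p:
            best_x, best_p = x, p
    return best_x

# Square-root utilities encode diminishing returns for each agent.
x = nash_split(math.sqrt, math.sqrt)
print(round(x, 3))  # → 0.5
```

The point of the toy example: each agent alone would put the whole pot into their own cause, yet the bargained outcome splits it 50/50, so diversification emerges as the group-level solution even though no individual prefers it.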
Here is an argument for why that might be better:
But if you want the descriptive story of why it ended up this way, it goes more like this: