Fantastic post!
I'm trying to put myself in the shoes of someone who is new around here, and I would appreciate some definitions or links for the acronyms (GHD, AIS) and for the meat eater problem. Maybe for others as well; I haven't been thorough.
Could you please update the post? It would be even better, in my opinion.
I would be very surprised if [neuron count + nociceptive capacity as moral weight] were standard EA assumptions. I haven't seen this among the people I know or among the major funders, who seem more pluralistic to me.
My main critique of this post is that it makes several different claims, and it's not very clear which arguments support which conclusions. I think your message would be clearer after a bit of rewriting, and then it would be easier to have an object-level discussion.
Hey, kudos to you for writing a longform about this. I have talked to some self-identified negative utilitarians, and I think this is a discussion worth having.
I think this post is mixing two different claims.
Critiquing “minimize suffering as the only terminal value → extinction is optimal” makes sense.
But that doesn’t automatically imply that some suffering-reduction interventions (like shrimp stunning) are not worth it.
You can reject suffering-minimization-as-everything and still think that large amounts of probable suffering in simple systems matter at the margin.
Also I appreciated the discussion of depth, but have nothing to say about it here.
I would appreciate:
 - Any negative utilitarian or person knowledgeable about negative utilitarianism commenting on why NU doesn't necessarily recommend extinction.
 - The OP clarifying the post by making the claims more explicit.
I like your post, especially the vibe of it.
At the same time, I have a hard time understanding what "quit EA" even means:
Stop saying you're EA? I guess that's fine.
Stop trying to improve the world using reason and evidence? Very sad. If so, please read this post fifty times over; I hope it convinces you otherwise.
My answer is largely based on my view that short-timeline AI risk people are more dominant in the discourse than the credence I give them warrants; YMMV.