Thanks, that's useful. I mostly agree with you, and mistakenly read the second bullet point as saying "work that opposes fascism should come from all sides of the political spectrum", which is something I agree with. I think the OP somewhat assumed that opposing fascism will look like 'work with your local anti-fascist network', but I expect much of it could look more like 'militarising Europe' (something the political left would typically oppose).
I don't think this quite works as a response to Alene's point. Many things are necessary/valuable preconditions for doing good. We need food, water, functioning infrastructure, preserving democracy, the internet, etc. The fact that something is a precondition for other work doesn't by itself make it a high-priority EA cause area.
If I apply the ITN framework to 'preserving democracy', I get something like:
Thanks for this post – I really would have liked to have had a filter like this in the past.
We estimate that The Vegan Filter could cut the convenience barrier roughly in half by addressing the “supermarket barrier,” one of the largest friction points for new vegans.
Can you say more about why you estimate this would halve the convenience barrier?
I expect the effect to be much smaller, maybe cutting the inconvenience of being vegan by 1-5%. The filter could still be worth the effort, of course :)
you are threatening not to care about a problem in the world because I made you uncomfortable
Is this directed at me? Because I didn't want to do this, and I don't see why you think I did this (like, I clearly never threatened not to care about a problem?).
If I take the way that you've used "you" in your post and in the comments here seriously, you've said a bunch of things that I believe are clearly not true:
you want me to beg you to please consider it as a favor [I don't want to do this]
I know your arguments in and out. [we've never talked about this together]
you don’t care about finding out what is right [I actually do]
Now it’s about working at an AI lab or wishing you could work at an AI lab. [I don't wish to do that]
I’m already beating you and you just define the game so that the conclusion of moving toward advocacy can’t win. [we've never played any games]
you’re tedious to deal with [this one is true, but it's incidental – also not sure how you'd know this]
I'm very sorry to hear about your dad. I hope those who would have voted for PauseAI in the donation election will consider donating to you directly.
On the points you raise, one thing stands out to me: you mention how hard it is to convince EAs that your arguments are right. But the way you've written this post (generalising about all EAs, making broad claims about their career goals, saying you're already beating them in arguments) suggests to me you're not very open to being convinced by them either. I find this sad, because I think that PauseAI is sitting in an important space (grassroots AI activism), and I'd hope the EA community & the PauseAI community could productively exchange ideas.
In cases where there is an established science or academic field or mainstream expert community, the default stance of people in EA should be nearly complete deference to expert opinion, with deference moderately decreasing only when people become properly educated (i.e., via formal education or a process approximating formal education) or credentialed in a subject.
If you took this seriously, in 2011 you'd have had no basis to trust GiveWell (quite new to charity evaluation, not strongly connected to the field, no credentials) over Charity Navigator (10 years of existence, considered mainstream experts, a CEO with 30 years of experience in the charity sector).
But you could have just looked at their websites (GiveWell's, Charity Navigator's) and tried to figure out for yourself whether one of these organisations is better at evaluating charities.
I am extremely skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study, since this would seem to imply that individual or group possesses preternatural abilities that just aren't realistic given what we know about human limitations.
This feels like a motte ("skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study") and bailey (near-complete deference to expert opinion, decreasing only with formal education or credentials). GiveWell obviously never claimed to be experts in much beyond GHW charity evaluation.
I agree it would be bad if the OpenAI Foundation were still giving under 5% per year several years from now. But I don’t think 'they should spend 5%+ in year one' follows.
Directing billions well is really hard, especially for a new foundation. Coefficient Giving says it directed over $4 billion from 2014 to mid-2025, and that 2025 was the first year it directed more than $1 billion. Their 'endowment' is much smaller than OAF's (~10x smaller?), but this still suggests that allocating money well at that scale is genuinely hard. I wouldn't call a new foundation planning to deploy $1 billion in its first year "conservative".
What I'd most like to see is OAF committing to aggressive, public ramp-up targets, maybe something like reaching 5% of assets by 2028.
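As a rough illustration of what such a ramp-up implies in dollar terms (all numbers here are hypothetical assumptions for the sketch – the endowment size and the intermediate percentages are mine, not OAF's):

```python
# Hypothetical sketch: annual payout targets under a ramp-up that
# reaches 5% of assets by 2028. The $25B endowment figure and the
# intermediate percentages are illustrative assumptions only.

def ramp_targets(endowment_usd_b: float, schedule: dict) -> dict:
    """Return dollar payout targets (in $B) for each (year, fraction) pair."""
    return {year: endowment_usd_b * frac for year, frac in schedule.items()}

# Assumed schedule: fraction of assets paid out per year
schedule = {2026: 0.01, 2027: 0.03, 2028: 0.05}
targets = ramp_targets(25, schedule)

for year, amount in sorted(targets.items()):
    print(f"{year}: ${amount:.2f}B")
```

Even on these made-up numbers, the 2028 target alone would be over a billion dollars a year, which is roughly the scale Coefficient Giving only reached after a decade – that's the sense in which a public ramp-up commitment seems more meaningful than a year-one spending level.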