My reasons for being vegan have little to do with the direct negative effects of factory farming. They are listed in roughly descending order of importance.
I basically think so, yes. I think it is mainly caused by, as you put it, "the amount of money from six-figure donations was nonetheless dwarfed by Open Philanthropy," and therefore people have scaled back or stopped since they don't think it's impactful. I basically don't think that's true, especially in the case of animal welfare, but also in terms of absolute impact, which is what actually matters, as opposed to relative impact. FWIW, this is the same (IMO, fallacious) argument "normies" make against donating: "my potential donations are so small compared to billionaires/governments/NGOs that I may as well just spend it on myself."
But yes, many of the people I know who consider themselves effective altruists, even committed effective altruists earning considerable salaries, donate relatively little, at least compared to what they could be donating.
I'll take a crack at some of these.
On 3, I basically don't think this matters. I hadn't considered it, largely because it seems super irrelevant. It matters far more whether any individuals are there who shouldn't be, or whether some individuals should be there who aren't. AFAICT without much digging, they all seem to be doing a fine job, and I don't see the need for a male/POC, though feel free to point out a reason. I think nearly nobody feels they have a problem to report and then, upon finding out that they would be reporting to a white woman, feels they can no longer do so. I would really hate to see EA become a place where we are constantly fretting about and questioning the demographic makeup of small EA organizations to make sure they have enough of every trait. It's a giant waste of time, energy, and other resources.
On 4, this is a risk with basically all nonprofit organizations. Do we feel AI safety organizations are exaggerating the problem? How about SWP? Do you think they exaggerate the number of shrimp or how likely shrimp are to be sentient? How about GiveWell? Should we be concerned about their cost-effectiveness analyses? It's always a question worth asking, but usually a concern would come with something more concrete, or a statistic. For example, the charity Will MacAskill talks about in the UK that helps a certain kind of Englishperson who is statistically ahead (though I can't remember if this is Scots or Irishmen or another group).
On 7, university groups are limited in resources. Very limited. Organizing is almost always done part-time while managing a full-time courseload and working on one's own development, among other things, so groups focus on their one comparative advantage, recruitment (since it would be difficult for others to do that), and outsource the training to other places (80,000 Hours, MATS, etc.).
On 10, good point, I would like to see some movement within EA to increase the intensity.
On 11, another good point. I'd love to read more about this.
On 12, another good point, but this is somewhat how networks work, unfortunately. There are just so many incentives for hubs to emerge and then to exert a bunch of gravity. It kinda started in the Bay Area, and for individual actors it nearly always makes sense to move there, which creates a feedback loop.
@Greg_Colbourn while I disagree on Pause AI and the beliefs that lead up to it, I want to commend you for:
1) Taking your beliefs seriously.
2) Actually donating significant amounts. I don't know how this sort of fell off as a thing EAs do.
Actually, I'm uncertain whether pausing AI is a good idea, and I wish the Pause AI people had a bit more uncertainty as well (both about their "p(doom)" and about whether pausing AI is a good policy). I look at people who have 90%+ p(doom) as, at the very least, uncalibrated, the same way I look at people who are dead certain that AI is going to go positively brilliantly and that we should be racing ahead as fast as possible. It's as if neither of them is doing any (or enough) reading of history.
I would submit that this kind of protesting, including (and especially) the example you posted, makes your cause seem dumb/unnuanced/ridiculous to onlookers who are indifferent or know little.
Lastly, I was just responding to the prompt "What are some criticisms of PauseAI?". It's not exactly the place for a "fair and balanced view," but also, I think it is far more important to critique your own side than the opposite side, since you speak the same language as your own team, so they will actually listen to you.
2/3. https://youtu.be/T-2IM9P6tOs?si=uDiJXEqq8UJ63Hy2 This video came up as the first search result when I searched "pause ai protest" on YouTube. In it, they chant things like "OpenAI sucks! Anthropic sucks! Mistral sucks!" and "Demis Hassabis, reckless! Dario Amodei, reckless!"
I agree that working on safety is a key moral priority. But working on safety looks a lot more like the things I linked to in #3. That's what doing work looks like.
This seems to be what a typical protest looks like; I've seen videos of others. I consider these to be juvenile and unserious, and unlikely to build the necessary bridges to accomplish outcomes. I'll let others form their own opinions.
I listed them in descending order of importance. I might be confused for one of those "hyper rationalist" types in many instances. I think rationalists undervalue cognitive dissonance. In my experience, a lot of rationalists just don't value non-human animals. Even rationalists behave in a much more "vibes"-based way than they'd have you believe. It really is hard to hold in your head both "it's okay to eat animals" and "we can avert tremendous amounts of suffering for hundreds of animals per dollar and have a moral compulsion to do so."
I also wouldn't call what I do virtue signaling. I never outright tell people, and I live in a very conservative part of the world.