MarcusAbramovitch


I listed them in descending order of importance. I might be confused for one of those "hyper-rationalist" types in many instances. I think rationalists undervalue cognitive dissonance. In my experience, a lot of rationalists just don't value non-human animals. Even rationalists behave in a much more "vibes"-based way than they'd have you believe. It really is hard to hold in your head both "it's okay to eat animals" and "we can avert tremendous amounts of suffering for hundreds of animals per dollar and have a moral obligation to do so".

I also wouldn't call what I do virtue signaling. I never bring it up outright, and I live in a very conservative part of the world.

My reasons for being vegan have little to do with the direct negative effects of factory farming. They are, in roughly descending order of importance:

  1. A constant reminder to myself that non-human animals matter. My current day-to-day activities give nearly no reason to think about the fact that non-human animals have moral worth. This is my 2-5 times per day reminder of this fact.
  2. Reduction of cognitive dissonance. It took about a year of being vegan to begin to appreciate, viscerally, that animals had moral worth. It's hard to quantify this but it is tough to think that animals have moral worth when you eat them a few times a day. This has flow-through effects on donations, cause prioritization, etc.
  3. The effect it has on others. I'm not a pushy vegan at all. I hardly tell people but every now and then people notice and ask questions about it.
  4. Solidarity with non-EAA animal welfare people. For better or worse, outside of EA, this seems to be the price of entry to be seen as taking the issue seriously. I want to be able to convince them to donate to THL over a pet shelter, to SWP over dog rescue charities, and to the EA AWF over Pets for Vets. They are more likely to listen to me when they see me as one of them who just happens to be doing the math.
  5. Reducing the daily suffering that I cause. It pales in comparison to my yearly donations, but it's still something: it is me living in accordance with my values and causing less suffering than I otherwise would.

I basically think so, yes. I think it is mainly caused by, as you put it, "the amount of money from six-figure donations was nonetheless dwarfed by Open Philanthropy", and therefore people have scaled back or stopped since they don't think it's impactful. I basically don't think that's true, especially in the case of animal welfare, but also just in terms of absolute impact, which is what actually matters, as opposed to relative impact. FWIW, this is the same (IMO fallacious) argument "normies" make against donating: "my potential donations are so small compared to billionaires/governments/NGOs that I may as well just spend it on myself".

But yes, many of the people I know who consider themselves effective altruists, even committed effective altruists earning considerable salaries, donate relatively little, at least compared to what they could be donating.

I'll take a crack at some of these.

On 3, I basically don't think this matters. I hadn't considered it, largely because it seems super irrelevant. It matters far more whether any individuals shouldn't be there, or whether some individuals who aren't there should be. AFAICT, without much digging, they all seem to be doing a fine job, and I don't see the need for a man or a person of color, though feel free to point out a reason. I think nearly nobody feels they have a problem to report and then, upon finding out that they would be reporting to a white woman, feels they can no longer do so. I would really hate to see EA become a place where we are constantly fretting about and questioning the demographic makeup of small EA organizations to make sure they have enough of every trait. It's a giant waste of time, energy, and other resources.

On 4, this is a risk with basically all nonprofit organizations. Do we feel AI safety organizations are exaggerating the problem? How about SWP? Do you think they exaggerate the number of shrimp, or how likely shrimp are to be sentient? How about GiveWell? Should we be concerned about their cost-effectiveness analyses? It's always a question worth asking, but usually a concern would come with something more concrete, or a statistic. For example, the UK charity Will MacAskill talks about that helps a certain kind of Englishperson who is statistically ahead (though I can't remember if this is Scots or Irishmen or another group).

On 7, university groups are limited in resources. Very limited. Organizing is almost always done part-time by students managing a full-time courseload and working on their own development, among other things. So groups focus on their one comparative advantage, recruitment (since it would be difficult for others to do that), and outsource training to other places (80k, MATS, etc.).

On 10, good point, I would like to see some movement within EA to increase the intensity.

On 11, another good point. I'd love to read more about this.

On 12, another good point, but this is somewhat how networks work, unfortunately. There are just so many incentives for hubs to emerge and then develop a lot of gravity. It started in the Bay Area, and for individual actors it nearly always makes sense to move there, and then there is a feedback loop.

@Greg_Colbourn while I disagree on Pause AI and the beliefs that lead up to it, I want to commend you for:
1) Taking your beliefs seriously.

2) Actually donating significant amounts. I don't know how this sort of fell off as a thing EAs do.

Unfortunately, a lot of the organizations listed are very cheap. For example, I don't want to be too confident, but I think Arthropoda is almost certainly going to have <$200k.

Actually, I'm uncertain whether pausing AI is a good idea, and I wish the Pause AI people had a bit more uncertainty as well (both about their "p(doom)" and about whether pausing AI is good policy). I look at people who have 90%+ p(doom) as, at the very least, uncalibrated, the same way I look at people who are dead certain that AI is going to go positively brilliantly and that we should be racing ahead as fast as possible. It's as if neither of them is doing any (or enough) reading of history. In the case of my tribe

I would submit that this kind of protesting, including (and especially) the example you posted, makes your cause seem dumb, unnuanced, and ridiculous to onlookers who are indifferent or know little.

Lastly, I was just responding to the prompt "What are some criticisms of PauseAI?". It's not exactly the place for a "fair and balanced view", but I also think it is far more important to critique your own side than the other side, since you speak the same language as your own team, so they will actually listen to you.

Correct, I may have misremembered. The things they definitely say, at least in this video, are "OpenAI sucks! Anthropic sucks! Mistral sucks!" and "Demis Hassabis, reckless! Dario Amodei, reckless!"

I would submit that I am at the very least directionally correct.

  1. I don't think there is a need for me to show the relationship here.

2/3. https://youtu.be/T-2IM9P6tOs?si=uDiJXEqq8UJ63Hy2 came up as the first search result when I searched "pause ai protest" on YouTube. In it, they chant things like "OpenAI sucks! Anthropic sucks! Mistral sucks!" and "Demis Hassabis, reckless! Dario Amodei, reckless!"

I agree that working on safety is a key moral priority. But working on safety looks a lot more like the things I linked to in #3. That's what doing work looks like.

This seems to be what a typical protest looks like. I've seen videos of others. I consider these to be juvenile and unserious, and unlikely to build the necessary bridges to accomplish outcomes. I'll let others form their own opinions.

  1. Pausing AI development is not a good policy to strive for. Nearly all regulations will slow down AI progress. That's what regulation does by default. It makes you slow down by having to do other stuff instead of just going forward. But a pause gets no additional benefit whereas most other regulation gets additional benefit (like model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.) I don't know what the ideal policies are but it doesn't seem like a "pause" with no other asks is the best one.
  2. Pausing AI development for any meaningful amount of time is incredibly unlikely to occur. They claim they are shifting the Overton window, but frankly, they mainly seem to do a bunch of protesting where they do things like call Sam Altman and Dario Amodei evil.
  3. Pause AI, the organization, does, frankly, juvenile stunts that make EA/AI safety advocates look less serious. Screaming that people are evil is extremely unnuanced, juvenile, and very unlikely to build the necessary bridges to really accomplish things. It makes us look like idiots. I think EAs too often prefer to do research from their laptops as opposed to getting out into the real world and doing things; but doing things doesn't just mean protesting. It means crafting legislation like SB 1047. It means increasing the supply of mech interp researchers by training them. It means lobbying for safety standards on AI models.
  4. Pause AI's premise is very "doomy" and only makes sense if you have extremely high AI extinction probabilities and the only way to prevent extinction is an indefinite pause to AI progress. Most people (including those inside of EA) have far less confidence in how any particular AI path will play out and are far less confident in what will/won't work and what good policies are. The Pause AI movement is very "soldier" mindset and not "scout" mindset.