Vanessa

566 karma · 42 comments
I intentionally stayed meta because I didn't especially want to start an argument about EA premises. Concretely, my disagreements with EA are that I don't believe in any of:

  • Moral realism
  • Radical impartiality
  • Utilitarianism
  • Longtermism

I view improving the world as an enterprise of collective rationality / cooperation, not a moral imperative (I don't believe in moral imperatives). I care much more about the people (and other creatures) closer to me in the social graph, but I also want to cooperate with other people for mutual gain, and in particular endorse/promote social norms that create incentives beneficial for most of everyone (e.g. reward people for helping others / improving the world).

Why I changed some of my views in this particular direction is a long story, but it involved a lot of reflection and thinking about my preferences on different levels of abstraction (from "how do I feel about such-and-such particular situation" to "what could an abstract mathematical formalization of my preferences look like").

I'm a vegan existential AI safety researcher. I once identified as EA, now as EA-adjacent. So, superficially, I'm part of the problem you describe. However, my reasons for not identifying as EA anymore have nothing to do with FTX or other PR concerns. It's not a "mask". I just have philosophical disagreements with EA, coming out of my own personal growth, that seem sufficiently significant to be acknowledged.

To be clear, I'm very grateful to EA donors and orgs for supporting my research. I think that both EAs in AI safety and EAs more broadly are doing tonnes of good, for which they genuinely deserve my and most of everyone's gratitude and praise.

At the same time, it's a perfectly legitimate personal choice not to identify as EA. Moreover, the case for the importance of AI X-safety doesn't rest on EA assumptions (some of which I reject), but is defensible much more broadly. And there is no reason that every individual or organization working on AI X-safety must identify as EA or recruit only EA-aligned personnel, even if they have a history with EA, funding from EA, etc.

Let's keep cooperating and accomplishing great things, but let's also acknowledge each other's right to ideological pluralism.

Thank you for this update.

I'm curious what is going to happen to EA Funds specifically? Is it going to be an independent entity that continues to function more or less in the same way? Or something else entirely?

Here's a human translation, although ChatGPT's is suspiciously similar.

This is pretty sad and also surprising. In your opinion, why do so many people come to an animal welfare conference without (apparently) being really interested in helping animals? If they don't care about animals, what are they doing there?

Is there going to be a post-mortem including an explanation for the decision to sell?

Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)

The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCR is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCR is building a broader coalition in a very tangible sense, not just some vague "PR".

I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.

I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.) 

IMO you should be thinking about things like how to do better work, but in the frame of "this is something I enjoy / consider important" rather than in the frame of "because otherwise I'm not worthy". It's also legitimate to want other people to appreciate and respect you for your work (I definitely have a strong desire for that), but IMO here too the right frame is "this is something I want" rather than "this is something that's necessary for me to be worth something".

I strongly disagree that utilitarianism isn't a sound moral philosophy, and I don't understand the black-and-white distinction between longtermism and us not all dying. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk.

I don't know if it's a "black and white distinction", but surely there's a difference between:

  • Existential risk is bad because the future could have a zillion people, so their combined moral weight dominates all other considerations.
  • Existential risk is bad because (i) I personally am going to die, (ii) my children are going to die, (iii) everyone I love is going to die, (iv) everyone I know is going to die, and also (v) humanity is not going to have a future (regardless of the number of people in it).

For example, something that "only" kills 99.99% of the population would be comparably bad by my standards (because i-iv still apply), whereas it would be way less bad by longtermist standards. Even something that "only" kills (say) everyone I know and everyone they know would be comparably bad for me, whereas utilitarianism would judge it a mere blip in comparison to human extinction.

Out of interest, if you're neither an effective altruist nor a longtermist, then what do you call yourself?

I call myself "Vanessa" :) Keep your identity small and all that. If you mean, do I have a name for my moral philosophy then... not really. We can call it "antirealist contractarianism", I guess? I'm not that good at academic philosophy.
