Epistemic Status
Written in a hurry while frustrated. I wanted to capture my feelings in the moment rather than sanitise them later, when I'm of clearer mind.
Context
This is mostly a reply to these comments:
1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.
Agree.
Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more besides increasing the signals. Other suggestions of things to do are:
- Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
- Zero tolerance for funding bad people. Sometimes an org might be tempted to fund or hire someone they know, or have reason to expect, is a bad person or primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them as we can pay them for impact. I think there is a case to be heavily risk averse here and avoid hiring or funding such people.
A Little Personal Background
I've been involved in the rationalist community since 2017 and joined EA via social osmosis (I rarely post on the forum and am mostly active on social media [currently Twitter]). I was especially interested in AI risk and x-risk mitigation more generally, and still engage mostly with the existential security parts of EA.
Currently, my main objective in life is to help create a much brighter future for humanity (that is, I am most motivated by the prospect of creating a radically better world as opposed to securing our current one from catastrophe). I believe strongly that such a future is possible (nothing in the fundamental laws prohibits it), and effective altruism seems like the movement for me to realise this goal.
I am currently training (learning maths, will start a CS Masters this autumn and hopefully a PhD afterwards) to pursue a career as an alignment researcher.
I'm a bit worried that people like me are not welcome in EA.
Motivations
Since my early to mid teens, I've always wanted to have a profound impact on the world. It was how I came to terms with mortality. I felt like people like Newton, Einstein, etc. were immortalised by their contributions to humanity. Generations after their deaths, young children learn about their contributions in science class.
I wanted that. To make a difference. To leave a legacy behind that would immortalise me. I had plans for the world (these changed as I grew up, but I never permanently let go of my desire to have an impact).
Nowadays, it's mostly not a mortality thing (I aspire to [greatly] extended life), but the core idea of "having an impact" persists. Even if we cure aging, I wouldn't be satisfied with my life if it were insignificant, if I weren't even a footnote in the story of human civilisation; I want to be the kind of person who moves the world.
Argument
Purity Tests Aren't Effective
I want honour and glory, status, and prestige. I am not a particularly kind, generous, selfless, or altruistic person. I'm not vegan, and I'd only stop eating meat when it becomes convenient to do so. I want to be affluent and would enjoy (significant) material comfort. Nonetheless, I feel that I am very deeply committed to making the world a much better place; altruism just isn't a salient factor driving me.
Reading @weeatquince's comment, I realised I basically match their description of "bad people". It was both surprising and frustrating?
It feels like a purity test that is not that useful/helpful/valuable? I don't think I'm any less committed to improving the world just because my motives are primarily selfish? And I'm not sure what benefit the extra requirement for altruism adds? If what you care about is deep ideological commitment to improving the world, then things like veganism, frugality, etc. aren't primarily selecting for what you ostensibly care about, but instead for people who buy into a particular moral framework.
I don't think these purity tests are actually a strong signal of "wants to improve the world". Many people who want to improve the world aren't vegan or frugal. If EA has an idiosyncratic version of what improving the world means, such that enjoying material comfort is incompatible with improving the world, then that should be made (much) clearer? My idea of a brighter world involves much greater human flourishing (and thus much greater material comfort).
Status Seeking Isn't Immoral
Desiring status is a completely normal human motivation, and status seeking is ordinary human psychology (higher-status partners are better able to take care of their progeny, and thus make better mates). Excluding people who want more status excludes a lot of ambitious/determined people; are the potential benefits worth it? Ambitious/determined people seem like valuable people to have if you want to improve the world?
Separately from the question of how effective this is for the movement's ostensible goals, I find the framing of "bad people" problematic. Painting completely normal human behaviour as "immoral" seems unwise. I would expect such normal psychology, when directed to productive purposes, to be encouraged rather than condemned.
I guess it would be a problem if I tried to get involved in animal welfare but was a profligate meat eater, but that isn't the case (I want to work on AI safety [and if that goes well, on digital minds]). I don't think my meat eating makes me any less suited to those tasks.
Conclusions
I guess this is an attempt to express my frustration with what I consider to be counterproductive purity tests, and to ask whether the EA community is interested in people like me.
- Are people selfishly motivated to improve the world (or otherwise not "pure" [meat eaters, lavish spenders, etc.]) not welcome in EA?
- Should such people not be funded?
I think a lot of people miss the idea that "being an EA" is a different thing from being "EA adjacent"/"in the EA community"/ "working for an EA organization" etc. I am saying this as someone who is close to the EA community, who has an enormous amount of intellectual affinity, but does not identify as an EA. If the difference between the EA label and the EA community is already clear to you, then I apologize for beating a dead horse.
It seems from your description of yourself like you're actually not an Effective Altruist in the sense of holding a significantly consequentialist worldview that one tries to square with one's choices (once again, neither am I). From your post, the main way that I see in which your worldview deviates from EA is that, while lots of EAs are status-motivated, your worldview seems to include the idea that typical levels of status-based and selfish motivations aren't a cognitive error that should be pushed against.
I think that's great! You have a different philosophical outlook (from the very little I can see in this post, perhaps it's a little close to the more pro-market and pro-self-interest view of people like Zvi, who everyone I know in the community respects immensely). I think that if people call this "evil" or "being a bad person", they are being narrow-minded and harmful to the EA cause. But I also don't think that people like you (and me) who love the EA community and goals but have a personal philosophy that deviates significantly from the EA core should call ourselves EAs, any more than a heterosexual person who has lots of gay friends and works for a gay rights organization should call themselves LGBT. There is a core meaning to being an "effective altruist", and you and I don't meet it.
No two people's philosophies are fully aligned, and even the most modal EA working in the most canonical EA organization will end up doing some things that feel "corporate" or suboptimal, or that matter to other people but not to them. If you work for an EA org, you might experience some of that because of your philosophical differences, but as long as you're intellectually honest with yourself and others, and able to still do the best you can (and not try to secretly take your project in a mission-unaligned direction) then I am sure everyone would have a great experience.
My guess is that most EA organizations would love to hire/fund someone with your outlook (and what some of the posts you got upset with are worried about are people who are genuinely unaligned/deceptive and want to abuse the funding and status of the organization for personal gain). However, if you do come into an EA org and do your best, but people decline to work with you because of your choices or beliefs, I think that would be a serious organizational problem and evidence of harmful cultishness/"evaporative cooling of group beliefs".
I plan to seek status/glory through making the world a better place.
That is, my desire for status/prestige/impact/glory is channelled through an effective-altruism-like framework.
"I want to move the world" transformed into "I want to make the world much better".
"I want to have a large impact" became "I want to have a large impact on creating a brighter future".
I joined the rationalist community at a really impressionable stage. My desire for impact/prestige/status, etc. persisted, but it was directed at making the world better.