
Someone suggested I name this post "What We Owe The Community": I think it's a great title, but I didn't dare use it...

Views and mistakes my own.

EDIT: I'd like to emphasize the choice of the word "affiliation" as opposed to "identity", as I do think it's important not to make it about identity.

What I believe

I think owning our EA affiliation—how we are inspired by the movement and the community—is net positive for the world and our careers. If more people were more outspoken about their alignment with EA principles and proximity to the EA community, we would all be better off. While there may be legitimate reasons for some individuals not to publicly identify as part of the EA movement, widespread silence creates a “free-rider problem”: if too many people passively benefit from EA without openly supporting it, the overall movement and community suffer.

Why I think more people should own their EA affiliation publicly

I understand why one might not, but I’d probably not support that choice in most cases—I say most cases, because some cases are exceptional. I’m also not necessarily saying that one needs to shout it everywhere, but simply be transparent about it.

The risks

These are the risks—actual or perceived—that I mostly hear about when people choose not to publicly own their EA identity:

  • People don’t want to talk to you / take you seriously because you are affiliated with EA
  • You won’t get some career opportunities because you are affiliated with EA

And I get it. It’s scary to think two letters could shut some doors for potentially incorrect reasons.

A prisoner’s dilemma

But I think it hurts the movement. If people inspired or influenced by EA are not open about it, their positive impact likely won’t get credited to EA. In principle, I wouldn’t mind that. But it means that what EA gets known for will mostly be negative events: during scandals, everyone looks for people to blame and draws causal paths from their various affiliations to the bad things that happened. It’s much less tempting to dig out those causal paths when the overall story is positive. I believe this creates a feedback loop that hurts the capacity of people inspired by the EA movement to have a positive impact on the world.
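To make the game-theoretic structure concrete, here is a minimal sketch with purely hypothetical payoffs (the COST and BENEFIT numbers are my illustrative assumptions, not anything measured): concealing one's affiliation is individually dominant, yet everyone concealing leaves everyone worse off than everyone owning it.

```python
# Minimal prisoner's-dilemma sketch of the affiliation choice.
# COST and BENEFIT are illustrative assumptions, not real figures.
# Each person either publicly "owns" their EA affiliation or conceals it.

COST = 3     # assumed personal PR cost of publicly owning the affiliation
BENEFIT = 2  # assumed reputational benefit to each person, per public owner

def payoff(me_owns: bool, other_owns: bool) -> int:
    """My payoff: shared reputational benefit minus my personal cost."""
    owners = int(me_owns) + int(other_owns)
    return BENEFIT * owners - (COST if me_owns else 0)

for me in (True, False):
    for other in (True, False):
        print(f"me_owns={me}, other_owns={other} -> my payoff: {payoff(me, other)}")

# Whatever the other person does, concealing pays 1 more for me
# (2 > 1 when they own; 0 > -1 when they conceal), yet mutual
# concealment (0 each) is worse than mutual owning (1 each):
# exactly the free-rider structure described above.
```

Under these assumed numbers, concealing strictly dominates for each individual while making the community collectively worse off, which is why a shift in the norm, rather than individual virtue alone, is the natural remedy.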

Tipping points

It seems to me that currently, not publicly affiliating with EA is the default; it’s normal, and there’s no perceived harm in doing so. I’d like that norm to change. In Change: How to Make Big Things Happen, Damon Centola describes Rosabeth Moss Kanter’s concept of “tokens”, here in the case of women:

[Rosabeth Moss Kanter] identified several telltale signs of organizations in which the number of women was below the hypothesized tipping point. Most notably, women in these organizations occupied a “token” role. They were conspicuous at meetings and in conferences, and as such were regarded by their male colleagues as representatives of their gender. As tokens, their behavior was taken to be emblematic of all women generally. They became symbols of what women could do and how they were expected to act.

We need more people to own their affiliation, to represent the true diversity of the EA identity and avoid tokenization.

On transparency

On a personal level, I think transparency is rewarded in due time. On a community level, one gets to be part of a diverse pool of EAs, which contributes to showing the diversity of the community: its myriad groups and individuals, each with their own vision of what making the world a better place means. This would solve the token problem.

An OpenPhil-funded AI governance organization I am in contact with chose long ago to always be transparent about its founders’ EA affiliation and its funding sources. In the long term, they have benefited from the high integrity they demonstrated by not leaving out or reframing those details. After the OpenAI debacle (Sam Altman being unsuccessfully fired by the board, an event that was linked to people from the EA movement), I asked them whether their EA affiliation had affected the trust others placed in them.

My expectation was that their reputation had been hurt by the event being associated with EA and, by extension, with this organization. What the leadership described was quite different: people actually came to them to ask how to make sense of the events. Their non-EA contacts easily differentiated between them and Sam Altman (as well as the other Sam). I’d guess their contacts’ trust in the org is high precisely because the org has always been transparent about something that is “courageous” to own, and because those contacts wanted to understand why people they trust still stand by something that seems so controversial. This does require the org not to back down or get scared when such things happen. But overall, they don’t see their affiliation as a sacrifice they have to navigate; it’s just who they are. It takes some additional work, but not too much for the leadership, just a bit of mental resilience.

I want to see more of that.

Where I come from

Event branding

My current job is being a full-time community builder for EA Switzerland. In that capacity, I organize events. Several times, my requests have been declined for public relations (PR) reasons, whether to secure a speaker[1] or even just to get an event shared. From my experience, this seems particularly true in the policy world.

I could set up a separate project (under the EA Switzerland umbrella but without the EA name) to use for outreach and events, which would probably solve the above problems, but I don’t think that’s what I / EA Switzerland should be doing.

Getting hired

When I was considering whether or not to take my current (EA community-building) job, I felt that the strongest potential downside was that I’d have to make my EA affiliation public—it would be my full-time job, something I’d have to justify on my CV. This might be a bad move if the movement were to fall into disgrace, if I were to dissociate from it at some point, or simply because people might have negative impressions of it. But I don’t stand by that line of reasoning anymore. I know why I’m here, and I can explain it to the people who care to listen.

I also knew that working at an EA organization could hurt my career at some point. It actually did: it was part of the reason I didn’t get hired once. But I don’t think that should motivate me to be less transparent about my beliefs.


Thanks to Guillaume Vorreux, James Herbert and many others for feedback and discussions on the topic, and to many more for the conversations that led to the writing of this post.

  1. ^

     One person told me they would be happy to participate if it was branded differently, i.e. non-EA.


Comments (41)

These are the risks—actual or perceived—that I mostly hear about when people choose not to publicly own their EA identity:

  • People don’t want to talk to you / take you seriously because you are affiliated with EA
  • You won’t get some career opportunities because you are affiliated with EA

I think you left out another key reason: you do not agree with lots of EAs about lots of things and think that telling people you are an EA will give them false impressions about your beliefs.

I am a development economist, and I often tell other development economists that I am an EA. That tells them "this is someone who cares a lot about cost-effectiveness and finding impactful interventions, not just answering questions for the sake of answering questions", which is true. But if I said I was an EA to random people in the Bay Area, they would infer "this is someone who thinks AI risk is a big deal", which is not true of me, so I don't want to convey that. This example could apply to lots of people who work on global development or animal welfare and don't feel compelled by AI risk. (ETA: one solution would be to signal the flavor of EA you're most involved in, e.g. "bed nets not light cone" but it sounds like that would not be owning your EA identity publicly according to the OP.)

I also don't follow much of the media around EA. I don't have any special insight into the OpenAI board drama, or FTX, or anything. I don't see myself as a resource people should come to when they want to make sense of EA being in the news for whatever reason. I think this behavior is specific to me, so maybe it's not a great reason for others.

one solution would be to signal the flavor of EA you're most involved in, e.g. "bed nets not light cone" but it sounds like that would not be owning your EA identity publicly according to the OP

I'd also add that I took great care in not using "identity" but "affiliation" instead; I think it's important not to make it about identity.

Not sure I correctly understand your situation (I have not lived in the Bay Area for more than a few weeks), but I think it can be worth doing the following:

  1. State your affinity for EA, maybe even with some explanation
  2. Let people get the wrong impression of what it means to you anyway
  3. [Highly contextual] correct this impression, either through immediate explanation or letting your actions speak

-> over time, this can help everyone see the core value of what you cherish and reduce the all too common understanding of EA as an identity (within and outside of EA). We all need to work on not identifying with our convictions to avoid being soldiers.

Most interactions are iterative, not one offs. You could help people understand that EA is not == AI xrisk.

If you think EA is about a general approach to doing good, explaining this more often would help you and the xrisk people. Identities are often pushed onto us and distort discourse. I see it as part of my responsibility to counteract this wherever I can. Otherwise, my affiliation would mostly be a way to identify myself as "in-group" - which reinforces the psycho-social dynamics that build the "out-group" and thus identity politics.

Your example seems to be an opportunity to help people better understand EA or even to improve EA with the feedback you get. You don't necessarily have to stay on top of the news -- on the contrary, it helps if you show that everyone can make it their own thing as long as the basic tenets are preserved.

I understand this might be effortful for many. I don't want to pressure anyone into doing this because it can also look defensive and reinforce identity politics. I figured it might be worth laying out this model to make it easier for people to sit through the discomfort and counteract an oversimplified understanding of what they cherish - whether that's EA or anything else.

I think this is a good approach for iterative interactions.

I also want to flag that some interactions may be one-off precisely because people evaluate you based on their (false) impression of you. Job interviews, cover letters, and first dates are obvious examples. But even casually meeting a new person for the first time outside a structure of assessment/evaluation (imagine a friend's birthday party, a conference, or a Meetup-type social event) involves that person deciding whether they would want to spend time with you. So if I tell a sibling that I am interested in EA ideas, we will almost certainly have the space/time to discuss the details of what that means. But new and unformed social relationships often won't be given that space/time to understand nuance.

This is interesting because I can tell you that, being a retired military servicemember, I've encountered some of the same discrimination and labeling within the EA community that EAs claim to experience from non-EAs. To use Alix Pham's verbiage from an earlier comment in a separate context: "'people don't talk to you' (because they project some beliefs onto you that you actually don't have)". Thankfully, those in the EA community with whom I've engaged over the last several years have changed their minds (e.g. that I don't come here to infiltrate the EA community or to arm it, etc.). In my Defense community, it is generally inadvisable to claim the EA moniker, while ironically it seemed inadvisable to claim being a former servicemember in my EA community. (It was a challenge being PNG'ed* in both communities that one wants to represent and bridge together, but I digress.)

 

Additionally, I believe that the term "soldier bias" creates, within the EA community, the very confirmation bias that EAs generally try to avoid, by implying that all soldiers have this particularly zealous bias. See the irony there? I know that there are several former servicemembers within the EA community who are proud and outstanding EAs (though many of them have told me they are hesitant to openly divulge their previous profession). I think the "soldier bias" term would be unacceptable as a professional and formal naming convention if you replaced "soldier" with any other profession's name used in a negative context.

 

*Persona non grata

Thank you John for sharing! This is super interesting.

Particularly, the "PNG" part makes me reflect on community belonging and inclusivity, I think it's an important part.

That's interesting, I'll reflect on that. I would be curious to explore how the reason you mention can be a risk for you, and to what extent you take actions to make sure people don't learn of your EA affiliation for that reason.

one solution would be to signal the flavor of EA you're most involved in, e.g. "bed nets not light cone" but it sounds like that would not be owning your EA identity publicly according to the OP

No, I do think it's owning to some extent already.

I think the reason you're mentioning is also partly included in "people don't talk to you" (because they project some beliefs onto you that you actually don't have). But my certainty on that line of thought is lower, as it's something I've thought less about.

It's not a risk to say I'm EA, it's just not informative in many contexts. It conveys a specific and false impression of my priorities and what I work on - so as a matter of cooperative communication, I don't do it. I don't take any actions to hide it.

I don't really think the risk is that people don't talk to me because they project false beliefs onto me; I'm not worried about negative consequences for myself if people think I work on AI. It's just not a helpful thing to say in most contexts, because I don't actually work on AI.

I think it's different for professional community builders. In your job, EA is a community to be represented. In my job, EA is a signifier for a cluster of beliefs. Sometimes that cluster is relevant and sometimes it isn't.

Thanks for clarifying, that makes a lot of sense. I'm not sure yet, but I think those considerations are not in the scope of my post, then? Let me know what you think.

Maybe this part 

I’m also not necessarily saying that one needs to shout it everywhere, but simply be transparent about it.

conveys it?

I very much appreciate you writing this, and writing it so well. I really resonate with a lot of this, though of course maybe my role as a professional EA community person makes it harder for me and others to trust my take.

Thank you for writing this, it means a lot to me!

I have mixed feelings on this one. The first two relate to distribution of responsibility and power:

  • I have some concerns that this kind of logic often calls, as a practical matter, for rank-and-file EAs to sacrifice their own personal career and other interests due to the consequences of actions taken by higher-status EAs. FTX is the obvious example here. That does not sit well with me.
  • Somewhat relatedly, problems of this sort often have two preconditions: "Jane would not want to publicly affiliate with EA to the extent it has become associated with X," and "EA became associated with X because of the actions of certain EAs." Cancelling either precondition is sufficient to prevent the state of the world in which Jane doesn't publicly affiliate with EA. Although there are some circumstances in which placing the burden on Jane may be appropriate or unavoidable, I don't think we should view Jane's desire as the problematic precondition by default. 

Two others relate to any potential implication that those who choose not to publicly affiliate are acting in an unfair/non-cooperative manner:

  • Although I appreciate that free-rider problem is a term of art, I would want to be very careful in emphasizing that there appear to be few free riders in 2024 EA. It's very hard to view someone who donates 10% of their income, or who is working for far less than their private-sector salary, as a free rider.
  • In terms of "passively benefit[ting] from EA without openly supporting it," it's unclear what should count as passively benefitting. In particular, there's at least some meaningful distance between OP and EA, and it doesn't seem to view itself as responsible for or accountable to the community (cf. point 5 of this comment). Absent evidence that OP cares whether people self-identify as EA, I don't think their funding would count. People were emphasizing that Manifest wasn't an EA event, SFF "explicitly does not identify as an EA Funder," etc. I don't think EA gets to take credit for actors who explicitly disclaim being EA entities, or who EAs will cast as non-EA actors when they do something controversial.

As far as a path forward, many other social movements (like religions and political parties) have managed to construct sub-brands or sub-identities that allow supporters to limit identification with elements of the larger group that they find problematic. For instance, one can identify with terminology that identifies you with Major Party X but makes it clear that you think the progressive (or conservative) wing is way off the mark. I don't sense that EA has managed to pull this off yet.

Thank you for your comment.

On your first points, I think they are totally fair. I feel those are the preconditions of the prisoner's dilemma.

Then, I see your point on free-riders and I will reflect on it. I'd add that people mentioning how EA might have influenced them, or how an organization might make decisions influenced by EA principles, seems completely different (to me) from "being an EA" ("identifying") and orgs "being EA orgs". I tend to think those latter framings are not necessarily useful, especially in the context of what I'm advocating for.

I deeply appreciate you writing this and much agree.

I sometimes worry that EAs may optimize for consequences rather than for integrity and this may be the reason people distance themselves from EA.

[In my view it then creates a dangerous world of "solitude, filth and ugliness". :) ]

Thank you for writing this!

Thanks for writing this. I just wanted to quickly respond with some thoughts from my phone.

I currently like the norm of not publicly affiliating with EA, but it's something I've changed my mind about a few times.

Some reasons

I think EA succeeds when it is no longer a movement and simply a generally held value system (i.e., that it's good to do good, to try to be effective, and to have a wide moral circle). I think that many of our insights and practices are valuable and underused, and that we disseminate them most effectively when they are unbranded.

This is why: Any group identity creates a brand and opportunities for attacking the brand of that group and doing damage by association.

Additionally, if you present as being in a group then you are classed as either ingroup or outgroup. Which group you're in overwhelmingly affects people's receptiveness to you and your goals. For many people, EA presents as a threatening outgroup, which makes them feel judgement and pressure.

Many people might be receptive to counterfactual or cost-effectiveness related reasoning if it comes from a neutral source, but unreceptive to it if they believe it comes from effective altruism.

I think most people care about progress on salient issues, not effective altruism in the abstract. People are much more likely to be interested in a philosophy which helps them achieve a goal of reducing animal suffering, mental health issues, or AI risks than in figuring out how to be more effective at doing good.

What I think we should do

I think we should say things like "I have been influenced by effective altruism" or "reading Doing Good Better really changed my mind about X", but I think we should avoid presenting as EAs.

This seems consistent with my other behaviour. I don't call myself a feminist, consequentialist, animal welfare activist or longtermist etc., but these are all ideas/values which have influenced me a lot.

I previously discussed the idea of fractional movement building, and I think it is still probably the best approach for EA to have more impact via influence on others. Basically, you work on a thing that you think is important (e.g., a cause area) and allocate some fraction of your time to helping other people work on that thing or on other things which are EA-aligned.

So rather than being an EA-affiliated movement builder, you might be a researcher trying to understand how to have some type of positive impact (e.g. reducing risks from AI) and navigating the world with that as your superficial identity. You can still organise events and provide support with that identity and personal brand, and there's no brand risk or association to worry about. You can mention your EA interest where relevant and useful.

And to be clear, I think that people who are clearly EA-affiliated are doing good work and mostly aren't at any risk of major career damage etc. I just think that we might be able to achieve more if we worked in ways that pattern-matched to more widely accepted and unbranded routes to impact (e.g., researcher or entrepreneur) rather than to activist/movement-building type groups, which are more associated with bad actors (e.g. cults and scams).

Of course these are not strong beliefs and I could easily change them as I have before.

Thank you very much for taking the time to write this.

I generally don't feel disagreement with what you say; I don't think it's completely opposed to what I'm advocating for. I feel that there's a huge deal of interpretation around the words used, here "affiliation" (and, as mentioned at the beginning of the post, not "identity").

I do think more people "affiliating" will make EA less of an ingroup/outgroup matter, and more of a philosophy (a "generally held value system", as you say in the beginning) people adhere to to some extent, especially if framed as "this is a community that inspires me; those are ideas that are influencing my actions".

I have no issue being transparent about the various ways I am involved with EA organizations, institutions, and ideas, but I find a lot of people want to identify me as "an EA", literally in introductory emails and the like. I am very put off by this for two reasons:

One, the label applied to me as a person strikes me as incredibly arrogant; it seems to imply I consider myself more effective and more altruistic than people outside this community, which I absolutely do not. I'm doing my best just like everyone else.

Two, after many years of engagement including being a cause-area organizer for my local chapter, going to EA Global, reading every major book about EA and associated ideas, and spending countless hours on the 80k website and podcast, I still don't think I could fairly summarize some of the key principles of EA. I don't think it is particularly well-defined, and the sources that aim to define it are often quite vague and inconsistent with one another. When I object that I do not commit to being fully non-particularist, or to accepting all the tenets of utilitarianism, or some similar gripe, I hear from one corner that I misunderstand EA and that EA doesn't demand that, and from another corner that objecting to those things does in fact mean I am not aligned with EA. Thus, I don't feel comfortable identifying with the movement.

On the other hand, I am happy to acknowledge the quite positive influence various EA orgs have had on my work, my thinking, and even my values. I have a lot of appreciation for the EA movement (which I will readily tell my network), but I am not "an effective altruist."

Thank you for sharing!

I would like to emphasize my choice of words ("affiliation" and not "identity"), as I do understand the off-putting implications of "being an EA" (as opposed to "being part of the EA community" or another less identity-defining formulation).

I also want to add that I don't think anyone can claim to endorse everything about a movement they consider themselves a part of (e.g. feminism or civil rights or...); full alignment can't be expected of anyone in any movement. I think it's important people know that about EA. I think there are as many definitions of EA as there are "members". But I also think that not defining it for yourself leaves more space for others to define it for you (and, most likely, wrongly). (That is to say, I probably wouldn't support anything along the lines of "you misunderstand EA" or "you are not aligned with EA", but I can't say anything with certainty as I don't have enough context on you.)

I hope that makes sense.

I think your first point is totally fair and agree that those are separate things.

As to your second point, I feel similar hesitation strongly associating with most movements for the same reason. In fact the example you give of feminism stands out to me as perfect because I probably agree with most "feminists" about most issues around gender and gender-associated rights but the term feels loaded and opaque. I don't know what people mean when they use it, and there are people who use it to mean pretty uncontroversial things as well as people who mean something quite distinct. With respect to feminism, I generally don't use the word to describe myself but am happy to have a longer conversation about my beliefs on the topic, which is pretty similar to how I approach EA in conversation.

To the second point, yes, I probably agree, and it's an approach I find useful.

But sometimes you don't get the chance to say so many words, and giving people the opportunity to connect the dots between "EA" and your values and actions might increase their understanding of EA (as it would for feminism), without your needing to engage in a lengthy conversation with every person who could otherwise connect those dots by observing your actions from a bit further away. I hope that makes sense.

One point of view I haven't seen represented so much here is that it can simultaneously be true that publicly allying yourself with the community is good for the community, and yet overall not the right call. I'd like e.g. politicians to be able to take our good ideas and implement them without giving anything back, if that's the best way to get those ideas done. (That said, I'd like them to read this post and weigh up the costs and benefits, and particularly consider the points about transparency and ensure they aren't being outright deceitful about their links to the movement.)

Thank you! That's a useful framing.

Perhaps a bit of a ramble after reading your post and others, but I opted to click "comment" rather than delete it.

I pretty reflexively identify as an EA and would not hesitate to associate myself with the EA community. Part of this is that I view my identity as an EA as relating to its core principles: a commitment to doing good and to using reason to determine how best to do so. Although I believe that much of the cause prioritization and downstream beliefs common in the EA community are wise and well-taken, I don't consider these to be essential to being an EA. Were it the case that I thought that the community was mistaken on any or many of its determinations, I would still consider myself to be EA.

Of course, we cannot choose the inferences that others make when we say that we are EA, and thus one might risk people making assumptions or just getting unwelcome vibes. 

I wonder what could be done to spread the message that EA is about the fundamental commitment to doing good the best one can, rather than commitment to specific cause areas or downstream beliefs. One issue is likely the relative unipolarity of EA funding. Even if members of the EA community disagree with downstream beliefs held by funders and with cause area prioritizations, that disagreement will likely be underrepresented within EA, because such work is unlikely to get funded, and EA is likely to be viewed as "what EA does" rather than "what people who identify/affiliate as EA think". Furthermore, people who identify as EAs will likely be tempted to adopt the beliefs and priorities that EA funds.

I think there is just going to be a tension between the core principles of EA and the demonstrated actions of the EA community, insofar as no person would be perfectly represented by its collective actions. And action is pretty heavily tethered to funding, which the community has little ability to influence, so others may see an EA that reflects priorities and beliefs that an individual EA would not endorse. So there is likely a gap between what you might mean when you say "I am EA" and what someone else might hear, and I understand why it might make more sense to be more concerned about the latter.

One of the things that comes to mind is the variety of beliefs that might fall under religious identities. People might have significant political disagreements, for instance, but come together as Christians under the belief that Jesus is the son of God who died for our sins. Getting EA to be more associated with its essential principles than with downstream beliefs and conclusions might be critical to its expansion and to making people more comfortable identifying/affiliating with it, but this would probably be a difficult project.

Thank you for sharing this!

Were it the case that I thought that the community was mistaken on any or many of its determinations, I would still consider myself to be EA.

I think that's nice.

One issue is likely the relative unipolarity of EA funding.

I agree with this. I think this is one of the major issues, and I've mentioned it in the past.

no person would be perfectly represented by its collective actions.

Yes, I'd guess one could say it's the other side of the token problem, and why we might need to show a greater diversity of people "affiliating".

I am also quite sad that EA has a less positive reputation than it did/could/'should'. I think EA ideas and communities being high-status and widely admired would be good for the world, and I think that to make that happen it is useful for the Rutger Bregmans and Yuval Hararis of the world to be public about EA-ish things. For 'normal' people like us, without a lot of name recognition or a large online following, my guess is that EA community building is best done by talking with friends and acquaintances rather than by having a conspicuously EA online presence.

So where I have come down so far is to be quite open and forthcoming about my EA-ness in person, and to have some EA markings online (GWWC diamond on LinkedIn etc.), but I semi-recently removed my surname from my EAF account. I think this is mainly a tail-risk insurance policy and will probably not turn out to matter, but even a ~5% chance of the EA community's reputation deteriorating further and of my wanting a non-EA job later makes this very weak partial anonymisation worth it. But I'm not at all sure about that; maybe there are important benefits to having a bio on the EAF about who I am, linking to my LinkedIn profile and so forth (though my guess is most people reading something I write on the Forum could fairly quickly find/infer my surname if they wanted to).

I was surprised to see you mention at the end that you didn't get a job offer partially because of being publicly EA, as this seems to cut against your thesis. I think you are saying that the benefits are just large enough that that cost is bearable, which makes sense, but not getting a job offer seems like quite a big cost to me, or at least something I care a lot about.

Thanks Oscar for sharing this! A few points:

  • Bregman and Harari are exactly examples of tokens, and being open about your EA-ness to friends is, I think, detokenizing them, so I do agree with you on this being part of the solution.
  • I don't think me not having a job offer cuts against my thesis:
    • It's not that simple and there's more to the story
    • My point is: I was frustrated, but being honest about my EA affiliation is not something I would trade for a job; if someone doesn't want me because of that, I'd probably be better off somewhere else. There will be other impactful opportunities, some of them with better value alignment.
      • I can say that also because I was not in a precarious financial situation.
    • It's also a point supporting something like "I have experienced this myself and don't regret it" (it happened 9 months ago, and I'm perfectly satisfied with where I am today, despite this happening).
  • I think many others will make different tradeoffs, and that's okay. I am merely advocating for shifting the norm and for questioning one's motivations more explicitly, instead of defaulting to this behavior as if it were harmless.

It is not clear to me whether EA branding is net positive for the movement overall, or whether it's been tarnished beyond repair by various scandals. It might be that people should make a small personal sacrifice to be publicly EA, but it might also be that the pragmatic collective action is to completely rebrand and/or hope that EA provides a positive radical flank effect.

The reputation of EA, at least in the news and on Twitter, is pretty bad; something like 90% of the news articles mentioning EA are negative. I do not think it inherently compromises integrity to not publicly associate with EA even if you agree with most EA beliefs, because people who read opinion pieces will assume you agree with everything FTX did, or are a Luddite, or hold some other strawman beliefs. I don't know whether EAF readers calling themselves EAs would make others' beliefs about their moral stances more or less accurate.

I don't think this is currently true, but if the rate of scandals continues, anyone holding on to the EA label will be suffering from the toxoplasma of rage, where the EA meme survives by sounding slightly good to the ingroup but extremely negative to everyone else. Therefore, as someone who is disillusioned with the EA community but not with various EA principles, I need to see some data before owning any sort of EA affiliation, to know I'm not making an anti-useful sacrifice.

Thanks for sharing! I'm honestly not sure how to answer this; I feel some of your doubts/points are already addressed in the post. I guess that's where the crux is: whether you believe increasing the diversity of representation would be positive for the movement, as a way to show others that EA is not a sticker that absolutely defines a person's whole set of beliefs/values. Maybe I'll change my mind about this in the future. But I probably still want to advocate for making the decision to "not affiliate" intentional, when it could just be a non-decision, a default.

I often think about transparency in relation to allies, competitors and others:

  • Allies are on my team. We have shared norms and can make deals
  • Competitors are not on my team and are either working against my interests or too chaotic to work with
  • Others are randoms I don't know enough about.

Generally I feel obliged to be truthful to allies and others and transparent to allies. So I'd say I like EA to anyone, but I wouldn't feel the need to reveal it to any random person and particularly not someone who is against me.

What I find more interesting in some of this discourse is that people see e.g. their government employers as competitors here. I think that would change how I approached my work, if I didn't think I was in a collaborative relationship with my boss.

That's interesting, thanks for sharing this. The framework of thought is useful.

I think there's a key component that I still see: on top of being the default, not being outspoken about one's EA affiliation sometimes requires some effort. It's not about "not feeling the need to reveal" but about "feeling the need to conceal" (e.g. not sharing events with networks and friends, not appearing in pictures of EA events).

But again, I do think some cases are exceptional.

Notably, we asked about whether EAs had "experienced any reputational consequences (i.e., changes in others’ evaluation about you) from people not involved in the effective altruism community as a result of believing or acting on the ideas of effective altruism?" in EAS 2020.

A plurality reported no reputational effects at all, and substantially more people reported mostly positive effects than reported mostly negative effects, though a larger number reported a mix.

Response | n | % | 95% CI lower | 95% CI upper
No, no reputational consequences at all | 441 | 44.37% | 41.05% | 47.75%
Yes, a mix of reputational benefits and negative reputational costs | 307 | 30.89% | 27.57% | 34.27%
Yes, mostly positive reputational benefits | 200 | 20.12% | 16.80% | 23.50%
Yes, mostly negative reputational costs | 46 | 4.63% | 1.31% | 8.01%

Of course, this was pre-FTX. It would be interesting to see whether results have changed in the next survey. And, of course, whatever the most common effect, results may differ in particular areas (e.g. people working in certain areas of policy).

Thank you for sharing that, I think those numbers are a great addition to this post. It would be amazing to have an update of this post-FTX and post-OpenAI.

It’s good that few perceive reputational effects to be negative, but this strikes me as a question that is in many or most cases impossible to self-evaluate. One can never know about an opportunity not presented due to prejudice or reputation, and even when an opportunity does present itself, it can be hard to know which factors were decisive.

I agree that sometimes you won't know whether people think positively or negatively of something (particularly if we're thinking about individual interactions). But I think very often people will have a good sense of this (particularly if we're thinking about the aggregate effect), and often people will be quite explicit about this.

I broadly wish we could get to a point where this is applicable, but I'm unsure whether the strategy outlined by the OP is the best one. I'm by no means saying the situations are exactly comparable, but having experienced it myself: an oppressed minority has no solution but to be transparent about it. If you feel that you belong to a group with a bad reputation, you might experience fear and anxiety when opening up about it.

A small detail I'd add is that, as far as I perceive it, no one actually tries to debunk false ideas and over-generalizations related to EA. When I suggested doing so in the past, a few people actively discouraged me from doing it. Some of those false ideas are emotionally hard to bear; others are completely outlandish. What made my coming-out possible, in contrast, was the wealth of resources and arguments I could throw at people, or just knowing they'll run into them at some point.

This might be caused by us not owning our affiliation.

Yet EA is starting to become a well-identified group, something that people can have clichés about, and one that can suffer unfair ostracisation. It's very hard for me to publicize my affiliation if I don't feel defended from such downsides. So the circle continues.

Thank you Camille for sharing this.

I've thought a bit about that angle too, and I tend to think it is also a strategy (though obviously a costly one) for oppressed minorities to have more people be outspoken about their belonging to a certain group. I think the quote from Change touches a bit on that, though being a woman and being LGBTQ+ are very different flavours of 'minority belonging' (for several reasons that I won't expand on here). For people to debunk stereotypes about certain populations, you need to show their diversity, or at least that seems to me like one of the viable strategies.

I am, by no means, saying it's not costly. I think I would want more people to consider sharing the cost.

 

About this point

the wealth of resources and arguments I could throw at people, or just knowing they'll run into them at some point

I'd like to point you to this resource, which does provide guidance on responding to criticisms of EA, if you don't know about it. And I do think individuals are working to "debunk false ideas and over-generalizations related to EA" at their scale. I'm sad to read that you've been discouraged from doing so. I do agree that the EA community as a whole has not been communicative enough at the crucial moments when it made the news, and I have been (with other community builders) quite frank about it to CEA's new CEO Zach Robinson and CEA's Communications Team. Hopefully, they are currently taking steps in this direction (as Zach's talk at EAG London suggests). I have also suggested that community builders could do a bigger part there, but it takes time.

 

I hope the EA community can step up to the task and better support each other in that endeavor.

I like your analysis of the situation as a prisoner's dilemma! I think this is basically right. At least, there generally seems to be some community cost (or more generally: negative externality) to not being transparent about one's affiliation with EA. And, as per usual with externalities, I expect these to be underappreciated by individuals when making decisions. So even if this externality is not always decisive since the cost of disclosing one's EA affiliation might be larger in some cases, it is important to be reminded of this externality – and the reminder might be especially valuable since EAs tend to be altruistically motivated!

I wonder if you have any further thoughts on what the positive effects of transparency are in this case? Are there important effects beyond indicating diversity and avoiding tokenization? Perhaps there are also more 'inside-directed' effects that directly affect the community, and not only via how it seems to outsiders?

Thank you!

I think people tend to trust you more if they notice your transparency and sincerity, especially over time; transparency has high long-term rewards. There is also a great deal of better sense-making: people can connect the dots between a bad thing that happened and someone they know who was outspoken about an affiliated thing. How do those two things make sense together? Is there some misunderstanding one can gain clarity on?

A bit is captured here:

I’d guess their contacts’ trust in the org is high precisely because the org has always been transparent about something that is “courageous” to own, and because those contacts wanted to understand why people they trust still stand by something that seems so controversial.

Does that help?

Executive summary: The author argues that effective altruists should be more open about their affiliation with the EA movement, as it would benefit the community and help avoid negative stereotyping despite potential career risks.

Key points:

  1. Being openly affiliated with EA is net positive for the world and individual careers, even if it comes with some risks.
  2. If too many EAs hide their affiliation, it creates a "free-rider problem" where EA mainly gets publicity for negative events.
  3. More open affiliation would show the true diversity of the EA community and prevent "tokenization" of the few vocal members.
  4. Transparency about EA affiliation is rewarded with trust in the long run, as shown by an example AI governance organization.
  5. The author has faced some career setbacks due to EA affiliation but believes it is still worth being open about one's beliefs and association with EA.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

3. More open affiliation would show the true diversity of the EA community and prevent "tokenization" of the few vocal members.


I would replace "vocal" with "visible" (e.g. I don't think the members of the OpenAI board were especially vocal about their EA affiliation; people simply singled them out).
