I am a bit nervous about actually posting something on this forum (although it is just a simple question, not really an opinion or an analysis).
Context
I have been engaging with EA content for a while now: I have read most of the foundational posts and the handbook, attended several in-person events, started donating, and begun taking EA considerations into account in my future career choices. I was completely and utterly convinced by the principles of EA very early on. However, I also happen to almost perfectly fit the stereotype of people who join EA the way I did: white male, medium-length hair, groomed beard, academic background with a side of tech skills, ambition... (At least that's what many EA people look like in France. To quote my girlfriend glancing over my shoulder as I started a Zoom meeting: "Oh, five copies of you.") I could not help but wonder why so many people who shared my initial motivations and ideas failed to stick around. I asked around and tried to reach out both to highly invested people and to people who left or kept some distance from the community. One of the big reasons was disagreement with the "conclusions" offered by the community (the choice of causes, as well as the dismissal of some topics that were important to the newcomers).
Issue
Here is the crux of my question: people who agree with EA's principles genuinely think it is important to carefully lay out what is important and relevant, to evaluate what we think should be prioritized, and then to act on it. However, people who actually take the time to go through this process are very rare... Most of the people I know discovered the principles, started the reasoning process, and ended up convinced that they would reach the same conclusions as the community, in a kind of "yeah, that seems right" cognitive-load-saving shortcut.
Question
It seems to me that people who do not think EA principles "seem right" from the beginning will have a much harder time being included in the community. I do think that individual people respect the time it takes to integrate new knowledge and shift one's beliefs. However, some communication does not happen in 1-1 or informal chats: it happens through the choice of curated content on this forum; through responses that are sometimes tougher than they ought to be, especially on questions that approach the thin line between people who genuinely want to understand and people criticizing EA blindly; and through implicit signals such as the relative uniformity of the backgrounds people come from. As a result, I wonder whether EA as a group appears far more object-level focused than we may want it to, to people who are not yet convinced that the principles would lead them to the same object-level conclusions. If I had to sum it up in one question: Do we, as a community, sometimes lean more toward unconsciously advocating specific outcomes rather than encouraging people to discover their own conclusions through the EA framework?
(Feel free to tackle and challenge every aspect of this post: the question, the context, my views, both form and content! Please be gentler if you want to criticize the motivation, or the person who posted :) As stated above, I hesitated for a long time before gathering enough courage to post for the first time.)
Kudos for bringing this up; I think it's an important area!
There's a lot to this question.
I think that many prestigious/important EAs have come to similar conclusions. If you've come to think that X is important, it can seem very reasonable to focus on promoting X and on working with people to improve X.
You'll see some discussions of "growing the tent" - this can often mean "partnering with groups that agree with the conclusions, not necessarily with the principles".
One question here is something like: "How effective is it to spend dedicated effort on explorations that follow the EA principles, instead of just optimizing for the best-considered conclusions?" Arguably, it would take more dedicated effort just to highlight that question properly. I think we don't have all that much work in this area right now, compared to more object-level work.
Another factor seems to have been that FTX stained the reputation of EA and hurt CEA, after which there was a period with less attention on EA itself and more on specific causes like AI safety.
In terms of "What should the EA community do?", I'd flag that a lot of the decisions are really made by funders and high-level leaders. It's not super clear to me how much agency the "EA community" has to act in ways that aren't aligned with these groups.
All that said, I think it's easy for us to generally be positive towards people who take the principles in ways that don't match the specific current conclusions.
I personally am on the side that thinks the current conclusions are probably overconfident and missing some very important considerations.
Can you give specifics? Are there any crucial considerations that EA is overlooking or under-weighting?