Community
Posts about the EA community and projects that focus on the EA community

Quick takes

123 karma · 6mo · 2 comments
In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA adjacent. I am extremely against embezzling billions of dollars from people, and FTX was a good reminder of the importance of "don't do evil things for galaxy-brained altruistic reasons". But this has nothing to do with whether I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good and then actually act on them". And there are many people in or influenced by the EA community whom I respect and who I think do good and important work.
16 karma · 17d
Running EA Oxford Socials: What Worked (and What Didn't)

After someone reached out to me about my experience running EA socials for the Oxford group, I shared my experience and was encouraged to share what I sent him more widely. So here's a brief summary of what I found from a few terms of hosting EA Oxford socials.

The Power of Consistency

Every week at the same time, we would host an event. I strongly recommend this, or at least having some kind of strong schedule, as it lets people form a routine around your events and can help create EA-aligned friend groups. Regardless of the event we were hosting, we had a solid core of about five people who were there basically every week, which was very helpful. We tended to have 15 to 20 people per event, with fewer at the end of term as people got busy finishing tutorials.

Board Game Socials

Board game socials worked the best of the types of socials I tried. No real structure was necessary: just have a few strong EAs to set the tone, so it really feels like "EA board games," and then let people play. The games act as a natural conversation starter. I especially recommend casual games; "Codenames" and "Coup" were particular favourites at my socials, but I can imagine many others working too. Deeper games have a place as well, but they generally weren't the main draw. In the first two terms, we would just hold one of these every week. They gave people a way to talk about EA stuff in a more casual environment than the discussion groups or fellowships.

"Lightning Talks"

We also ran "Lightning Talks" pretty effectively: basically EA PowerPoint nights. As this was in Oxford, we could typically get at least one EA-aligned researcher or worker there every week we ran them (which was every other week), and the rest of the time would be filled with community-member presentations (typically 5-10 minutes each). These seemed best at re-engaging people who had signed up once but had lost contact with...
53 karma · 3mo · 5 comments
I am sure someone has mentioned this before, but...

For the longest time, and to a certain extent still, I have felt deeply blocked from publicly sharing anything that wasn't significantly original. Whenever I found an idea already existing anywhere, even as a footnote on an underrated 5-karma post, I would be hesitant to write about it, since I thought I wouldn't add value to the "marketplace of ideas." In this abstract conception, the "idea is already out there", so the job is done and the impact is set in place. I have talked to several people who feel similarly: people with brilliant thoughts and ideas who proclaim to have "nothing original to write about" and therefore refrain from writing.

I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but a better-presented, better-connected, or even just better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete but messy contributor to impact, one that requires the right people to read the right content in the right way at the right time, often enough, and sometimes even from the right person on the right platform.

All of that is to say: the impact of your idea-sharing goes far beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it! Relatedly, there are many more reasons to write than sharing original ideas and saving the world :)
53 karma · 3mo
1. If you have social capital, identify as an EA.
2. Stop saying so often that Effective Altruism is "weird", "cringe", and full of problems. And yes, "weird" has negative connotations for most people.

Self-flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or to use it opportunistically, weakening the movement's overall credibility.

If you're aligned with EA's core principles, thoughtful in your actions, and carry no significant reputational risks, then identifying openly as an EA is especially important. Normalising the term matters. When credible and responsible people embrace the label, they anchor it positively and prevent misuse.

Offline, I was early to criticise Effective Altruism's branding and messaging. Admittedly, the name itself is imperfect. Yet at this point it is established and carries public recognition; we can't discard it without losing valuable continuity and trust. If you genuinely believe in the core ideas and engage thoughtfully with EA's work, openly identifying yourself as an effective altruist is the logical next step.

Specifically, if you already have a strong public image, align privately with EA values, and have no significant hidden issues, then you're precisely the person who should step forward and put skin in the game. Quiet alignment isn't enough. The movement's strength and reputation depend on credible voices publicly standing behind it.
1 karma · 1d
In talking with OWA groups in Africa and Asia, I'm learning about a culture of dictatorship at OWA.

1. OWA holds 15 to 20+ meetings annually with grantees, excluding campaign meetings, mentorship, and trainings, in addition to two narrative reports each year. That is unacceptable, even if it's branded as collaboration.
2. OWA grantees in these regions have recently been required to submit "regular written updates regarding engagement with the companies".
3. Over 30 groups from Asia and Africa are in the alliance, serving 78% of the world's population and over 60% of farmed chickens, yet OWA has only three staff to support groups in these regions, some with the job title "regional lead". That is insufficient if they're building a movement in these regions, but sufficient if they're merely passing along requests from the West.
4. OWA seeks to control which specific companies groups campaign against. In a recent webinar for OWA members, "Focus Local, Impact Global," they pitched groups on leaving Western companies operating in their countries alone and targeting local competitors instead.

I discovered these facts while researching OWA and attending their recent global summit. I haven't shared this feedback with the OWA team before this post, as they don't have a public anonymous feedback form.
198 karma · 2y · 6 comments
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer! The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas the research team have written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3

I've been at 80,000 Hours for the last three years. I'm very proud of the 800+ advising calls I did and feel very privileged that I got to talk to so many people and try to help them along in their careers! I've learned so much during my time at 80k, and the team there has been wonderful to work with: so thoughtful, committed to working out what is the right thing to do, kind, and fun. I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

1. New career challenge. I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big, complicated, and valuable that wouldn't exist without me, and I'd like to give it a try sooner rather than later.
2. Stepping away from EA community building a bit after the recent crises. Events over the last few months in EA made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs and have switched my work away from advising calls for the last few months while processing all this. I have been somewhat sad that there hasn't been more discussion and change by now, though I have been glad to see more EA leaders share things recently (e.g. this from Ben Todd). I do still believe there are some really important ideas that EA prioritises, but I'm more circumspect about some of the things I think we're not doing as well as we could (...
40 karma · 6mo · 6 comments
I used to feel so strongly about effective altruism. But my heart isn't in it anymore.

I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on. But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and to a limited extent in some online spaces, from about 2015 to 2017 or 2018.

These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:

- The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions, and a long comment here about how AGI's takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk has dominated the EA Forum for a while. It feels like AGI is what the movement is mostly about now, so now I just disagree with most of what effective altruism is about.

- The extent to which LessWrong culture has taken over or "colonized" effective altruism culture is such a bummer. I know there's been at least a bit of overlap for a long time, but ten years ago it felt like effective altruism had its own, unique culture, and nowadays it feels like LessWrong culture has almost completely taken over. I have never felt good about LessWrong or "rationalism", and the more knowledge and experience of it I've gained, the more I've accumulated a sense of repugnance, horror, and anger toward that culture and ideology. I hate to see that become what effective altruism is like.

- The stories...
54 karma · 10mo · 1 comment
I've been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since summer this year.

EA's reputation is at a bit of a low point; I've even heard EA described as the "boogeyman" in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move on to other endeavours? Some shower thoughts:

* I generally endorse aiming directly for the thing you actually care about. It seems higher integrity, and usually more efficient. I want to do the most good possible, and this goal already has a name and community attached to it: EA.
* I find the core, underlying principles very compelling. The Centre for Effective Altruism highlights scope sensitivity, impartiality, recognition of tradeoffs, and the Scout Mindset. I endorse all of these!
* It seems to me that EA has a good track record of important insights on otherwise neglected topics. Existential risk, risks of astronomical suffering, AI safety, wild animal suffering: I attribute a lot of the success in these nascent fields to the insights of people with a shared commitment to EA principles and goals.
* Of course, there's been a lot of progress on slightly less neglected cause areas too. The mind boggles at the sheer number of human lives saved and the vast amount of animal suffering reduced by organisations funded by Open Philanthropy, for example.
* I have personally benefited massively in achieving my own goals. Beyond some of the above insights, I attribute many improvements in my productivity and epistemics to discussions and recommendations that arose out of the pursuit of EA.
* In other roles or projects I'm considering, when I think of questions like "who will actually, realistically consider acting on this idea I think is great, giving up their time or money to make it happen?", the most obvious and easiest answer often looks like some subset of the EA community. Obviously there are some echo-chamber-y and bias-related reasons tha...

Posts in this space are about

Community · Effective altruism lifestyle