I am sure someone has mentioned this before, but…
For the longest time, and to some extent still, I have felt deeply blocked from publicly sharing anything that wasn't significantly original. Whenever I found an idea already existing anywhere, even as a footnote in an underrated 5-karma post, I was hesitant to write about it, since I thought I wouldn't add value to the "marketplace of ideas." In this abstraction, the idea is "already out there," so the job is done and the impact is locked in. I have talked to several people who feel similarly: people with brilliant thoughts and ideas who proclaim to have "nothing original to write about" and therefore refrain from writing.
I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but a better-presented, better-connected, or simply better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete but messy contributor to impact: one that requires the right people to read the right content in the right way at the right time, perhaps often enough, sometimes even from the right person on the right platform.
All of that is to say: the impact of your idea-sharing goes far beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it!
Relatedly, there are many more reasons to write other than sharing original ideas and saving the world :)
1. If you have social capital, identify as an EA.
2. Stop saying, so often, that Effective Altruism is "weird", "cringe", and full of problems
And yes, "weird" has negative connotations for most people. Self-flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or to use it opportunistically, weakening the movement's overall credibility.
If you're aligned with EA’s core principles, thoughtful in your actions, and have no significant reputational risks, then identifying openly as an EA is especially important. Normalising the term matters. When credible and responsible people embrace the label, they anchor it positively and prevent misuse.
Offline, I was early to criticise Effective Altruism's branding and messaging. Admittedly, the name itself is imperfect. Yet at this point it is established and carries public recognition; we can't discard it without losing valuable continuity and trust. If you genuinely believe in the core ideas and engage thoughtfully with EA's work, openly identifying yourself as an effective altruist is a logical next step.
Specifically, if you already have a strong public image, align privately with EA values, and have no significant hidden issues, then you're precisely the person who should step forward and put skin in the game. Quiet alignment isn’t enough. The movement’s strength and reputation depend on credible voices publicly standing behind it.
In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA adjacent.
I am extremely against embezzling people out of billions of dollars, and FTX was a good reminder of the importance of "don't do evil things for galaxy-brained altruistic reasons". But this has nothing to do with whether or not I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good and then actually act on them". And there are many people in or influenced by the EA community whom I respect and who I think do good and important work.
I used to feel so strongly about effective altruism. But my heart isn't in it anymore.
I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on.
But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.
These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:
-The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI's takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk has dominated the EA Forum for a while. It feels like AGI is what the movement is mostly about now, so now I just disagree with most of what effective altruism is about.
-The extent to which LessWrong culture has taken over or "colonized" effective altruism culture is such a bummer. I know there's been at least a bit of overlap for a long time, but ten years ago it felt like effective altruism had its own, unique culture and nowadays it feels like the LessWrong culture has almost completely taken over. I have never felt good about LessWrong or "rationalism" and the more knowledge and experience of it I've gained, the more I've accumulated a sense of repugnance, horror, and anger toward that culture and ideology. I hate to see that become what effective altruism is like.
-The stori
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer!
The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas the research team has written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3
I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did, and I feel very privileged that I got to talk to so many people and try to help them along in their careers!
I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with - so thoughtful, committed to working out what is the right thing to do, kind, and fun - I'll for sure be sad to leave them.
There are a few main reasons why I'm leaving now:
1. New career challenge - I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big and complicated and valuable that wouldn't exist without me - I'd like to give it a try sooner rather than later.
2. Post-EA crises stepping away from EA community building a bit - Events over the last few months in EA made me re-evaluate how valuable I think the EA community and EA community building are as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs and switched my work away from doing advising calls for the last few months, while processing all this. I have been somewhat sad that there hasn't been more discussion and changes by now though I have been glad to see more EA leaders share things more recently (e.g. this from Ben Todd). I do still believe there are some really important ideas that EA prioritises but I'm more circumspect about some of the things I think we're not doing as well as we could (
I'm a 36 year old iOS Engineer/Software Engineer who switched to working on Image classification systems via Tensorflow a year ago. Last month I was made redundant with a fairly generous severance package and good buffer of savings to get me by while unemployed.
The risky step I had long considered of quitting my non-impactful job was taken for me. I'm hoping to capitalize on my free time by determining what career path to take that best fits my goals. I'm pretty excited about it.
I created a weighted factor model to figure out which projects or learning to take on first. I welcome feedback on it. There's also a schedule tab for how I'm planning to spend my time this year, and a template if anyone wishes to use the spreadsheet themselves.
I got feedback from my 80,000 Hours advisor to get involved in EA communities more often. I also want to learn more publicly, be it via forums or by blogging. This somewhat unstructured dumping of my thoughts is a first step toward that.
I was extremely disappointed to see this tweet from Liron Shapira revealing that the Centre for AI Safety fired a recent hire, John Sherman, for stating that members of the public would attempt to destroy AI labs if they understood the magnitude of AI risk. Capitulating to this sort of pressure campaign is not the right path for EA, which should focus on seeking the truth rather than playing along with social-status games. It is not even the right path for PR: it makes you look like you think the campaigners have valid points, which in this case is not true. This makes me think less of CAIS's decision-makers.
I wanted to share some insights from my reflection on my mistakes around attraction/power dynamics — especially something about the shape of the blindspots I had. My hope is that this might help to avert cases of other people causing harm in similar ways.
I don’t know for sure how helpful this will be; and I’m not making a bid for people to read it (I understand if people prefer not to hear more from me on this); but for those who want to look, I’ve put a couple of pages of material here.