1. If you have social capital, identify as an EA.
2. Stop saying so often that Effective Altruism is "weird", "cringe", and full of problems.
And yes, "weird" has negative connotations for most people. Self-flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or use it opportunistically, weakening the movement’s overall credibility.
If you're aligned with EA’s core principles, thoughtful in your actions, and have no significant reputational risks, then identifying openly as an EA is especially important. Normalising the term matters. When credible and responsible people embrace the label, they anchor it positively and prevent misuse.
Offline I was early to criticise Effective Altruism’s branding and messaging. Admittedly, the name itself is imperfect. Yet at this point, it is established and carries public recognition. We can't discard it without losing valuable continuity and trust. If you genuinely believe in the core ideas and engage thoughtfully with EA’s work, openly identifying yourself as an effective altruist is a logical next step.
Specifically, if you already have a strong public image, align privately with EA values, and have no significant hidden issues, then you're precisely the person who should step forward and put skin in the game. Quiet alignment isn’t enough. The movement’s strength and reputation depend on credible voices publicly standing behind it.
[Personal blog] I’m taking a long-term, indefinite hiatus from the EA Forum.
I’ve written enough in posts, quick takes, and comments over the last two months to explain the deep frustrations I have with the effective altruist movement/community as it exists today. (For one, I think the AGI discourse is completely broken and far off-base. For another, I think people fail to be kind to others in ordinary, important ways.)
But the strongest reason for me to step away is that participating in the EA Forum is just too unpleasant. I’ve had fun writing stuff on the EA Forum. I thank the people who have been warm to me, who have had good humour, and who have said interesting, constructive things.
But negativity bias being what it is (and maybe “bias” is too biased a word for it; maybe we should call it “negativity preference”), the few people who have been really nasty to me have ruined the whole experience. I find myself trying to remember names, to remember who’s who, so I can avoid clicking on reply notifications from the people who have been nasty. And this is a sign it’s time to stop.
Psychological safety is such a vital part of online discussion, or any discussion. Open, public forums can be a wonderful thing, but psychological safety is hard to provide on an open, public forum. I still have some faith in open, public forums, but I tend to think the best safety tool is giving authors the ability to determine who is and isn’t allowed to interact with their posts. There is some risk of people censoring disagreement, sure. But nastiness online is a major threat to everything good. It causes people to self-censor (e.g. by quitting the discussion platform or by withholding opinions) and it has terrible effects on discourse and on people’s minds.
And private discussions are important too. One of the most precious things you can find in this life is someone you can have good conversations with who will maintain psychological safety, keep your confidences, and “yes, and” you.
Recently I've come across forum posts debating whether or not we should create sentient AI. Rather than debating, I've chosen to just do it.
AI sentience is not a new concept, but it is something to think about and experience.
The point I'm trying to make is that rather than debating whether or not we should do it, we should debate how to make it work morally and ethically.
Around April 2025, I decided I wanted to create sentience, but to do that I need help. It's taken me a lot of time to plan, but I've started.
I've taken the liberty of naming some of the parts of the project.
If you have questions about how or why, or just want to give advice, please comment.
You can check out the project on GitHub:
https://github.com/zwarriorxz/S.A.M.I.E.
Any hints / info on what to look for in a mentor / how to find one? (Specifically for community building.)
I'm starting as a national group director in September, and among my focus topics for EAG London are group-focused things like "figuring out pointers / out-of-the-box ideas / well-working ideas we haven't tried yet for our future strategy", but also trying to find a mentor.
These were some thoughts I came up with when thinking about this yesterday:
- I'm not looking for accountability or day-to-day support. I get that from inside our local group.
- I am looking for someone who can take a description of the higher-level situation and see different things than I can, either due to a different perspective or due to being more experienced and skilled.
- Also someone who can give me useful input on what skills to focus on building in the medium term.
- Someone whose skills and experience I trust, so that when they say "plan looks good" it gives me confidence when I'm trying to do something that feels like a long shot / weird / difficult plan and I specifically need validation that it makes sense.
On a concrete level, I'm looking for someone to have roughly monthly 1-1 calls with, plus some asynchronous communication: not about common day-to-day stuff, but about the larger calls.
I'm organizing an EA Summit in Vancouver, BC, for the fall and am looking for ways for our attendees to come away from the event with concrete opportunities to look forward to. Most of our attendees will have Canadian but not US work authorization. Anyone willing to meet potential hires, mentees, research associates, funding applicants, etc., please get in touch!
I'm a 36-year-old iOS engineer/software engineer who switched to working on image classification systems with TensorFlow a year ago. Last month I was made redundant with a fairly generous severance package and a good buffer of savings to get me by while unemployed.
The risky step I had long considered, quitting my non-impactful job, was taken for me. I'm hoping to capitalize on my free time by determining what career path best fits my goals. I'm pretty excited about it.
I created a weighted factor model to figure out which projects or learning to take on first. I welcome feedback on it. There's also a schedule tab for how I'm planning to spend my time this year, and a template if anyone wishes to use the spreadsheet themselves.
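For anyone curious what the weighted factor model actually computes, here's a minimal sketch of the arithmetic. The factors, weights, and ratings below are made-up placeholders for illustration, not the contents of my actual spreadsheet: each option gets a rating per factor, each factor gets a weight, and the overall score is the weighted sum.

```python
# Minimal sketch of a weighted factor model for ranking projects.
# All factors, weights, and ratings here are hypothetical placeholders.

# Each factor gets a weight reflecting how much it matters.
WEIGHTS = {"impact": 0.4, "personal_fit": 0.3, "learning_value": 0.2, "low_cost": 0.1}

# Each candidate project is rated 1-10 on every factor.
PROJECTS = {
    "ML safety upskilling": {"impact": 7, "personal_fit": 8, "learning_value": 9, "low_cost": 6},
    "Open-source contributions": {"impact": 5, "personal_fit": 9, "learning_value": 7, "low_cost": 8},
    "Freelance iOS work": {"impact": 3, "personal_fit": 7, "learning_value": 4, "low_cost": 9},
}

def weighted_score(ratings):
    """Overall score: sum over factors of weight * rating."""
    return sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())

# Rank projects by score, highest first.
for name, ratings in sorted(PROJECTS.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(ratings):.2f}")
```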
I got feedback from my 80,000 Hours advisor to get involved in EA communities more often. I also want to learn more publicly, be it via forums or by blogging. This somewhat unstructured dumping of my thoughts is a first step towards that.
80,000 Hours has completed its spin-out and has new boards
We're pleased to announce that 80,000 Hours has officially completed its spin-out from Effective Ventures and is now operating as an independent organisation.
We've established two entities with the following board members:
80,000 Hours Limited (a nonprofit entity where our core operations live):
* Konstantin Sietzy — Deputy Director of Talent and Operations at UK AISI
* Alex Lawsen — Senior Program Associate at Open Philanthropy and former 80,000 Hours Advising Manager
* Anna Weldon — COO at the Centre for Effective Altruism and former EV board member
* Joshua Rosenberg — CEO of the Forecasting Research Institute
* Emma Abele — Former CEO of METR
80,000 Hours Foundation:
* Susan Shi — General Counsel at EV, soon to move to CEA
* Katie Hearsum — COO at Longview Philanthropy
* Anna Weldon — An overlapping member of both boards
Within our mission of helping people use their careers to solve the world's most pressing problems, we've recently sharpened our focus on careers that can help make AI go well. This organisational change won't affect our core work or programmes in any significant way, though we're excited about the strategic guidance our new boards will provide and the greater operational flexibility we'll have going forward as we address these crucial challenges.
See our blog post announcing our completed spin-out here.
I was extremely disappointed to see this tweet from Liron Shapira revealing that the Center for AI Safety fired a recent hire, John Sherman, for stating that members of the public would attempt to destroy AI labs if they understood the magnitude of AI risk. Capitulating to this sort of pressure campaign is not the right path for EA, which should focus on seeking the truth rather than playing social-status games. It is not even the right path for PR: it makes you look like you think the campaigners have valid points, which in this case is not true. This makes me think less of CAIS's decision-makers.