Community
Posts about the EA community and projects that focus on the EA community

Quick takes

Any hints / info on what to look for in a mentor, or how to find one? (Specifically for community building.) I'm starting as a national group director in September, and among my focus topics for EAG London are group-focused things like "figuring out pointers / out-of-the-box ideas / well-working ideas we haven't tried yet for our future strategy", but also trying to find a mentor. These were some thoughts I came up with when thinking about this yesterday:

- I'm not looking for accountability or day-to-day support. I get that from inside our local group.
- I am looking for someone who can take a description of the higher-level situation and see different things than I can, either due to differences in perspective or to being more experienced and skilled.
- Also someone who can give me useful input on which skills to focus on building in the medium term.
- Someone whose skills and experience I trust, so that when they say "the plan looks good" it gives me confidence, particularly when I'm attempting something that feels like a long shot / weird / difficult plan and I specifically need validation that it makes sense.

On a concrete level, I'm looking for someone to have roughly monthly 1-1 calls with, plus some asynchronous communication: not about common day-to-day stuff, but about larger calls.
I'm a 36-year-old iOS Engineer/Software Engineer who switched to working on image classification systems via TensorFlow a year ago. Last month I was made redundant with a fairly generous severance package and a good buffer of savings to get me by while unemployed. The risky step I had long considered, quitting my non-impactful job, was taken for me. I'm hoping to capitalize on my free time by determining which career path best fits my goals, and I'm pretty excited about it. I created a weighted factor model to figure out which projects or learning to take on first, and I welcome feedback on it. There's also a schedule tab for how I'm planning to spend my time this year, and a template if anyone wishes to use this spreadsheet themselves. I got feedback from my 80,000 Hours advisor to get involved in EA communities more often. I also want to learn more publicly, be it via forums or by blogging. This somewhat unstructured dumping of my thoughts is a first step toward that.
In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA adjacent. I am extremely against embezzling billions of dollars from people, and FTX was a good reminder of the importance of "don't do evil things for galaxy-brained altruistic reasons". But this has nothing to do with whether or not I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good and then actually act on them". And there are many people in or influenced by the EA community whom I respect and who I think do good and important work.
80,000 Hours has completed its spin-out and has new boards

We're pleased to announce that 80,000 Hours has officially completed its spin-out from Effective Ventures and is now operating as an independent organisation. We've established two entities with the following board members:

80,000 Hours Limited (a nonprofit entity where our core operations live):
* Konstantin Sietzy — Deputy Director of Talent and Operations at UK AISI
* Alex Lawsen — Senior Program Associate at Open Philanthropy and former 80,000 Hours Advising Manager
* Anna Weldon — COO at the Centre for Effective Altruism and former EV board member
* Joshua Rosenberg — CEO of the Forecasting Research Institute
* Emma Abele — Former CEO of METR

80,000 Hours Foundation:
* Susan Shi — General counsel at EV, soon to move to CEA
* Katie Hearsum — COO at Longview Philanthropy
* Anna Weldon — An overlapping member of both boards

Within our mission of helping people use their careers to solve the world's most pressing problems, we've recently sharpened our focus on careers that can help make AI go well. This organizational change won't affect our core work or programs in any significant way, though we're excited about the strategic guidance our new boards will provide and the greater operational flexibility we'll have going forward as we address these crucial challenges. See our blog post announcing our completed spin-out here.
I'm organizing an EA Summit in Vancouver, BC, this fall, and I'm looking for ways for our attendees to come away from the event with opportunities to look forward to. Most of our attendees will have Canadian but not US work authorization. Anyone willing to meet potential hires, mentees, research associates, funding applicants, etc., please get in touch!
I was extremely disappointed to see this tweet from Liron Shapira revealing that the Centre for AI Safety fired a recent hire, John Sherman, for stating that members of the public would attempt to destroy AI labs if they understood the magnitude of AI risk. Capitulating to this sort of pressure campaign is not the right path for EA, which should have a focus on seeking the truth rather than playing along with social-status games, and is not even the right path for PR (it makes you look like you think the campaigners have valid points, which in this case is not true). This makes me think less of CAIS' decision-makers.
I used to feel so strongly about effective altruism. But my heart isn't in it anymore. I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on. But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.

These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:

- The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI's takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk has dominated the EA Forum for a while. It feels like AGI is what the movement is mostly about now, so now I just disagree with most of what effective altruism is about.
- The extent to which LessWrong culture has taken over or "colonized" effective altruism culture is such a bummer. I know there's been at least a bit of overlap for a long time, but ten years ago it felt like effective altruism had its own, unique culture, and nowadays it feels like the LessWrong culture has almost completely taken over. I have never felt good about LessWrong or "rationalism", and the more knowledge and experience of it I've gained, the more I've accumulated a sense of repugnance, horror, and anger toward that culture and ideology. I hate to see that become what effective altruism is like.
- The stori
I wanted to share some insights from my reflection on my mistakes around attraction/power dynamics — especially something about the shape of the blindspots I had. My hope is that this might help to avert cases of other people causing harm in similar ways. I don’t know for sure how helpful this will be; and I’m not making a bid for people to read it (I understand if people prefer not to hear more from me on this); but for those who want to look, I’ve put a couple of pages of material here.

Posts in this space are about

Community
Effective altruism lifestyle