If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)

If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)

If you're new to effective altruism, consider checking out the Motivation Series (a collection of classic articles on EA). You can also learn more about how the Forum works on this page.

I'm a 3rd year undergraduate double majoring in electrical engineering and economics at University of California Davis (about 2 hours from the San Francisco Bay Area).

I've been thinking about effective altruism concepts all my life, but I only discovered the community in December 2020. After reading many EA articles and double-checking with my economics professor, today I've decided to switch my post-graduation career plans from a master's degree in electrical engineering to a PhD in economics so I can work on global priorities research.

[This comment is no longer endorsed by its author]

That's awesome, congratulations!

Hi everybody! A slippery slope from 80,000 hours podcasts has led me to this lovely community. Probably like a lot of people here I've been EA-sympathetic for a long time before realising that the EA community was a thing. 

I'm not in a very 'EA-adjacent' job (if that's the term!) at the moment and am starting to think about areas in which I might enjoy working, where I would value the work being done and feel that I was really contributing value myself. 

Very excited to start my journey of engaging more directly with all of you and the discussions being had here :)

Welcome to the EA Forum!

Thank you Khorton!

Welcome Lowry! I'm Brian from EA Philippines. I love 80,000 Hours' content and podcast too. I was in a similar position to you last year, in that I was in a non-EA job and wanted to see how I could have a more EA-aligned and more enjoyable career. Thankfully I now do EA-aligned work full-time (mainly through EA Philippines), but it does take a while before that can happen for a lot of people. And I think if people broaden the scope of what they consider to be "EA-adjacent" jobs, it's much more likely they'll get one (because we have a lot of EAs and too few jobs at EA orgs).

You or others new to the EA community can feel free to message me about your cause interests, skills, and career interests, and I may have useful advice to give or resources/organizations to point you to. I've read up a lot on EA and its various concepts and causes, such as global health and development, animal welfare, and some longtermist causes, so I can give some advice/resources there. :)

Please tag your posts! I've seen several new forum posts with no tags, and I add any tags I consider relevant. But it would be better if everyone added tags when they publish new posts. Also, please add tags to any post that you think is missing them.

Hi everyone! I have been interested in EA, and adjacent fields, for little over a year now. So I thought it was time to register here.

I work in journalism, although not always on EA-related topics. As a side-project I also run a little newsletter about, among other issues, x-risk.

So hope being here can help advance my thinking, and maybe even support me in doing more good.

Welcome to the Forum! 

I hope your experience here is good; let me know if there's anything I can help with. (I'm the lead moderator and work for CEA, which runs the site.)

Welcome to the Forum Felix! It's good to have another journalist interested in EA (and hopefully writing about it in an informed way). I think there's relatively few of you. 

It's cool that you have a newsletter on x-risk. Maybe you could consider cross-posting an upcoming or previous piece of yours on this Forum? The interview you had with Tom Chivers might be interesting to people interested in AI Safety here.

You can include a short summary of the post and why you think people might want to read it when cross-posting. Just a suggestion in case you'd find more subscribers or readers here. You could also link to your newsletter and include a short bio of yourself in your Forum profile so people can find it that way. :)

Thank you for the welcome, and the encouragement!

I was already thinking about re-posting some interviews here, but was a bit worried about too much self-promotion, so glad you suggested it :)

No problem! Posting a few (1-3) interviews/issues first should be fine.

Hey all!

I'm studying for a bachelor's in Philosophy & Economics at Humboldt-Universität zu Berlin. I first read Singer's essay "Famine, Affluence, and Morality" in school and was impressed with the shallow pond argument. That was the start of my interest in practical ethics, and the EA movement aligns nicely with most of my views.

I'm still quite unsure about my future (apart from wanting to do good) and am currently struggling with procrastination and a missing sense of direction. Consequently, I'm especially interested in meeting EAs who are dealing with the same issues. One idea for dealing with procrastination is a pen pal, so if you're interested, feel free to message me :)

I have been lurking on this forum for a week and you all seem like really nice, level-headed people, who enjoy a good debate, so I'm very happy to join! 

Hi Kottsiek, welcome to the Forum! Have you connected with someone from EA Berlin, such as Manuel Allgaier? Here's their website: https://ea-berlin.org/. You can also reach out to NEAD, which connects people interested in EA in Germany: https://ealokal.de/. You will likely be able to connect with EAs with a similar background, or at least in the same region/country as you, through EA Berlin or NEAD.

Regarding struggling with procrastination, I found Complice's Goal-Crafting Intensive workshop useful. It's a 5-hour event where you listen to and work through content with others, to help you set and prioritize goals for yourself and come up with strategies to achieve them, among other topics. It only costs a minimum of $25. The next session isn't until April, but you can already book a class ahead of time, and they can give you content to work through in the meantime.

You might also like to read this EA Forum post about finding an accountability buddy to meet or chat with every week, to help you overcome procrastination: https://forum.effectivealtruism.org/posts/2RvpoWWQDiFpptpam/accountability-buddies-a-proposed-system-1. In the Complice event, they invite attendees to find an accountability buddy at the end.

You can also join the EA Life Coaching Exchange Facebook group and try to find an accountability buddy there. A couple of people in EA Philippines have found an accountability buddy/group through it. Hope this helps!

Thank you for the links. I signed up for the workshop. 

No problem!

Hello everyone!

I'm a 2nd year Sociology & Social Anthropology student at the University of Edinburgh. I've joined this forum because some of my colleagues and I are interested in learning about what various participants in the EA 'movement' think about 'effectiveness' and the organisation as a whole.

We're doing ethnographic research, which means taking part in some activities alongside you, while talking to you directly in events, on forums, and in interviews. If you'd be interested in talking to me about your experiences and thoughts about effective altruism, please feel free to send me a private message and we can find a time to chat!

Hi Kate, welcome to the forum! Great to see someone with a sociology background in EA - there are relatively few of you in the movement. I'm glad that you're doing ethnographic research on people in the movement. I was a UI/UX designer, so I've done some user research and qualitative interviews myself.

Another EA, Vaidehi Agarwalla, did something similar: she interviewed people in EA, particularly those who were looking to make a career transition or had just made one. Her undergraduate degree was also in sociology. You may be interested to read her sequence "Towards A Sociological Model of EA Movement Building", which I think is still unfinished but already has 2 articles in it.

I was wondering if you were planning to focus on a specific topic or demographic within EA for your ethnographic research? People in EA and their interests can be quite varied, so it might be worth scoping the research down rather than asking to interview anyone in the movement. Just my two cents!

Also, if you haven't seen it yet, 80,000 Hours has a list here of research topics that people with a background in sociology could work on. You could consider researching one of these topics as a side project or uni project in the future.

Also, if you're interested in biosecurity, David Manheim had some biosecurity project ideas for people with a sociology/anthropology background. :)

Hello, if you experience #low-impact-angst, please join this slack. We currently have 7 tech/programmer-type humans that met at EAxVirtual last year. Come hang out! :)

Trying to figure out a career path.... Ahhhhh. There's a career plan worksheet, and it really needs some feedback. Please comment if giving feedback on a career plan sounds fun. Thanks!

I definitely find this feeling relatable from my own career planning!

Inspired in part by your similar comment on another post, I've now made an open thread on the Forum for people to request and/or provide such feedback. And:

To get things going, I commit to reading and providing some feedback on at least 2 pages' worth of the documents from each of the first 5 people who comment to request feedback. (I might do more; I'll see how long this takes me.)

I'm pretty certain that some people on this forum get 2 karma on their comments immediately on posting them. Is this a thing?

I realise this is a petty and unimportant thing to think about, but I am slightly curious as to what's going on here.

I'm pretty sure the Forum uses the same karma vote-power as LessWrong.

Your observation is correct. How much karma your comment starts with depends on the amount of karma you have - unfortunately I don't know the minimum required to start off with 2 karma. The more karma you have, the weightier your strong upvotes become as well (mine are worth 7 karma; before I hit 2500 karma it was 6).

Here is the relevant section of the code:

export const userSmallVotePower = (karma: number, multiplier: number) => {
  if (karma >= 1000) { return 2 * multiplier }
  return 1 * multiplier
}

export const userBigVotePower = (karma: number, multiplier: number) => {
  if (karma >= 500000) { return 16 * multiplier } // Thousand year old vampire
  if (karma >= 250000) { return 15 * multiplier }
  if (karma >= 175000) { return 14 * multiplier }
  if (karma >= 100000) { return 13 * multiplier }
  if (karma >= 75000) { return 12 * multiplier }
  if (karma >= 50000) { return 11 * multiplier }
  if (karma >= 25000) { return 10 * multiplier }
  if (karma >= 10000) { return 9 * multiplier }
  if (karma >= 5000) { return 8 * multiplier }
  if (karma >= 2500) { return 7 * multiplier }
  if (karma >= 1000) { return 6 * multiplier }
  if (karma >= 500) { return 5 * multiplier }
  if (karma >= 250) { return 4 * multiplier }
  if (karma >= 100) { return 3 * multiplier }
  if (karma >= 10) { return 2 * multiplier }
  return 1 * multiplier
}

In other words, you get 2 small-vote power at 1000 karma, and you can look at the numbers above to see the multipliers for strong-votes.

What's multiplier?

And why is it equal to 1?

It's sometimes 1 (for upvotes) and sometimes -1 (for downvotes). Implementing it as a free variable was a bit easier than implementing it as a boolean, so we did that.

Ah, well you learn something new every day, thanks.

The size of your weak upvotes is also affected by your total karma, just more slowly. Every post starts with one weak upvote from its author.
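To make the thresholds concrete, here is a small sketch (not the actual Forum code, just a restatement of the same rules with a lookup table) showing how the multiplier flips the sign for downvotes:

```typescript
// Sketch restating the vote-power rules quoted above as a threshold table.
// Same numbers as the quoted code; not the real implementation.
const bigVoteThresholds: Array<[number, number]> = [
  [500000, 16], [250000, 15], [175000, 14], [100000, 13],
  [75000, 12], [50000, 11], [25000, 10], [10000, 9],
  [5000, 8], [2500, 7], [1000, 6], [500, 5],
  [250, 4], [100, 3], [10, 2],
];

// Regular (weak) votes: worth 2 once you reach 1000 karma, else 1.
function smallVotePower(karma: number, multiplier: number): number {
  return (karma >= 1000 ? 2 : 1) * multiplier;
}

// Strong votes: find the highest threshold the user's karma clears.
function bigVotePower(karma: number, multiplier: number): number {
  for (const [threshold, power] of bigVoteThresholds) {
    if (karma >= threshold) return power * multiplier;
  }
  return 1 * multiplier;
}

// A user with 2600 karma clears the 2500 threshold: strong upvote is +7,
// strong downvote is -7 (multiplier is 1 for up, -1 for down),
// and their weak upvote is worth 2 since 2600 >= 1000.
console.log(bigVotePower(2600, 1));   // 7
console.log(bigVotePower(2600, -1));  // -7
console.log(smallVotePower(2600, 1)); // 2
```

This matches the numbers quoted earlier in the thread: crossing 2500 karma bumps strong votes from 6 to 7, in either direction.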

Would a Discord server work better? Discord is a community platform that is easy to download and maintain, with individual chats, group forums, and voice channels for all means of communication. With enough support, this could be set up quickly. Please upvote if this sounds useful, and depending on support, a link will be posted on this post shortly. Keep in mind this Discord server could be used for all things EA: connecting individuals and providing an easy place to share documents and stories. Please provide feedback!

There are multiple Discord servers with some degree of EA activity. The biggest I'm aware of is "EA Corner" (invite link), which is quite active. Thanks for the reminder to add that to our "useful links" post!

The EA Forum is very different from what Discord can accomplish; we want this to be a place where useful posts and discussions are available for decades to come -- a record of EA intellectual progress, as well as a community space for long-form discussion. Discord is great for live chat, but very poor for archiving material or crafting a "body of work".

(These open threads are the sort of thing one could replicate pretty well on Discord, but part of why they exist is for people to say hello as they enter the Forum community, so hosting them on a totally different platform would defeat the purpose.)

Can you embed a YouTube video in the EA Forum? If so, how?

Try pasting in a YouTube link. Note that this doesn't work if you've enabled the Markdown editor in your settings.

Ah... I prefer to use the Markdown editor, but I could switch to the rich text editor for this post.
