If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)

If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)

If you're new to effective altruism, consider checking out the Motivation Series (a collection of classic articles on EA). You can also learn more about how the Forum works on this page.

I'm a 3rd-year undergraduate double majoring in electrical engineering and economics at the University of California, Davis (about 2 hours from the San Francisco Bay Area).

I've been thinking about effective altruism concepts all my life, but just discovered the community in December 2020. After reading many EA articles and double-checking with my economics professor, today I decided to switch my post-graduation career plans from a master's degree in electrical engineering to a PhD in economics so I can work on global priorities research.

[This comment is no longer endorsed by its author]

That's awesome, congratulations!

Hi everybody! A slippery slope from 80,000 Hours podcasts has led me to this lovely community. Probably like a lot of people here, I'd been EA-sympathetic for a long time before realising that the EA community was a thing.

I'm not in a very 'EA-adjacent' job (if that's the term!) at the moment, and I'm starting to think about areas where I might enjoy working, would value the work being done, and would feel I was really contributing value myself.

Very excited to start my journey of engaging more directly with all of you and the discussions being had here :)

Welcome to the EA Forum!

Thank you Khorton!

Welcome Lowry! I'm Brian from EA Philippines. I love 80,000 Hours' content and podcast too. I was in a similar position to you last year, in that I was in a non-EA job and wanted to see how I could have a more EA-aligned and more enjoyable career. Thankfully I now do EA-aligned work full-time (mainly through EA Philippines), but it does take a while before that can happen for a lot of people. And I think if people broaden the scope of what they consider to be "EA-adjacent" jobs, it's much more likely they'll get one (because we have a lot of EAs and too few jobs at EA orgs).

You or others new to the EA community can feel free to message me about your cause interests, skills, and career interests, and I may have useful advice to give or resources/organizations to point you to. I've read up a lot on EA and its various concepts and causes, such as global health and development, animal welfare, and some longtermist causes, so I can give some advice/resources there. :)

Please tag your posts! I've seen several new forum posts with no tags, and I add any tags I consider relevant. But it would be better if everyone added tags when they publish new posts. Also, please add tags to any post that you think is missing them.

Hi everyone! I have been interested in EA, and adjacent fields, for a little over a year now. So I thought it was time to register here.

I work in journalism, although not always on EA-related topics. As a side-project I also run a little newsletter about, among other issues, x-risk.

So I hope being here can help advance my thinking, and maybe even support me in doing more good.

Welcome to the Forum! 

I hope your experience here is good; let me know if there's anything I can help with. (I'm the lead moderator and work for CEA, which runs the site.)

Welcome to the Forum, Felix! It's good to have another journalist interested in EA (and hopefully writing about it in an informed way). I think there are relatively few of you.

It's cool that you have a newsletter on x-risk. Maybe you could consider cross-posting some of your upcoming or previous writing on this Forum? The interview you did with Tom Chivers might be interesting to people here who are interested in AI Safety.

You can include a short summary of the post and why you think people might want to read it when cross-posting. Just a suggestion in case you'd find more subscribers or readers here. You could also link to your newsletter and include a short bio of yourself in your Forum profile so people can find it that way. :)

Thank you for the welcome, and the encouragement!

I was already thinking about re-posting some interviews here, but was a bit worried about too much self-promotion, so I'm glad you suggested it :)

No problem! Posting a few (1-3) interviews/issues first should be fine.

Hey all!

I'm studying for a bachelor's in Philosophy & Economics at Humboldt-Universität zu Berlin. I first read Singer's essay "Famine, Affluence, and Morality" in school and was impressed with the shallow pond argument. That was the start of my interest in practical ethics, and the EA movement aligns nicely with most of my views.

I'm still quite unsure about my future (apart from wanting to do good) and am currently struggling with procrastination and a missing sense of direction. Consequently, I'm especially interested in meeting EAs who are dealing with the same issues. One idea for dealing with procrastination is a pen pal, so if you're interested, feel free to message me :)

I have been lurking on this forum for a week and you all seem like really nice, level-headed people who enjoy a good debate, so I'm very happy to join!

Hi Kottsiek, welcome to the Forum! Have you connected with someone from EA Berlin, such as Manuel Allgaier? Here's their website: https://ea-berlin.org/. You can also reach out to NEAD, which connects people interested in EA in Germany: https://ealokal.de/. You will likely be able to connect with EAs with a similar background, or at least in the same region/country as you, through EA Berlin or NEAD.

Regarding struggling with procrastination, I found Complice's Goal-Crafting Intensive workshop useful. It's a 5-hour event where you listen to and work through content with others, to help you set and prioritize goals for yourself and come up with strategies to achieve them, among other topics. It costs a minimum of just $25. The next session isn't until April, but you can already book a spot ahead of time, and they can give you content to work through in advance.

You might also like to read this EA Forum post about finding an accountability buddy to meet or chat with every week, to help you overcome procrastination: https://forum.effectivealtruism.org/posts/2RvpoWWQDiFpptpam/accountability-buddies-a-proposed-system-1. In the Complice event, they invite attendees to find an accountability buddy at the end.

You can also join the EA Life Coaching Exchange Facebook group and try to find an accountability buddy there. A couple of people in EA Philippines have found an accountability buddy/group through it. Hope this helps!

Thank you for the links. I signed up for the workshop. 

No problem!

Hello everyone!

I'm a 2nd year Sociology & Social Anthropology student at the University of Edinburgh. I've joined this forum as some of my colleagues and I are interested in learning about what various participants in the EA 'movement' think about 'effectiveness' and the organisation as a whole.

We're doing ethnographic research, which means taking part in some activities alongside you, while talking to you directly at events, on forums, and in interviews. If you'd be interested in talking to me about your experiences and thoughts about effective altruism, please feel free to send me a private message and we can find a time to chat!

Hi Kate, welcome to the forum! Great to see someone with a sociology background in EA - there are relatively few of you in the movement. I'm glad that you're doing ethnographic research on people in the movement. I used to be a UI/UX designer, so I've done some user research and qualitative interviews myself.

Another EA, Vaidehi Agarwalla, did something similar before: she interviewed people in EA, particularly those who were looking to make a career transition or had just made one. Her undergraduate degree was also in sociology. You may be interested to read her sequence on "Towards A Sociological Model of EA Movement Building", which I think is still unfinished but already has 2 articles in it.

I was wondering if you were planning to focus on a specific topic or demographic within EA for your ethnographic research. That might be good to do, since people in EA and their interests can be quite varied, so it might be worth scoping the research down rather than asking to interview just anyone in the movement. Just my two cents!

Also, if you haven't seen it yet, 80,000 Hours has a list here of research topics that people with a background in sociology can work on. You could consider researching one of these topics as a side project or uni project in the future.

Also, if you're interested in biosecurity, David Manheim had some biosecurity project ideas for people with a sociology/anthropology background. :)

Hello, if you experience #low-impact-angst, please join this Slack. We currently have 7 tech/programmer-type humans who met at EAGxVirtual last year. Come hang out! :)

Trying to figure out a career path... Ahhhhh. There's a career plan worksheet, and it really needs some feedback. Please comment if giving feedback on a career plan sounds fun. Thanks!

I definitely find this feeling relatable from my own career planning!

Inspired in part by your similar comment on another post, I've now made an open thread on the Forum for people to request and/or provide such feedback. And:

To get things going, I commit to reading and providing some feedback on at least 2 pages' worth of the documents from each of the first 5 people who comment to request feedback. (I might do more; I'll see how long this takes me.)

I'm pretty certain that some people on this forum get 2 karma on their comments immediately on posting them. Is this a thing?

I realise this is a petty and unimportant thing to think about, but I am slightly curious as to what's going on here.

I'm pretty sure the Forum uses the same karma vote-power system as LessWrong.

Your observation is correct. How much karma you start off with depends on the amount of karma you have - unfortunately, I don't know the minimum required to start off with 2 karma. The more karma you have, the weightier your strong upvotes become as well (mine are worth 7 karma; before I hit 2500 karma they were worth 6).

Here is the relevant section of the code: 

export const userSmallVotePower = (karma: number, multiplier: number) => {
  // Weak (normal) votes: worth 1 point, doubling to 2 at 1000 karma
  if (karma >= 1000) { return 2 * multiplier }
  return 1 * multiplier
}

export const userBigVotePower = (karma: number, multiplier: number) => {
  // Strong votes: worth more at each karma threshold
  if (karma >= 500000) { return 16 * multiplier } // Thousand year old vampire
  if (karma >= 250000) { return 15 * multiplier }
  if (karma >= 175000) { return 14 * multiplier }
  if (karma >= 100000) { return 13 * multiplier }
  if (karma >= 75000) { return 12 * multiplier }
  if (karma >= 50000) { return 11 * multiplier }
  if (karma >= 25000) { return 10 * multiplier }
  if (karma >= 10000) { return 9 * multiplier }
  if (karma >= 5000) { return 8 * multiplier }
  if (karma >= 2500) { return 7 * multiplier }
  if (karma >= 1000) { return 6 * multiplier }
  if (karma >= 500) { return 5 * multiplier }
  if (karma >= 250) { return 4 * multiplier }
  if (karma >= 100) { return 3 * multiplier }
  if (karma >= 10) { return 2 * multiplier }
  return 1 * multiplier
}

In other words, you get 2 small-vote power at 1000 karma, and you can look at the numbers above to see the multipliers for strong-votes.
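
As a quick illustration, here's what those functions return for a few karma levels (a sketch that assumes the two functions above are in scope):

userSmallVotePower(999, 1)   // => 1: weak upvotes are worth 1 point below 1000 karma
userSmallVotePower(1000, 1)  // => 2: and 2 points from 1000 karma onwards

userBigVotePower(2499, 1)    // => 6: strong upvote just below the 2500-karma step
userBigVotePower(2500, 1)    // => 7: strong upvote at 2500 karma, matching the comment above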

What's multiplier?

And why is it equal to 1?

It's sometimes 1 (for upvotes) and sometimes -1 (for downvotes). Implementing it as a free variable was a bit easier than implementing it as a boolean, so we did that.
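
So, for example, a hypothetical 2500-karma user's strong votes would come out as (again assuming the functions above):

userBigVotePower(2500, 1)    // => 7: strong upvote
userBigVotePower(2500, -1)   // => -7: strong downvote, via multiplier = -1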

Ah, well you learn something new every day, thanks.

The size of your weak upvotes is also affected by your total karma, just more slowly. Every post starts with one weak upvote from its author.

Would a Discord server work better? Discord is a community platform that is easy to download and maintain, with individual chats, group forums, and voice channels for all means of communication. With enough support, this can be set up quickly. Please upvote if this sounds useful, and depending on support, a link will be posted on this post shortly. Keep in mind this Discord server could be used for all things EA, besides connecting individuals and providing an easy place to share documents and stories. Please provide feedback!

There are multiple Discord servers with some degree of EA activity. The biggest I'm aware of is "EA Corner" (invite link), which is quite active. Thanks for the reminder to add that to our "useful links" post!

The EA Forum serves a very different purpose from what Discord can accomplish; we want this to be a place where useful posts and discussions are available for decades to come -- a record of EA intellectual progress, as well as a community space for long-form discussion. Discord is great for live chat, but very poor for archiving material or crafting a "body of work".

(These open threads are the sort of thing one could replicate pretty well on Discord, but part of why they exist is for people to say hello as they enter the Forum community, so hosting them on a totally different platform would defeat the purpose.)

Can you embed a YouTube video in the EA Forum? If so, how?

Try pasting in a YouTube link. Note that this doesn't work if you've enabled the Markdown editor in your settings.

Ah... I prefer to use the Markdown editor, but I could switch to the rich text editor for this post.
