If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)


Open threads are also a place to share good news, big or small. See this post for ideas.


Just trying to get myself comfortable with posting on the forum, since I'm new to it.

I'm from Brazil (Rio Grande do Sul), I consider myself deeply concerned about ethics, and I believe there are analytical methods that can get us closer not only to ethical truths (be they objective or not) but also to the methods whereby we may abide by those truths. 

I have a medical degree and I'm currently taking an online MicroMasters in Statistics and Data Science at MITx. I plan to take part in public health research, though I'm pretty much open to change gears if presented with sufficient evidence to do so.

Thank you all for supporting the EA community!

Hi there!  I'm new to the forum and thought I'd post here just to break the ice and get comfortable posting on the forum.  It's great to meet all of you!  Looking forward to interesting conversations!

Hi! I joined the forum recently, and wanted to introduce myself. 

I am a Bachelor's student in Computer Science and Economics in the Eastern US. Throughout the years, I attempted to introduce effective altruism to my friends and classmates - when appropriate. The concept seemed to resonate especially well with students in engineering and finance, but ultimately the efforts rarely resulted in concrete changes. 

That problem got irreversibly stuck in my mind: Why do these people, who are good and can intellectually see the net benefits of EA, find it difficult to engage with? Was it because we are students and stereotypically dislike spending any amount of money?

From what those people have done and said, the problem might lie in the perceived inaccessibility of EA (for example, the added research step of ensuring donations are used effectively discouraged many from taking action) and/or the perceived emotional distance of the results (for example, using evidence and logic to discard some altruistic missions in favor of others may take away from the emotional component of altruism, which seems to be the more traditional draw).

I don't know why EA is not more prevalent or 'easy' to get into. I think it should be. But maybe it was my approach that was faulty; I have a lot to learn. So, I am here to learn more and do better, effectively. 

Hi there!

A quick thought about your quandary: I have been very puzzled by this throughout my time as an EA as well, and my best model for people who 1) intellectually understand EA but 2) don't act on it is that they are mostly signalling, which is super cheap to do. Taking real action (e.g. donating your hard-earned money that you could have used on yourself) is much more costly.

Experience has also borne the following out for me: for people who don't intellectually (I'd go so far as to say intuitively) get EA, I think there is (almost) no hope of getting them on board. It seems deeply dispositional to me. This lends itself to a strategy that tries to uncover existing EAs who have never heard of it, rather than converting those who have but show resistance.

Just my two cents!

The barrier to action is definitely a big thing. When I was a student, I avoided donating money. I told myself I'd start donating when I got a job and started making good money. Then, when I did get a job, I procrastinated for another two years. 

The thing that convinced me to finally do it was joining a different online group where I tried to do a good deed every day. When I got that down, I got into the habit of doing good, which made me rethink EA. After some thought, I committed to try giving 10% just for a year. A month later, I made the Giving What We Can pledge. After I'd made the commitment I realised it wasn't that hard, and I felt a lot better about myself afterwards.

If I could go back in time, I think what I'd ask my past self to do is not to commit to donating 10%, but to commit to donating just 1% for a year. 1% is nothing, and anyone can do that - but once you start intuitively understanding that A) you feel better donating this money, and B) you really don't miss it, it's a lot easier to scale up. Going from 0 to 1 is a bigger step than from 1 to 10.

I still don't have a full solution, but I think that might be a place to begin.

Medium-time lurker, first-time commenter (I think)! I'll be posting a piece tomorrow or Friday about whether effective altruists should sign up for Oxford's COVID challenge study. Hoping to start a lively discussion!

Another great initiative in trying to make the Forum more friendly. Congrats!

I'm from São Paulo, Brazil. I joined the EA community in Dec 2016 and am trying my best to help the community grow well.

Hiiii!  I was drawn on to here by the creative writing contest! Hoping to participate... except the thing I'm trying to write will. not. behave. (It's driving me bananas.) That said, I love writing fiction. SOMETIMES it flows. And sometimes it lets me grapple with the way Reality is tangled up with so much ambiguity.

I end up bumping into EA and EA-interested people on Discords! ...and I heard about the creative writing contest on a really fun Discord server I'm on.
