I lead Effective Altruism Lund in southern Sweden, while wrapping up my M.Sc. in Engineering Physics specializing in machine learning. I'm a social team player who likes high ceilings and big picture work. Scared of AI, intrigued by biorisk, hopeful about animal welfare.
My interests outside of EA, in hieroglyphs: 🎸🧙🏼♂️🌐💪🏼🎉📚👾🎮✍🏼🛹
Hi, thanks to the forum team for running this and thank you to all advisors!
In December I'll get my M.Sc. in Engineering Physics, where I've gone heavy on statistics, machine learning, and mathematical modeling. I've also run several student organizations, so I'm an adept administrator and leader. I want to leverage these skills to improve alternative proteins.
Core considerations currently:
I'm also very thankful for any general advice or considerations I might not have thought of.
I see a huge opportunity here for university group organizers to connect promising group members with (AIM-incubated or other) charities that could use volunteers. It builds motivation, connections, and a CV for the student, even if it's not ideal for the charity (for the same reasons that volunteering rarely is). If we can make it mutually beneficial, then all the better!
I don't see the report considering the growth of money in recipients' pockets. It treats giving like throwing money into a black hole, not as another investment with returns.
To put it concretely, let's say person A is in the global top 5% income-wise and person B is below the poverty line. Person A (most on this forum) can then choose to invest their money, grow it, and give it away at death.
Let's ignore the risk of value drift and say you manage to grow it 7% annually. That's nice, but giving it away instead means person B can
All of these things make the money grow in person B's pockets as well. My prior (medium epistemic status) is that this growth trumps the 7% an index fund can offer. I argue that after 60-odd years of compounding (most EAs are young and healthy), person B ends up with more than they would have if they'd received the money as one lump sum from person A at death.
So this model only considers the resources of person A at death, when what we really care about is the resources of persons A and B combined.
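To make the comparison concrete, here's a minimal sketch. The 7% figure and the 60-year horizon come from the argument above, but the donation size and the recipient-side growth rate are purely illustrative assumptions, not claims about actual returns:

```python
# Illustrative comparison of "invest and give at death" vs "give now".
# The recipient_rate and donation amount are made-up assumptions for the sketch.

def compound(principal: float, annual_rate: float, years: int) -> float:
    """Value of `principal` after `years` of compounding at `annual_rate`."""
    return principal * (1 + annual_rate) ** years

donation = 10_000        # assumed lump sum (illustrative)
years = 60               # "60-odd years of compounding"
investor_rate = 0.07     # index-fund growth from the comment above
recipient_rate = 0.10    # hypothetical growth in person B's hands

give_at_death = compound(donation, investor_rate, years)   # person A invests, gives later
give_now = compound(donation, recipient_rate, years)        # person B's resources grow instead

print(f"Give at death: ${give_at_death:,.0f}")
print(f"Give now:      ${give_now:,.0f}")
# Over 60 years, whichever rate is higher dominates, so the whole question
# reduces to whether money grows faster in person B's pocket than in an index fund.
```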
Epistemic status: I'm a community builder with a technical background and a surface-level understanding of alignment techniques from BlueDot.
This post is well-written and the core takeaway is important. I’d add one caveat: starting from weak priors should increase our urgency to seek out evidence, not delay action. Once there's reasonable uncertainty that there's something morally salient there, I worry we’ll collectively shrug, defaulting to “just a tool” or retreating behind epistemic modesty. We can't let epistemic caution turn into neglect.
One concrete intervention is Forethought’s proposal that future LLMs be able to end conversations they're uncomfortable with. I find this a plausible and robust way to fulfill potential preferences. We need more proposals like that.
On another note, please consider your use of adjectives.
To the extent that harshness is an EA norm, I think it's inherited from rationalist culture. In my experience with spaces like LessWrong, quite jarring critiques are fairly normal even for trivial things (e.g. “that argument is stupid”). There, bluntness is viewed as efficiency, getting bad ideas off the table faster.
EA spaces are optimized for a different goal, and tone matters for that goal. We need people to feel welcomed, encouraged, and inspired to contribute; not like they’re auditioning for a role in a debate team. A good measure of how well we're doing on this is the fear people have of posting on the forum.
I haven’t read titotal’s post, so I won’t comment on that case, but I’ve definitely noticed the broader pattern Alfredo is pointing out. And I think we should be intentional about whether it serves the kind of community we want to build.
I'm interested in hearing more about the cases you found for and against EA ideas/arguments applying without utilitarianism. I'm personally very much a consequentialist, though not necessarily fully utilitarian, so I'm curious both for myself and as a community builder. I'm not a philosopher, so my footing is probably much less certain than yours.
The first quote you mention sounds more like a dog whistle to me. I actually think it's great if we can "weaponize capitalist engines" against the world's most pressing problems. But if you hate capitalism, it sounds insidious.
The rest I agree is uncharitable. Like, surely you wouldn't come out of the shallow pond feeling moral guilt, you'd be ecstatic that you just saved a child! To me, Singer's thought experiment always implied I should feel the same way about donations.
EA’s goal is impact, not growth for its own sake. Because cost-effectiveness can vary by 100x or more, shifting one person’s career from a typical path to a highly impactful one is equivalent to adding a hundred contributors. I agree with the EA stance that the former is often more feasible.
This doesn’t fully address why we maintain a soft tone outwardly, but it does imply we could afford to be a bit less soft inwardly. I predict that SMA will surpass EA in numbers, while EA will be ahead of SMA in terms of impact.
I grew up worrying a lot about the harms of our everyday consumption, and for a few years I'd just given up. I decided that the best we could do was give some value to the relatively tiny pool of people who meet us. So naturally, realizing that I could have a net positive effect on the world was huge! I'm still riding that high and I think we should pause more often to appreciate that. It acts as a strong source of satisfaction for me even when others work harder or donate more than I do.
This post also reminds me of this 80k episode. I particularly enjoy the framing that you might feel sorrow about not being Ilya Sutskever, but you shouldn't feel guilt. You didn't choose to be born with your particular set of IQ, health factors, surroundings, etc., so why think that they're your fault?