This is a special post for quick takes by Eugenics-Adjacent. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Over the last few years, community members have gone to great lengths to assure people that EA did not favor deprioritizing the lives of people in poor third-world countries in order to preserve wealthier nations, for the purpose of fostering artificial intelligence that could one day save humanity.

This idea was popularized in a very influential founding document of longtermism, and every public-facing EA figure had to defend themselves against accusations of believing this. 

Now, however, with an EA as acting president, we are seeing aid to third-world countries systematically dismantled under an explicitly America-first agenda led by a tech entrepreneur in the AI space. 

How can we deny that this is what EA stands for? 

I deny that we have an EA as acting president. 

Which part? He's said it's his personal philosophy. And he's currently an unelected official making top-level executive decisions in our federal government. At least one of the young tech workers helping him feed foreign aid 'into the wood chipper' is also an avowed effective altruist.

acting president [...] unelected official

While Musk is influential, it wasn't clear you were talking about him until your reply.

"At least one of the young tech workers helping him feed foreign aid "into the wood chipper" is also an avowed effective altruist."

Can you provide a link for this? Not that I find it implausible, just curious. 

Merely listing EA under "Memetics adjacence" does not support the claim "is also an avowed effective altruist."

It's hard to say what "memetics adjacence" means. I take it to be the list of ideologies he subscribes to or feels an affinity with. 

There's also a Cole Killian EA Forum account with one comment from 2022. Looks like he's deleted things, though. I googled the post 'SBF, Pascal's Mugging, and a Proposed Solution' and found a dead link. It's on the Internet Archive, though; you can check it here.

Will Aldred (Moderator Comment)

The moderation team is issuing @Eugenics-Adjacent a 6-month ban for flamebait and trolling.

I’ll note that Eugenics-Adjacent’s posts and comments have been mostly about pushing against what they see as EA groupthink. In banning them, I do feel a twinge of “huh, I hope I’m not making the Forum more like an echo chamber.” However, there are tradeoffs at play. “Overrun by flamebait and trolling” seems to be the default end state for most internet spaces: the Forum moderation team is committed to fighting against this default.

All in all, we think the ratio of “good” EA criticism to more-heat-than-light criticism in Eugenics-Adjacent’s contributions is far too low. Additionally, at -220 karma (at the time of writing), Eugenics-Adjacent is one of the most downvoted users of all time—we take this as a clear indication that other users are finding their contributions unhelpful. If Eugenics-Adjacent returns to the Forum, we’ll expect to see significant improvement. I expect we’ll ban them indefinitely if anything like the above continues.

As a reminder, a ban applies to the person behind the account, not just to the particular account.

If anyone has questions or concerns, feel free to reach out or reply in this thread. If you think we’ve made a mistake, you can appeal.

"How can we deny that this is what EA stands for? "

Because most/all leaders would disavow it, including Nick Beckstead, who I imagine wrote the founding document you mean (indeed, he has already disavowed it), and we don't personally control Elon, whether or not he considers himself EA? And also, EAs, including some quite aggressively un-PC ones like Scott Alexander and Matthew Adelstein/Bentham's Bulldog, have been pushing back strongly against the aid cuts and the America First agenda behind them?

Having said that, it definitely reduced my opinion of Will MacAskill, or at least his political judgment, that he tried to help SBF get in on Elon's Twitter purchase, since I think Elon's fascist leanings were pretty obvious even at that point. And I agree we can ask whether EA ideas influence Musk in a bad direction, whether or not EAs themselves approve of the direction he is going in.
