Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the api) please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

Quick takes

EU opportunities for early-career EAs: quick overview from someone who applied broadly

I applied to several EU entry programmes to test the waters, and I wanted to share what worked, what didn't, and what I'm still uncertain about, hoping to get some insights.

Quick note: I'm a nurse, currently finishing a Master of Public Health, and trying to contribute as best I can to reducing biological risks. My specialisation is in Governance and Leadership in European Public Health, which explains my interest in EU career paths. I don't necessarily think the EU is the best option for everyone. I just happen to be exploring it seriously at the moment and wanted to share what I've learned in case it's useful to others.

⌨️ What I applied to & how it went

* Blue Book traineeship – got it (starting October at HERA.04, Emergency Office of DG HERA)
* European Committee of the Regions traineeship – rejected in pre-selection
* European Economic & Social Committee traineeship – same
* Eurofound traineeship – no response
* EMA traineeship (2 applications: Training Content and Vaccine Outreach) – no response
* Center for Democracy & Technology internship – no response
* Schuman traineeship (Parliament) – no response
* EFSA traineeship – interview but no feedback (I indicated HERA preference, so not surprised)

If anyone needed a reminder: rejection is normal and to be expected, not a sign of your inadequacy. It only takes one "yes."

📄 Key EA Forum posts that informed and inspired me

* "EAs interested in EU policy: Consider applying for the European Commission's Blue Book Traineeship"
* "What I learned from a week in the EU policy bubble" – excellent perspective on the EU policymaking environment

🔍 Where to find EU traineeships

All together here: 🔗 https://eu-careers.europa.eu/en/job-opportunities/traineeships?institution=All

Includes Blue Book, Schuman, and agency-specific roles (EMA, EFSA, ECDC...). Traineeships are just traineeships: don't underestimate what
I just saw that Season 3, Episode 9 of Leverage: Redemption ("The Poltergeist Job"), which came out on May 29, 2025, has an unfortunately very unflattering portrayal of "effective altruism". The main antagonist, Matt, CEO of Futurilogic, uses EA to justify horrific actions, including allowing firefighters to be injured when his company's algorithm throttles cell service during emergencies. He also literally murders people while claiming it's for the greater good. And if that's not enough, he's also laundering money for North Korea through crypto investments! Why would he do this? He explicitly invokes utilitarian reasoning ("Trolley Theory 101") to dismiss the harm he causes. And when wielding an axe to kill someone, Matt says: "This is altruism, Skylar! Whatever I need to do to save the world." But what's his cause area? Something about ending "global hunger and homelessness" through free internet access. Matt never articulates any real theory of change beyond "make money (and do crimes) → launch free internet → somehow save world." And of course the show depicts the EA tech executives at Futurilogic as being in a "polycule" with a "hive mind" mentality. Bummer.
We should shut down EA UK, change our mind

EA UK is hiring a new director, and if we don't find someone who can suggest a compelling strategy, shutting down is a likely outcome despite our having ~9 months of funding runway.

Over the last decade EA in the UK has been pretty successful: Loxbridge in particular has the highest number of people involved in EA, there are multiple EA-related organisations, and many people in government, tech, business, academia, media, etc. are positively inclined towards EA. Because of this success (not that we're claiming counterfactual credit), there is less low-hanging fruit for a national/city group to pick. For example:

* Conferences – EAG London and student summits are run by CEA
* Co-working – There are at least 3 different places to co-work (LEAH, LISA, AIM) for 100+ people, as well as many other orgs that have space for guests
* Student groups – A combination of Arcadia Impact and CEA
* Incubation of new organisations – AIM/CE
* Media outreach – Mainly done by the most relevant organisations/CEA

I'm not saying mission accomplished, but any case for EA-specific community building in the UK will have to rest on a good understanding of the existing landscape, plus ideas for what is missing and unlikely to be done by someone else.
calebp:
The flip side of “value drift” is that you might get to dramatically “better” values in a few years time and regret locking yourself into a path where you’re not able to fully capitalise on your improved values. 
Hot Take: Securing AI Labs could actually make things worse

There's a consensus view that stronger security at leading AI labs would be a good thing. It's not at all clear to me that this is the case. Consider the extremes:

In a maximally insecure world, where anyone can easily steal any model that gets trained, there's no profit or strategic/military advantage to be gained from doing the training, so nobody's incentivised to invest much to do it. We'd only get AGI if some sufficiently well-resourced group believed it would be good for everyone to have an AGI, and were willing to fund its development as philanthropy.

In a maximally secure world, where stealing trained models is impossible, whichever company/country got to AGI first could essentially dominate everyone else. In this world there's a huge incentive to invest and to race.

Of course, our world lies somewhere between these two. State actors almost certainly could steal models from any of the big 3, and potentially organised cybercriminals or rival companies could too, but most private individuals could not. Still, it seems that marginal steps towards a higher-security world make investment and racing more appealing, as the number of actors able to steal the products of your investment and compete with you for profits/power falls.

But I notice I am confused. The above reasoning predicts that nobody should be willing to make significant investments in developing AGI at current levels of cybersecurity, since if they succeeded their AGI would immediately be stolen by multiple governments (and possibly rival companies/cybercriminals), which would probably nullify any return on the investment. What I observe is OpenAI raising $40 billion in their last funding round, with the explicit goal of building AGI.

So now I have a question: given current levels of cybersecurity, why are investors willing to pour so much cash into building AGI? ...maybe it's the same reason various actors are willing to invest into building open