This is a special post for quick takes by Mahdi Complex. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

The question I’m currently trying to answer is: how did we get to a point where the main actor concerning itself with humanity’s survival, the plight of those most in need, technological utopianism, and humanity’s destiny in the cosmos is a small, eclectic network of academics, young professionals, and misfits? It’s not governments, it’s not international organizations, it’s not religious institutions. It’s a group of non-profits, primarily funded by a handful of eccentric billionaires. Am I the only one who thinks this is crazy and really calls for an explanation? How did the world come to be this way? Why was EA necessary in the first place?

I get the impression that almost all philosophical work happening within EA is very “first principles” and problem-oriented, and does not seem to engage at all with the history, ideas, and assumptions that underpin the institutions that are actually “in charge” today.

I’d like to know if anyone has opinions or resources to share on this matter.

My own theory is that the answer lies in the philosophical underpinnings of classical liberalism. My ideas are a bit weird and potentially controversial. Who within EA would be a good person for me to reach out to, and get feedback from?

The keyword is "civilizational inadequacy" and one cluster of failures here is Moloch. The best source is probably Inadequate Equilibria, but before reading the book you should read the book review.

I have read the book and the book review. They provide some great descriptive insights into what’s going on, but I’m more interested in a historical perspective: where did what might be called “consensus reality”, and the prevailing sense of the correct order of things, come from?

Check out the book “The Innovator’s Dilemma” and Clayton Christensen’s subsequent works. Inertia is real, and big institutions get stuck and can rarely put themselves out of business (Netflix is a notable exception; they baked their vision into the name back when they were simply a mail-order DVD service). Collapse by Jared Diamond is also worth checking out.

The fundamental problem we run into is that our innate desire to be part of the tribe makes us susceptible to going along with shared deceptions. See the Asch conformity experiments, and the great episode of Mind Field that replicated the results (it’s on YouTube). Connection > truth.

History is littered with examples of a shared falsehood embraced by the masses until a small group of dissidents eventually forces the shared understanding past the inflection point: a flat Earth, a geocentric universe, the luminiferous ether, the resistance that first met relativity and the germ theory of disease…

“Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.” —Margaret Mead
