This is a special post for quick takes by Clifford. Only they can create top-level comments.

What do you use as a guide to “common sense” or “everyday ethics”?

I think people in EA often recommend against using EA to guide your everyday decision-making. The standard advice is "don't sweat the small stuff": apply EA thinking to big life decisions like your career choice or annual donations. EA doesn't have much to say about, and isn't a great guide to, how you behave with your friends and family or in your community.

I’m curious, as a group of people who take ethics seriously, are there other frameworks or points of reference that you use to help you make decisions in your personal life?

I feel like Stoicism is a common one, and I've enjoyed learning about it. I suspect religion is another common answer for others. Are there more?

Something I try to use sometimes but not very consistently is something like:

"If this section of my life was a short story or a movie, would normal people think of me as a good character?"

Where by "a good character" I mean morally good/nice, and not interesting or complex.

This heuristic isn't perfect: it likely overweights act/omission distinctions and, as you imply, is a bad guide for big life decisions (having a direct impact on individuals is likely a bad compass for altruistic career choice; grant decisions should not be decided by who has the more compelling story). I also think everyday ethics overvalues niceness and undervalues some types of honesty. But it's a decent heuristic that can't go very wrong as a representation of broad societal norms/ethics, which are probably "good enough" for most everyday decisions.

  1. Abadar: People shouldn't regret trading with me.
  2. Keltham: Don't cause messes just because nobody is policing me, which causes an incentive to police me more.

I felt this thread needs some extra trolling, sry

I don't have any great answers for this, but my not-very-well-thought-out response is that virtue ethics tends to be helpful (such as the ideas of Stoicism, for which Massimo Pigliucci's book is a decent introduction). I think about the kind of person I want to be, how I want others to see me, and so on.

There are some ways in which the ideas of Stoicism overlap with Buddhism (mainly Buddhist psychology): awareness of our reactions, what is and isn't within our control, and recognizing the interconnectedness of things. However, since I know so little about Buddhism, I'm not sure to what extent my perception of this similarity is simply "western pop Buddhism." My impression is that much of "western pop Buddhism" is focused on being calm and being cognizant of your locus of control (Alan Watts, Jack Kornfield, and everything derived from Mindfulness-Based Stress Reduction[1]). As a white American guy who lived in China for a decade, I'm also very aware of, and cautious about, the stereotypes of westerners seeking "Eastern wisdom."

If I push myself to be a little more concrete: being considerate looms large in my mind, as does some type of striving for improvement. I generally find that moral philosophy hasn't been much help in the minutiae of day-to-day life:

  • how do I figure out how much responsibility I have for a professional failure I was involved in?
  • at what point is it justified to stop trying in a romantic relationship?
  • how honest should I be when I discover something that other people would want to know, but which would cause harm to me?
  • how should I balance loyalty to a friend with each individual being responsible for their own actions?
  • to what extent should I take ownership of someone choosing to react negatively to my words/actions?
  • how responsible am I for things that I couldn't really control/influence/impact?
  • what level of admiration/respect should I have for a very productive, intelligent, and knowledgeable person when I realize that they benefited from lots of external advantages (grew up in a wealthy neighborhood, attended a very well-funded school, received lots of gifts/scholarships, etc.)?
  1. ^

    McMindfulness: How Mindfulness Became the New Capitalist Spirituality was a pretty good critique of this.

Thanks Joseph! I’ll check out Massimo Pigliucci.

I like your concrete examples. Would be curious if other people have principles which guide how they act in response to those questions.

I'm coming back to this after more than a year because I recently read the book Wild Problems: A Guide to the Decisions That Define Us. I found it to be a better-than-average moral guide to good behavior. It leans toward virtue ethics rather than deontology or utilitarianism. I recommend it.

It felt very practical (in the sense of how to approach life). It isn't practical in teaching you a specific, isolated skill, but it is practical in that it nurtures a mindset, an approach, a perspective that will lead to better choices, better relationships, and a better life. To the extent that one's life is like a garden that needs nurturing and cultivation, I think Wild Problems is a pretty good dose of care/water/sunshine.

I personally stick to the Golden Rule. It has many iterations, and for good reason; my personal favorite is the Mosaic version: "Whatever is hurtful to you, do not do to any other." Very simple, very helpful.

I like this framework: "The Lazy Genius guide to nearly everything, but I'm too lazy to count". It says to decide once for all the small stuff (like what to wear to the store or what to order for lunch) so you can enjoy the moment.
