This is a special post for quick takes by Clifford. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

What do you use as a guide to “common sense” or “everyday ethics”?

I think people in EA often recommend against using EA to guide your everyday decision-making. The standard advice is "don't sweat the small stuff": apply EA thinking to big life decisions like your career choice or annual donations. EA doesn't have much to say about, and isn't a great guide to, how you behave with your friends and family or in your community.

I’m curious, as a group of people who take ethics seriously, are there other frameworks or points of reference that you use to help you make decisions in your personal life?

I feel like "stoicism" is a common one, and I've enjoyed learning about it. I suspect religion is another common answer. Are there others?

Something I try to use sometimes but not very consistently is something like:

"If this section of my life was a short story or a movie, would normal people think of me as a good character?"

Where by "a good character" I mean morally good/nice, rather than interesting or complex.

This heuristic isn't perfect: it likely overweights act/omission distinctions and, as you imply, is a bad fit for big life decisions (having a direct impact on individuals is likely a bad compass for altruistic career choice, and grant decisions should not be decided by who has the more compelling story). I also think everyday ethics overvalues niceness and undervalues some types of honesty. But it's a decent heuristic that can't go very wrong as a representation of broad societal norms/ethics, which are probably "good enough" for most everyday decisions.

  1. Abadar: People shouldn't regret trading with me.
  2. Keltham: Don't cause messes just because nobody is policing me, which causes an incentive to police me more.

I felt this thread needs some extra trolling, sry

I don't have any great answers for this, but my not very well thought-out response is to say that virtue ethics tends to be helpful (such as the ideas of stoicism, for which Massimo Pigliucci's book is a decent introduction). I think about the kind of person I want to be, how I want others to see me, and so on.

There are some ways in which ideas of stoicism overlap with Buddhism (mainly Buddhist psychology): awareness of our reactions, attention to what is/isn't within our control, and recognizing the interconnectedness of things. However, since I know so little about Buddhism, I'm not sure to what extent my perception of this similarity is simply "western pop Buddhism." My impression is that much of "western pop Buddhism" is focused on being calm and being cognizant of your locus of control (Alan Watts, Jack Kornfield, and everything derived from Mindfulness-Based Stress Reduction[1]). As a white American guy who lived in China for a decade, I'm also very aware of, and cautious about, the stereotypes of westerners seeking "Eastern wisdom."

If I push myself to be a little more concrete: being considerate looms large in my mind, as does some type of striving for improvement. I generally find that moral philosophy hasn't been much help with the minutiae of day-to-day life:

  • how do I figure out how much responsibility I have for this professional failure that I was involved in
  • at what point is it justified to stop trying in a romantic relationship
  • how honest should I be when I discover something that other people would want to know but which would cause harm to me
  • how should I balance loyalty to a friend with each individual being responsible for their own actions
  • to what extent should I take ownership of someone choosing to react negatively to my words/actions
  • how responsible am I for things that I couldn't really control/influence/impact
  • what level of admiration/respect should I have for a person who is very productive and intelligent and knowledgeable when I realize that he/she benefited from lots of external things (grew up in wealthy neighborhood, attended very well-funded school, received lots of gifts/scholarships, etc.)
  1. ^

    McMindfulness: How Mindfulness Became the New Capitalist Spirituality was a pretty good critique of this.

Thanks Joseph! I’ll check out Massimo Pigliucci.

I like your concrete examples. Would be curious if other people have principles which guide how they act in response to those questions.

I'm coming back to this after more than a year because I recently read the book Wild Problems: A Guide to the Decisions That Define Us. I found it to be a better-than-average moral guide to good behavior. It leans toward virtue ethics rather than deontology or utilitarianism. I recommend it.

It felt very practical (in the sense of how to approach life). It isn't practical in teaching you a specific, isolated skill, but it is practical in that it nurtures a mindset, an approach, a perspective that will lead to better choices, better relationships, and a better life. To the extent that one's life is like a garden that needs nurturing and cultivation, I think Wild Problems is a pretty good dose of care/water/sunshine.

I personally stick to the golden rule. It has many iterations, and for good reason; my personal favorite is the Mosaic version: "Whatever is hurtful to you, do not do to any other." Very simple, very helpful.

I like this framework - "The Lazy Genius guide to nearly everything, but I'm too lazy to count".  It says to decide once for all the small stuff (like what to wear to the store or what to order for lunch) so you can enjoy the moment.
