This is a special post for quick takes by Clifford. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

What do you use as a guide to “common sense” or “everyday ethics”?

I think people in EA often recommend against using EA to guide your everyday decision-making. The standard advice is "don't sweat the small stuff": apply EA thinking to big life decisions like your career choice or annual donations. EA doesn't have much to say about, and isn't a great guide to, how you behave with your friends, your family, or your community.

I’m curious, as a group of people who take ethics seriously, are there other frameworks or points of reference that you use to help you make decisions in your personal life?

I feel like “stoicism” is a common one and I’ve enjoyed learning about this. I suspect religion is another common answer for others. Are there others?

Something I try to use sometimes but not very consistently is something like:

"If this section of my life was a short story or a movie, would normal people think of me as a good character?"

Where by "a good character" I mean morally good/nice, and not interesting or complex.

This heuristic isn't perfect. It likely overweights act/omission distinctions, and, as you imply, it's a bad fit for big life decisions (having a direct impact on individuals is likely a bad compass for altruistic career choice, and grant decisions should not be decided by who has the more compelling story). I also think everyday ethics overvalues niceness and undervalues some types of honesty. But it's a decent heuristic that can't go very wrong as a representation of broad societal norms and ethics, which are probably "good enough" for most everyday decisions.

  1. Abadar: People shouldn't regret trading with me.
  2. Keltham: Don't cause messes just because nobody is policing me, which causes an incentive to police me more.

I felt this thread needs some extra trolling, sry

I don't have any great answers for this, but my not very well thought-out response is to say that virtue ethics tends to be helpful (such as the ideas of stoicism, for which Massimo Pigliucci's book is a decent introduction). I think about the kind of person I want to be, how I want others to see me, and so on.

There are some ways in which ideas of stoicism overlap with Buddhism (mainly Buddhist psychology) in the areas of awareness of our reactions, what is and isn't within our control, and recognizing the interconnectedness of things. However, since I know so little about Buddhism, I'm not sure to what extent my perception of this similarity is simply "western pop Buddhism." My impression is that much of "western pop Buddhism" is focused on being calm and being cognizant of your locus of control (Alan Watts, Jack Kornfield, and everything derived from Mindfulness Based Stress Reduction[1]). As a white American guy who lived in China for a decade, I'm also very aware of and cautious about the stereotypes of westerners seeking "Eastern wisdom."

If I push myself to be a little more concrete: being considerate is really big in my mind, as is some type of striving for improvement. I generally find that moral philosophy hasn't been much help with the minutiae of day-to-day life:

  • how do I figure out how much responsibility I have for this professional failure that I was involved in
  • at what point is it justified to stop trying in a romantic relationship
  • how honest should I be when I discover something that other people would want to know but which would cause harm to me
  • how should I balance loyalty to a friend with each individual being responsible for their own actions
  • to what extent should I take ownership of someone choosing to react negatively to my words/actions
  • how responsible am I for things that I couldn't really control/influence/impact
  • what level of admiration/respect should I have for a person who is very productive and intelligent and knowledgeable when I realize that he/she benefited from lots of external things (grew up in wealthy neighborhood, attended very well-funded school, received lots of gifts/scholarships, etc.)
  1. ^

    McMindfulness: How Mindfulness Became the New Capitalist Spirituality was a pretty good critique of this.

Thanks Joseph! I’ll check out Massimo Pigliucci.

I like your concrete examples. Would be curious if other people have principles which guide how they act in response to those questions.

I'm coming back to this after more than a year because I recently read the book Wild Problems: A Guide to the Decisions That Define Us. I found it to be a better-than-average moral guide to good behavior. It leans toward virtue ethics rather than deontology or utilitarianism. I recommend it.

It felt very practical (in the sense of how to approach life). It isn't practical in teaching you a specific, isolated skill, but it is practical in that it nurtures a mindset, an approach, a perspective that will lead to better choices, better relationships, and a better life. To the extent that one's life is like a garden that needs nurturing and cultivation, I think that Wild Problems is a pretty good dose of care/water/sunshine.

I personally stick to the golden rule. It has many iterations, and for good reason; my personal favorite is the Mosaic version: "Whatever is hurtful to you, do not do to any other". Very simple, very helpful.

I like this framework from "The Lazy Genius" (whose guides cover nearly everything, but I'm too lazy to count). It says to decide once, for all the small stuff (like what to wear to the store or what to order for lunch), so you can enjoy the moment.
