This is a linkpost for https://psyarxiv.com/w52zm

In this paper, we argue that utilitarians who try to act on utilitarianism in the real world face many psychological obstacles, ranging from selfishness to moral biases to limits to epistemic and instrumental rationality. To overcome the most important of these obstacles, utilitarians need to cultivate a number of virtues. We argue that utilitarians should prioritize six virtues.

  • Moderate altruism - to set aside some of their resources for others.
  • Moral expansiveness - to care about distant beneficiaries.
  • Effectiveness-focus - to prioritize the most effective interventions.
  • Truth-seeking - to overcome epistemic biases to find those effective interventions.
  • Collaborativeness - to engage in fruitful collaboration with other utilitarians, as well as non-utilitarians.
  • Determination - to consistently act on utilitarian principles with persistence and deliberation.

In addition, we argue that utilitarians should normally not engage in harm for the greater good, but should stick to common sense norms such as norms against lying and stealing. 

So in our view, real-world utilitarianism converges with common sense morality in some respects. Utilitarians should follow common sense norms and should not feel that they have to sacrifice almost all of their resources for others, contrary to what it might seem at first glance.

But in other ways, real-world utilitarianism diverges from common sense morality. Because some opportunities to do good are so much more effective than others, utilitarians should cultivate virtues that allow them to take those opportunities, such as effectiveness-focus and moral expansiveness. Those virtues are not emphasized by common sense morality.

Some of our suggested virtues are commonly associated with utilitarianism. Moral expansiveness is maybe the clearest example. By contrast, virtues such as truth-seeking, collaborativeness, and determination do not tend to be associated with utilitarianism, and are not conceptually tied to it. But empirically, it just turns out that they are very important in order to maximize utilitarian impact in the real world.


Great! I broadly endorse the above virtues and can't say much on the object level. On the meta level, I am curious how you think about the impact of this paper. I have certain guesses:

  • The paper's conclusion says: "We hope that it should inspire a debate among philosophers and psychologists about what virtues utilitarians should prioritize the most." Is that it?
  • Or are you aiming at figuring out recommendations for EAs to follow (akin to CEA's Guiding principles and Lucius Caviola's talk Against naive effective altruism)?
  • Or maybe you want to re-associate utilitarianism with nice/warm virtues because it appears cold to some (like Bleeding Heart Libertarians was reframing libertarianism)?

Thanks for your comment. The comparison to Bleeding Heart Libertarians is good and instructive; thanks for that. Yes, one goal of our paper is to show that utilitarianism as practiced in the real world isn't about breaking rules and the like. Instead, when you actually apply utilitarianism, you need virtues that most people would feel positively about - like truth-seeking and collaboration. And yes, we do hope that this gives a different and more positive image of utilitarianism.

We also want to give recommendations to people who already believe in utilitarianism inside and outside the EA community, yes.

We are also at the early stages of an empirical project focused on getting a better psychological understanding of these virtues.

Great paper! Though I believe one particular virtue ought to be cultivated above all, even though it only gets a passing mention in the article.

Kindness (Agape love).

Summary: Practicing "uncalculated", "less-impactful" goodness in frequent, small ways should prove very helpful in the practice of larger-scale, impactful, calculated goodness.

  • It is common and "easy" to practice/cultivate.
  • I posit that a greater level of kindness makes it much easier to overcome the psychological obstacles to cultivating the listed utilitarian virtues. Conversely, someone unkind by nature will have a much harder time cultivating them.
  • The IMPACT of becoming kinder can thus affect all other areas, and it is therefore likely to be a highly effective way of increasing global well-being.
  • Increased kindness has a ripple effect not simply on ourselves and our ability to do more good, but on others as well, in ways that are difficult to quantify.
  • Kindness applied daily, by a large segment of the population (or even a small one, arguably, if they are otherwise effective), with minimal effort, could dramatically impact the world, in ways that a similar effort in any one of the other virtues is unlikely to approach.

Which leads to a counter-intuitive hypothesis: 

Kindness, cultivated in daily life and applied to causes that may appear to be (or actually be) less effective, but that we come across during the daily bustle, could actually have the greatest impact on the world.

I expect there are diminishing returns, and only a (small?) portion of one's resources ought to be dedicated to the effort. Anecdotal evidence, however (EDIT: actually, I believe there is research on the topic presented by 80,000 Hours?), seems to indicate that at least one's emotional energy increases significantly through acts of kindness, providing additional returns on the investment.


Again:

Practicing "uncalculated", "less-impactful" goodness in frequent, small ways should prove very helpful in the practice of larger-scale, impactful, calculated goodness.

Interesting work, thanks for posting.

One very minor point:
I see that you use the term "Truth-seeking." I've heard this term used before in the extended community, and I generally like it, but my impression is that its "primary" definition is particular to political situations. See: https://en.wikipedia.org/wiki/Truth-seeking

Have you found any existing discussion here? Do you think it's fine for us to use the word in the way you do in this paper, in all settings, without this causing confusion?

Thanks, good question. I'm not quite sure how strongly the word "truth-seeking" is associated with this political usage (related to truth and reconciliation commissions, etc.). My intuition would have been that you can use it in the wider sense that we use it in here without risk of misunderstanding, but I haven't thought about it before and am open to input.

The way you used it seems a lot more normal to me than the political usage.

Personally, I don't feel like I understand its regular use much. My (brief) investigation has made me fairly confused on the matter.

If anyone else reading this feels like they have a better impression here, I'd be curious.
