
For a bunch of geeks like us who are interested in ethics and doing the right thing, I'm surprised to see so few mentions of T. M. Scanlon on the EA Forum[1]. Is there a particular reason for this, or is it simply that Scanlon isn't heavily referenced by Toby Ord, Will MacAskill, or Peter Singer, and therefore isn't referenced much by EAs?

Here is the Stanford Encyclopedia of Philosophy article on contractualism, in case anyone wants to read some more. To be clear, I am no expert on Scanlon. I hadn't even heard of Scanlon and contractualism before reading How to Be Perfect, a playful book about ethics by the creator of the much-loved-by-EAs show The Good Place.

 - - - - 

EDIT: I've decided to track my changing thinking via edits. Here are some of my current best guesses as to contributing factors.

Factor 1: This is somewhat indicative of a characteristic of EAs: we dabble in ethics just enough to feel justified in our actions using utility and expected value, and then we move forward with a project/task/venture (with vague gestures toward cluelessness and uncertainty). 

Factor 2: Scanlon isn't nearly as influential as How to Be Perfect suggests. He doesn't show up on lists of the most influential moral philosophers.

Factor 3: EAs want something that feels more objective/rigorous than the fuzzy "reasonableness" that forms a core of Scanlon's ideas.

Factor 4: Scanlon's ideas don't provide much guidance on what we should do, and instead focus on what actions we should avoid.

  1. ^

    Actually, outside of the forum as well. I haven't heard anyone mention T. M. Scanlon or contractualism at all.

6 Answers

The most obvious reason is probably aggregation. Scanlonians are among the philosophers most interested in developing non-aggregative theories of beneficence, whereas EA analyses tend to assume purely aggregative theories of beneficence as a starting point. More simply, it could just be that Scanlon is still relatively obscure despite his moment in the sun on The Good Place.

[anonymous]
Strongly agreed. For those who want exposition on this point, see Ashford's article on demandingness in contractualism vs. utilitarianism: https://doi.org/10.1086/342853

Joseph
Thanks for sharing additional resources!
Devin Kalish
So long as we’re sharing recommendations, Parfit also has a good paper that’s relevant to this, which much of the more recent partial-aggregation debate builds on.

I did read some Scanlon a while back, and recall finding the theory a bit ungrounded compared to utilitarianism (lots of reasonable people being reasonable to one another, without a reasonable explanation of "reasonable", which everything rests on).

I think Will named "What We Owe the Future" as a kind of retort to "What We Owe to Each Other" FWIW.

Yes, although I don't have any reference to point to, I also suspected that the naming of "What We Owe the Future" was a bit of a reference to Scanlon.

I think the most basic answer is that Scanlon's philosophy doesn't really address the questions the EA community is most interested in, i.e., what are the best opportunities to have a positive impact on the world? What We Owe to Each Other offers a theory of wrongness, which is a very different framing. 

I'm a fan of Scanlon's work, but it has some pretty significant gaps, in my opinion. For example, it doesn't give great guidance on how to think of moral obligations to non-human animals or future generations.

I think you can make a pretty persuasive Scanlonian-style argument for some of the GWWC-style work, global health interventions, etc. But I'm not sure the Scanlonian argument adds all that much to these topics.

I think people could probably get a lot out of reading Scanlon, especially those who want to better understand non-consequentialist approaches to morality. But there are a lot of good and important books to read, and I'm not sure I'd prioritise recommending Scanlon out of all the many possibilities.

I would like to hear more about this. Want to write a post about it?

Actually, I'd love to read an overview of the most important/influential moral philosophers of the 20th century and their ideas. It is the kind of thing that someone knowledgeable about the area could throw together relatively easily, but that someone who knows almost nothing about it (like me) would spend weeks or months just learning the basics of.

I'd love to read an overview of the most important/influential moral philosophers of the 20th century and their ideas

I think many introductions to moral philosophy will do this!
 

It is the kind of thing that someone who is knowledgeable about the area could throw together relatively easily

I think this is probably not the case! Writing succinctly about big ideas is very difficult.

Joseph
Hmmm. This gives me a small update toward thinking that I should just find a moral philosophy textbook and work my way through it.

I would if I knew more about it! Haha. But I think that my entire understanding of contractualism and Scanlon would only take up two or three sentences.

I am interested in learning more about ethics and moral philosophy, but I'm not interested in applying to a master's degree in philosophy, nor in delving through a few dozen densely written tomes.

Adopting an anthropologist's lens, I do find it curious that we EAs focus so heavily on a few small corners of the moral theories that exist, but I'm not sure whether that is more due to "these ones are the best" or to some kind of oversight/bias/sloppy thinking.

There have been about four papers on future generations and contractualism, including the following and what it responds to: https://www.cser.ac.uk/resources/wrongness-human-extinction/q

I don't get the impression that EAs are particularly motivated by morality. Rather, they are motivated to produce things they see as good. Some moral theories, like contractualism, see producing a lot of good things (within the bounds of our other moral duties) as morally optional. You're not doing wrong by living a normal decent life. It seems perfectly aligned with EA to hold one of those theories and still personally aim to do as much good as possible.

A moral theory matters more for what it tells you you can't do in pursuit of the good. Generally, effectively pursuing the good and abiding by the standard moral rules of society (e.g. don't steal money to give to charity) go hand in hand, so I would expect to see less discussion of this on the forum. Where they come apart, discussing it probably carries a significant reputational risk.

So this depends on whether you take EA to be more fundamentally interested in theories of beneficence (roughly, what you ought to do to positively help others) or in theories of axiology (roughly, what makes a world better or worse). I’m suspicious of most theories that pull these apart, but importantly, Scanlon’s work tries hard to separate the two and basically ditch the direct relevance of axiology altogether. He certainly goes beyond telling people what they ought not to do. If EA is fundamentally about beneficence, Scanlon is very relevant; if it’s more about axiology, he’s more or less silent.
