In their report on Charter Cities, David Bernard and Jason Schukraft write:

Finally, the laboratories of governance model may add to the neocolonialist critique of charter cities. Charter cities are not only risky, they are also controversial... Whether or not this criticism is justified, it would probably resonate with many socially-minded individuals, thereby reducing the appeal of charter cities.

I want to preface my comments by saying that I respect the authors, I thought the report as a whole was super useful, and I'm personally biased because I like Charter Cities.

Having said that, I thought this particular piece of reasoning was really poor.

Note the phrasing "Whether or not this criticism is justified". The authors are bracketing the question of actual material harms, instead invoking the neocolonialist critique only to point out that it might entail PR risk.

There are various reasonable ways to engage with Leftist Ethics, but this strikes me as among the worst. As I understand it (admittedly a bit uncharitably), the authors are saying:

  • We don't care whether or not neocolonialism is actually bad.

  • But we are worried someone might think that it's bad.

  • So we should avoid grant-making that could be critiqued on those grounds.

There are several problems here:

  • We should care if neocolonialism is real, if it's bad, and if it's induced by Charter Cities. If so, that should impact the cost-effectiveness estimate, not just factor in as a side-comment about PR-risk.

  • The question of PR-risk is a purely logistical question that should be bracketed from discussions of cost-effectiveness. In the case that an intervention is found to have high cost-effectiveness and high PR-risk, we should think strategically about how to fund it, perhaps by privately recommending the intervention to individual donors as opposed to foundations.

  • We should cite and engage with specific arguments, not imagine and then be haunted by some imagined spectre of Leftism. The authors mention the "neocolonialist critique" three times, never bothering to actually explain what it is, who advocates for it, how harmful it is, or how it could be avoided.

In other words, the authors take Leftist Ethics both too seriously, and not seriously enough.

Of course we can all come up with some caricature of the neocolonialist critique, but that's precisely the problem. You can think of it as a kind of inverted-Strawman. In the classical version, you invent an ideological opponent in order to dismiss it. In the inverted version, you invent an ideological opponent in order to fall prey to it, and then avoid engaging with it seriously by saying "we don't care if it's true, but other people might think it is."

Although Charter Cities are just one application of this flavor of argument, were we to apply its logic more generally, we would arrive at a devastating and paralyzing ideology. Leftist Ethics genuinely does warn against many things, but imagined Leftist Ethics potentially warns against everything.

So not only is the logic itself poor, the principle it entails backtests horribly against important EA causes and interventions:

  • Funding bednets? Isn't it too paternalistic to think you know better than poor people? How can you justify any intervention other than GiveDirectly?

  • AI Safety? That's just a speculative distraction from the real problems of algorithmic bias, predictive policing and government surveillance.

I'm not even making this up. Here's a recent GiveDirectly post on EA Forum explaining that "The GiveWell team is incredibly intelligent, but they're mainly Americans or Europeans who haven't spent significant time in environments of extreme poverty... We think that imbalance is grossly unjust." Or here's Daron Acemoglu in The Washington Post arguing that we should "stop worrying about evil super-intelligence" in order to focus instead on the "most-ominous... labor-market effects of AI".

Again, at least in these cases we can engage with the object-level arguments and dismiss them. But once you start saying, "Whether or not this criticism is justified, it would probably resonate with many socially-minded individuals", you're just totally screwed.


So what should we do instead? At a high level: either ignore Leftist Ethics entirely, or take it much more seriously. Specifically, some reasonable options are to:

  1. Ignore Leftist Ethics: Just write it off entirely and double down on utilitarianism.

  2. Incorporate Leftist Ethics using a formal meta-ethical framework: Take the humble outside-view where Leftist Ethics gets assigned some credence on the basis of having many smart proponents. See MacAskill on Normative Uncertainty as a Voting Problem, Nick Bostrom on the Parliamentary Model of Meta-ethics and MacAskill, Bykvist and Ord on Moral Uncertainty. Note that you still have to understand Leftist Ethics rigorously enough to know what it actually endorses (a rough sketch of what this could look like formally follows this list).

  3. Think of Leftist Ethics as a helpful pointer to some Utilitarian-relevant concerns: Take the potential accusation of neocolonialism seriously, but only as an object-level concern about the material harms, then evaluate it as you would any other cost.
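
To make option #2 a bit more concrete, here is a minimal sketch of the "maximize expected choiceworthiness" approach from the moral uncertainty literature cited above; this is my gloss, not something the report proposes. Assign each moral theory $T_i$ a credence $c_i$, let $CW_i(A)$ be how choiceworthy theory $T_i$ judges intervention $A$, and rank interventions by

$$EC(A) = \sum_i c_i \cdot CW_i(A)$$

For example, with 80% credence in utilitarianism and 20% in some precisified version of Leftist Ethics, a charter-cities grant that utilitarianism rates highly but Leftist Ethics rates as actively harmful could still score below a less controversial alternative. The hard part, as MacAskill, Bykvist and Ord stress, is making the $CW_i$ scales comparable across theories, which is exactly why the view has to be understood rigorously rather than invoked as a vague PR worry.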

You'll note that none of these incorporate any notion of PR-risk. My view is that the question of "how does this impact our reputation and long-term giving prospects?" should be bracketed in discussions of cost-effectiveness.

If an analysis does find that an intervention is highly cost-effective, and separately finds that it carries some potential PR-risk, that's fine. Maybe it shouldn't be funded by Open Philanthropy, but the recommendation can still be referred to either private donors or philanthropist-driven institutions like the Survival and Flourishing Fund.

Finally, how should this work operationally? I see three promising avenues:

  1. We try to run a "What Leftist Ethics can teach Effective Altruism" workshop, post the slides online and slightly improve everyone's understanding across the board.

  2. 99% of Effective Altruists ignore Leftist Ethics entirely in their professional writing, but some organization (probably Rethink Priorities) hires one or two people whose job is to analyse Effective Altruist initiatives through the lens of Leftist Ethics and then incorporate those views through approach #2 or #3 outlined above.

  3. We say something like: "There's already a lot of money and talent that takes Leftist Ethics seriously, we recognize their work as important, but choose to be the one haven of Utilitarian thinking in the world".

Those all seem much better to me than the current status quo.

And again, sorry for beating up on the Rethink Priorities report, I thought it was very good otherwise.

(Disclosure: I've spoken to Mark Lutter who runs the Charter Cities Institute informally and have received support from him in the form of advice and introductions. We haven't discussed this idea, and he did not review a draft of this post.)

Comments (20)

One of the authors of the charter cities report here. I'll just add a few remarks to clarify how we intended the quoted passage. I'll highlight three disagreements with the interpretation offered in the original post.

We should care if neocolonialism is real, if it's bad, and if it's induced by Charter Cities. If so, that should impact the cost-effectiveness estimate, not just factor in as a side-comment about PR-risk.

(1) We absolutely care whether neocolonialism is bad (or, if neocolonialism is inherently bad, we care about whether charter cities would instantiate neocolonialism). However, we only had ~100 research hours to devote to this topic, so we bracketed that concern for the time being. These sorts of prioritization decisions are difficult but necessary in order to produce research outputs in a timely manner.

We should cite and engage with specific arguments, not imagine and then be haunted by some imagined spectre of Leftism. The authors mention the "neocolonialist critique" three times, never bothering to actually explain what it is, who advocates for it, how harmful it is, or how it could be avoided.

(2) The neocolonial critique of charter cities is well-known in the relevant circles, though it comes in many varieties. (See, among others, van de Sand 2019 and citations therein.) We probably should have included a footnote with examples. The fact that we didn't engage with the critique more extensively (or really, at all) is some indication of how seriously we take the argument. We could have been more explicit about that.

The question of PR-risk is a purely logistical question that should be bracketed from discussions of cost-effectiveness. In the case that an intervention is found to have high cost-effectiveness and high PR-risk, we should think strategically about how to fund it, perhaps by privately recommending the intervention to individual donors as opposed to foundations.

(3) I'm not entirely sure why PR-risk needs to be excluded from cost effectiveness analysis (it's just another downside), though I'm not opposed in practice to doing this. I agree that there are ways to mitigate PR risk. At no point in the report did we claim that PR risks ought to disqualify charter cities (or any other intervention) from funding.

I'm not entirely sure why PR-risk needs to be excluded from cost effectiveness analysis (it's just another downside), though I'm not opposed in practice to doing this.

PR risk is a lot weirder and more complicated than a lot of people take it to be. Breaking it off into a separate discussion, or a separate bucket, seems wise to me in a lot of cases.

Thanks! Really appreciate getting a reply from you, and thanks for clarifying how you meant this passage to be understood.

I agree that you don't claim the PR risks should disqualify charter cities, but you do cite them as a concern, right? I think part of my confusion stems from the distinction between "X is a concern we're noting" and "X is a parameter in the cost-effectiveness model", and from trying to understand the relative importance of the various qualitative and quantitative arguments made throughout.

I.e., one way of interpreting your report would be:

  1. There are various ways to think about the benefits of Charter Cities
  2. Some of those ways are highly uncertain and/or difficult to model; here are some brief comments on why we think so
  3. We're going to focus on quantitatively modeling this one path to impact
  4. On the basis of that model, we can't recommend funding Charter Cities and don't believe that they're cost-effective for that particular path to impact

In that case, it makes less sense for me to think of the neocolonialism critique as an argument against Charter Cities, and more sense to think of it as an explanation for why you didn't choose to prioritize analyzing a different path to impact.

Is that about right? Or closer to right than my original interpretation?

I think part of my confusion stems from the distinction between "X is a concern we're noting" and "X is a parameter in the cost-effectiveness model"

The distinction is largely pragmatic. Charter cities, like many complex interventions, are hard to model quantitatively. For the report, we replicated, adjusted, and extended a quantitative model that Charter Cities Institute originally proposed. If that's your primary theory of change for charter cities, it seems like the numbers don't quite work out. But there are many other possible theories of change, and we would love to see charter city advocates spend some time turning those theories of change into quantitative models.

I think PR risks are relevant to most theories of change that involve charter cities, but they are certainly not my main concern.

If one chooses options 2 or 3, I see no particular reason why one should focus on "Leftist Ethics" in particular. If one chooses one of those options, one would presumably also want to incorporate other ethical views; e.g. libertarianism, maybe some versions of virtue ethics, etc.

(I'm not hereby rejecting option 1, which I think should be on the table.)

Yes that's true. Though I have not read any EA report that includes a paragraph of the flavor "Libertarians are worried about X, we have no opinion on whether or not X is true, but it creates substantial PR-risk."

That might be because libertarians are less inclined to drum up big PR-scandals, but it's also because EAs tend to be somewhat sympathetic to libertarianism already.

My sense is that people mostly ignore virtue ethics, though maybe Open Phil thinks about them as part of their "worldview diversification" approach. In that case, I think it would be useful to have a specific person serving as a community virtue ethicist instead of a bunch of people who just casually think "this seems reasonable under virtue ethics so it's robust to worldview diversification". I have no idea if that's what happens currently, but basically I agree with you.

Though I have not read any EA report that includes a paragraph of the flavor "Libertarians are worried about X, we have no opinion on whether or not X is true, but it creates substantial PR-risk."

I'm not sure I understand your reasoning. I thought you were saying that we should focus on whether ethical theories are true (or have some chance of being true), and not so much on the PR-risk? And if so, it doesn't seem to matter that Libertarians tend to have fewer complaints (which may lead to bad PR).

Fwiw libertarianism and virtue ethics were just two examples. My point is that there's no reason to single out Leftist Ethics among the many potential alternatives to utilitarianism.

Okay, as I understand the discussion so far:

  • The RP authors said they were concerned about PR risk from a leftist critique
  • I wrote this post, explaining how I think those concerns could more productively be addressed
  • You asked why I'm focusing on Leftist Ethics in particular
  • I replied that it's because I haven't seen authors cite concerns about PR risk stemming from other kinds of critique

That's all my comment was meant to illustrate; I think I pretty much agree with your initial comment.

Ah, I see. Thanks!

I suspect that good bits of leftish ethics are less represented in EA than libertarian thought and maybe even virtue ethics. So carefully onboarding some good thinkers might be good for cognitive diversity and hence better conversations/research around some issues.

(TBC, I also want to see more well-reasoned conservative voices in EA. Generally, I don't see diversity as an end in itself. And I see fairly significant risks in "diluting" EA. But my take is that dilution happens by itself already via onboarding in the universities and the desire to accommodate more people willing to contribute... so some gardening/fencing might be desired.)

I guess you are one of the best people to ask about how EA correlates with (other) political philosophies/thought clusters. Any thoughts?

I think there might sometimes be the opposite problem - that EA is staying too close to recommendations that more mainstream political groups would make, for various reasons.

In the 2019 EA Survey, 40% of EAs said their political views are "centre-left", whereas 32% said they're "left".

This is really interesting and I'd be happy to see a more recent statistic (though I don't expect it to have changed by much). But even if there are more of us than I think, I find very little consideration, in EA contexts, of concrete leftist ideas, e.g. "Maybe capitalism is a problem we can and should be addressing, now that we have billions of dollars committed to EA", or "Could charitable giving do in some cases more harm than good by shifting the perceived responsibility from the state to individuals?".

I agree that this could apply to every other philosophical school too. I do feel, however, that since EA comes from a mostly English-speaking perspective, economically liberal ideas are already much more ingrained into its core than leftist ideas are, as they are more prominent in those political systems(?). Then there are probably perspectives that are even less present, like those of people in developing countries or with backgrounds of poverty or sickness or oppression.

Ultimately, since we all share the pragmatism and the will to test different courses of action for improving the world, I think this could be less of a political debate than is found outside EA on the same ideas, and more about just expanding the classes of ideas we consider, and deciding where to look for them and who could look for them best.

I sort of expect the young college EAs to be more leftist, and expect them to be more prominent in the next few years. Though that could be wrong, maybe college EAs are heavily selected for not being already committed to leftist causes.

I don't think I'm the best person to ask haha. I basically expect EAs to be mostly Grey Tribe, pretty democratic, but with some libertarian influences, and generally just not that interested in politics. There's probably better data on this somewhere, or at least the EA-related SlateStarCodex reader survey.

This is fairly aligned with my take, but I think EAs are more blue than grey and more left than you might be implying. (Ah, by you I meant Stefan; he does/did a lot of empirical psychological/political research into relevant topics.)

Here is a very interesting perspective on EA as part of leftish analytical philosophy (under R/Neoliberal): https://sootyempiric.blogspot.com/2021/11/the-anglo-american-analytic-philosophy.html

I haven't thought much about it, but it seems that building stronger ties with Red Plenty might be useful strategically (for movement resilience), and we are not at all incompatible:

  • I think there are good insights there that are quite appreciated here (e.g., legibility);
  • I think it's reasonable to incorporate some of their thought into evaluation (e.g., IDinsight studied beneficiary preferences for GiveWell).

Thanks for sharing this piece – it's nice to gain perspective on where the EA movement, and I personally, fit into philosophy overall! 😅 I'm definitely an r/neoliberal type and have noticed that I tend to be less frustrated with leftist thinkers who fit into the analytic philosophy mold, like Liam Kofi Bright, than with those who rely less on/outright reject reason. At the same time, I've noticed that NL-types have certain blind spots, and I think more collaboration with the other types of analytic philosophers could help ameliorate these.

I thought their approach was pretty reasonable; they briefly noted a (PR) concern that might be relevant to some but not all donors, and then (as far as I can see) essentially omitted it from their formal cost-benefit analysis, as we must always do with a multitude of second or third order considerations.

As I understand your comment, you think the structure of the report is something like:

  1. Here's our main model
  2. Here are its implications
  3. By the way, here's something else to note that isn't included in the formal analysis

That's not how I interpret the report's framing. I read it more as:

  1. Here's our main model focused on direct benefits
  2. There are other, indirect benefits, such as Charter Cities as Laboratories of Governance
  3. Those indirect benefits might outweigh the direct ones, and might make Charter Cities attractive from a hits-based perspective
  4. One concern with the conception of Charter Cities as Laboratories of Governance is that it adds to the neocolonialist critique.
  5. "the laboratories of governance model may add to the neocolonialist critique of charter cities. Charter cities are not only risky, they are also controversial... Whether or not this criticism is justified, it would probably resonate with many socially-minded individuals, thereby reducing the appeal of charter cities."

So that's a bit different. It's not "here's a random side note". It's "Although we focus on modeling X, Charter Cities advocates might say the real value comes from Y, but we're not focusing on Y, in part, because of this neocolonialist critique."

I think that the mainstream objections from 'leftist ethics' are mostly best thought of as claims about politics and economics that are broadly compatible with Utilitarianism but have very different views about things like the likely effects of charter cities on their environments - so if you want to take these criticisms seriously then go with 3, not 2.

There are some left-wing ideas that really do include different fundamental claims about ethics (Marxists think utilitarianism is mistaken and a consequence of alienation) - those could be addressed by a moral uncertainty framework, if you thought that was necessary. But most of what you've described looks like non-marxist socialism which isn't anti-utilitarian by nature.

As to the question of how seriously to take these critiques beyond their PR value, I think that we should engage with alternate perspectives, but I also think that this particular perspective sometimes gets inaccurately identified, because of the social circles that many of us move in, as the 'ethics of mainstream society' which we ought to pay special attention to because it talks about the concerns relevant to most people.

I do think that we ought to be concerned when our views recommend things wildly at odds with what most people think is good, but these critiques aren't that - they're an alternative (somewhat more popular) worldview, that like EA is also believed preferentially by academics and elites. When talking about the Phil Torres essay, I said something similar,

One substantive point that I do think is worth making is that Torres isn't coming from the perspective of common-sense morality Vs longtermism, but rather a different, opposing, non-mainstream morality that (like longtermism) is much more common among elites and academics.

...

But I think it's still important to point out that Torres's world-view goes against common-sense morality as well, and that like longtermists he thinks it's okay to second guess the deeply held moral views of most people under the right circumstances.

...

FWIW, my guess is that if you asked a man in the street whether weak longtermist policies or degrowth environmentalist policies were crazier, he'd probably choose the latter.

As long as we are clear that these debates are not a case of 'the mainstream ethical views of society vs EA-utilitarianism', and instead see them as two alternate non-mainstream ethical views that disagree (mostly about facts but probably about some normative claims), then I think engaging with them is a good idea.

Thank you for putting this (and solutions) in clear words.
