Richard Y Chappell🔸

Associate Professor of Philosophy @ University of Miami
7280 karma · Working (6–15 years) · South Miami, FL 33146, USA
www.goodthoughts.blog/
Interests: Bioethics

Bio

Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog

🔸10% Pledge #54 with GivingWhatWeCan.org

Comments (471)

Funnily enough, the main example that springs to mind is the excessive self-flagellation post-FTX. Many distanced themselves from the community and its optimizing norms/mindset—for understandable reasons, but ones more closely tied to "expressing" (and personal reputation management) than to actually "helping", IMO.

I'd be curious to hear if others think of further candidate examples.

EA Infrastructure Fund or Giving What We Can? For the latter, "our best-guess giving multiplier for [2023-24] was approximately 6x".

I think it's more like he disagrees with you about the relative strengths of the objections and responses. (fwiw, I'm inclined to agree with him, and I don't have any personal stake in the matter.)

Any intellectual community will have (at least implicit) norms surrounding which assumptions / approaches are regarded as:

(i) presumptively correct, or eligible to be treated as a starting premise for further argument (the community "orthodoxy");

(ii) most plausibly mistaken, but reasonable enough to be worth further consideration (i.e., valued critiques, welcomed "heterodoxy");

(iii) too misguided to be worth serious engagement.

It would obviously be a problem for an intellectual community if class (ii) were too narrow. Claims like "dissent isn't welcome" imply that (ii) is non-existent: your impression is that the only categories within EA culture are (i) and (iii). If that were true, I agree it would be bad. But reasoning from the mere existence of class (iii) to negative conclusions about community epistemics is far too hasty. Any intellectual community will have some things they regard as not worth engaging with. (Classic examples include, e.g., biologists' attitudes towards theistic alternatives to Darwinian evolution, or historians' attitudes towards various conspiracy theories.)

People with different views will naturally dispute which of these three categories any given contribution ideally ought to fall into. People don't tend to regard their own contributions as lacking intellectual worth, so if they experience a lack of engagement it's very tempting to leap to the conclusion that others must be dogmatically dismissing them. Sometimes they're right! But not always. So it's worth being aware of the "outside view" that (a) some contributions may be reasonably ignored, and (b) anyone on the receiving end of this will subjectively experience it just as the OP describes, as seeming like dogmatic/unreasonable dismissal.

Given the unreliability of personal subjective impressions on this issue, it's an interesting question what more-reliable evidence one could look for to try to determine whether any given instance of non-engagement (and/or wider community patterns of dis/engagement) is objectively reasonable or not. Seems like quite a tricky issue in social epistemology!

I'm not seeing the barrier to Person A's thinking there's a 1/1000 chance, conditional on reaching the 50th century, of going extinct in that century. We could easily expect to survive 50 centuries at that rate, and then have the risk consistently decay (halving each century, or something like that) beyond that point, right?
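To make that arithmetic concrete, here's a minimal sketch (the 1/1000 per-century risk and the halving decay are just the illustrative figures above; the 1,000-step loop is an arbitrary cutoff chosen for convergence):

```python
# Survival probability across the first 50 centuries at a constant
# 1/1000 extinction risk per century:
risk = 1 / 1000
survive_50 = (1 - risk) ** 50  # ~0.951

# Beyond century 50, let the per-century risk halve each century.
# The cumulative survival probability then converges instead of
# tending toward zero:
survival = survive_50
r = risk
for _ in range(1000):  # far more centuries than needed for convergence
    r /= 2
    survival *= 1 - r

print(f"{survive_50:.4f}")  # ~0.9512
print(f"{survival:.4f}")    # ~0.9503 — long-run survival stays ~95%
```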

If you instead mean to invoke, say, the 50 millionth century, then I'd think it's crazy on its face to suddenly expect a 1/1000 chance of extinction after surviving so long. That would no longer "seem, on the face of it, credible".

Am I missing something?

Thanks, yeah, I like your point there that "false negatives are costlier than false positives in this case", and so even <50% credence can warrant significant action. (I wouldn't literally say we should "act as if 3H is true" in all respects—as per Nuno's comment, uncertainty may justify some compounding "patient philanthropy", which could have high stakes if the hinge comes later. But that's a minor quibble: I take myself to be broadly in agreement with your larger gist.)

My main puzzlement there is how you could think that you ought to perform an act that you simultaneously ought to hope you fail to perform, ought subsequently (and predictably) to regret performing, etc. (I assume here that all-things-considered preferences are not cognitively isolated, but have implications for other attitudes like hope and regret.) There seems to be a kind of incoherence in that combination of attitudes, one that undermines the normative authority of the original "ought" claim. We should expect genuinely authoritative oughts to be more wholeheartedly endorsable.

Right, so one crucial clarification is that we're talking about act-inclusive states of affairs, not mere "outcomes" considered in abstraction from how they were brought about. Deontologists certainly don't think that we can get far merely thinking about the latter, but if they assess an action positively then it seems natural enough to take them to be committed to the action's actually being performed (all things considered, including what follows from it). I've written about this more in Deontology and Preferability. A key passage:

If you think that other things besides impartial value (e.g. deontic constraints) truly matter, then you presumably think that moral agents ought to care about more than just impartial value, and thus sometimes should prefer a less-valuable outcome over a more-valuable one, on the basis of these further considerations. Deontologists are free to have, and to recommend, deontologically-flavored preferences. The basic concept of preferability is theory-neutral on its face, begging no questions.

Thanks! You might like my post 'Axiology, Deontics, and the Telic Question', which suggests a reframing of ethical theory that avoids the common error. (In short: distinguish ideal preferability vs. instrumental reasoning / decision theory, rather than axiology vs. deontics.)

I wonder if it might also help address Mogensen's challenge. Full aggregation seems plausibly true of preferability, not just axiology. But then, given principles of instrumental rationality linking reasons for preference/desire to reasons for action, it's hard to see how full aggregation couldn't also be true with regard to choiceworthiness. (But maybe he'd deny my initial claim about preferability?)
