Jakob Lohmar

DPhil Student in Philosophy @ University of Oxford
154 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · Oxford, United Kingdom

Bio

I'm currently writing a dissertation on longtermism with a focus on non-consequentialist considerations and moral uncertainty. I'm generally interested in the philosophical aspects of global priorities research and plan to continue contributing to that research after my DPhil. Before moving to Oxford, I studied philosophy and a bit of economics in Germany, where I helped organize the local group EA Bonn for a couple of years. I also worked in Germany for a few semesters as a research assistant and taught some seminars on moral uncertainty and the epistemology of disagreement.

How I can help others

If you have a question about philosophy, I could try to help you with it :)

Comments (30)

I'm late to the party but would still be interested in what you think of this: cause areas can be individuated in more or less fine-grained ways. For example, we could consider 'animal welfare' one cause area, or 'wild animal welfare' and 'farmed animal welfare' two cause areas; and we could individuate more fine-grainedly still between 'wild invertebrate welfare' and 'wild vertebrate welfare', and so on. I think that by making causes more and more fine-grained, you might even end up with (what is intuitively thought of as) interventions at some point. If so, there is no fundamental difference between 'causes' and 'interventions'.

Now, that is not to say that distinguishing between causes and interventions is not useful, and some cause/intervention individuations are certainly more intuitive than others. But if there are several permissible/useful/intuitive ways of individuating them, you might get a different picture of the resource allocation between CP and WCP (and indeed also CCP). Generally, I think that the more fine-grainedly causes are individuated, the more work will count as CP rather than WCP. Conversely, if you individuate causes in a very coarse-grained way, it is unsurprising that most prioritization work will count as 'within a cause'. In the extreme case where you only consider a single all-encompassing cause, all prioritization will necessarily be within that cause. If you distinguish only between two causes (say, human and non-human welfare), there can be genuine CP - namely between these two causes - but it still wouldn't be surprising if most prio-work fell within one of these two causes and therefore counted as WCP. Now, you distinguish between three causes. That is not unusual in EA but still very coarse-grained, and I think you could sensibly distinguish instead between, say, 10 cause areas or so. Would this affect the result of your analysis such that more prio work would count as CP?

If there are so many new promising causes that EA should plausibly focus on (going from something like 5 to something like 10 or 20), cause-prio between these causes (and ideas for related but distinct causes) should be especially valuable as well. I think Will agrees with this - after all, his post is based on exactly this kind of work! - but the emphasis in this post seemed to be that EA should invest more into these new cause areas rather than investigate further which of them are the most promising - and which aren't that promising after all. It would be surprising if we couldn't still learn much more about their expected impact.

Hey Kritika, great work! I must admit that I haven't read all passages carefully yet, but here are some high-level thoughts that immediately came to mind.

  1. The requirements for Institutional Longtermism that you suggest seem to me like desiderata from a (purely) longtermist perspective, but I don't see why they should be considered requirements. For example, you suggest that it is a requirement that core long-term policies can only be modified by a supermajority of e.g. 90%. This may be desirable from a longtermist perspective, but long-term policies that can be modified by a smaller supermajority, or even just a majority vote, would still be valuable from a longtermist perspective.
  2. This seems analogous to other causes, such as animal welfare. From a purely animal welfare perspective, it may be desirable to have animal welfare policies that cannot be modified even by 90% of voters, and so on. But that doesn't mean that animal welfare is incompatible with democracy.
  3. I guess you see the difference between the longtermist cause and other causes as lying in longtermism's demands: we should design institutions without exception such that they are optimized for the long-term future, because the long-term future matters that incredibly much. But that would be a very extreme form of longtermism. Even Greaves and MacAskill's Strong Longtermism only makes claims about what we should do / what is best to do on the margin. It doesn't say that we should spend all (or even 50% of) our resources on the long-term future. Similarly, Institutional (even Strong) Longtermism could merely claim that a fraction of public resources should be spent on the long term. Let's say that's 10%. Then, decisions about the remaining 90% of public resources could be made based on democratic procedures.
  4. Finally, I think it's even desirable from a longtermist perspective to leave important political decisions in the hands of future people: they will probably know better how to improve the long term (e.g. because of improved forecasting).

That seems like a strange combination indeed! I will need to think more about this...

and perhaps under-rewarded given it is less exciting.

...especially so in academia! I'd say that in philosophy mediocre new ideas are more publishable than good objections.

Yeah, that makes sense to me. I still think that one doesn't need to be conceptually confused (even though this is probably a common source of disagreement) to believe both that (i) one action's outcome is preferable to the other action's outcome and that (ii) one ought to perform the latter action. For example, one might think the former outcome is overall preferable because it has much better consequences. But conceptual possibility aside, I agree that this is a weird view to have. At the very least, it seems that, all else equal, one should prefer the outcome of the action that one takes to be the most choiceworthy. I'm not sure whether it's plausible to say that this doesn't necessarily hold if other things are not equal - such as in the case where the other action has the better consequences.

Thanks - also for the link! I like your notion of preferability and the analysis of competing moral theories in terms of this notion. What makes me somewhat hesitant is that the objects of preferability, in your sense, seem to be outcomes or possible worlds rather than the to-be-evaluated actions themselves? If so, I wonder if one could push back against your account by insisting that the choiceworthiness of available acts is not necessarily a function of the preferability of their outcomes, since... not all morally relevant features of an action are necessarily fully reflected in the preferability of its outcome?

But assuming that they are, I guess that non-consequentialists who reject full aggregation would say that the in-aggregate larger good is not necessarily preferable. But I'm not sure. I agree that this doesn't seem very intuitive.

I couldn't agree more. Moral philosophers tend to distinguish the 'axiological' from the 'deontic' and then interpret 'deontic' in a very narrow way, which leaves out many other (in my opinion: more interesting) normative questions. This is epistemically detrimental, especially when combined with the misconception that 'axiology is only for consequentialists'. It invites flawed reasoning of the kind: "consideration X may be important for axiology but since we're not consequentialists, that doesn't really matter to us, and surely X doesn't *oblige* us to act in a certain way (that would be far too demanding!), so we don't need to bother with X". 

That said, I think there is still a good objection to the stakes-sensitivity principle, due to Andreas Mogensen: full aggregation is true when it comes to axiology ('the stakes'), but it arguably isn't true with regard to choiceworthiness/reasons. Hence, it could be that an action has superb consequences, but that this only gives us relatively weak reason to perform the action. That reason may not be strong enough to outweigh other non-consequentialist considerations such as constraints.

That's an interesting take! I have a lot of thoughts on this (maybe I will add other comments later), but here is the most general one: creating new ideas is one thing; assessing their plausibility is another. You seem to focus a lot on the former -- most of your examples of valuable insights are new ideas rather than objections or critical appraisals. But testing and critically discussing ideas is valuable too. Without such work, there would be an overabundance of ideas with no separation between the good and bad ones. I think the value of many essays in this volume stems from doing this kind of work. They address an already existing promising idea - longtermism - and assess its plausibility and importance.

I think that most longtermists are aware of the motivational challenge you point out. In fact, major works on longtermism address this challenge, such as Toby Ord's "The Precipice", which argues for the importance of mitigating existential risks from a wide range of moral views. Since the motivational challenge is already understood, I think that the most valuable part of this post is the final paragraphs, which sketch how the motivational challenge could be overcome. Like Toby, I'd encourage you to develop these ideas of yours further - especially since they seem to come apart from moral philosophy's obsession with the question of whether we are 'required' or 'obligated' to do the right/best thing.
