Philosophy
Investigation of the abstract features of the world, including morals, ethics, and systems of value

Quick takes

This is a cold take that's probably been said before, but I think it bears repeating occasionally, if only as a reminder: the longtermist viewpoint has gotten a lot of criticism for prioritizing "vast hypothetical future populations" over the needs of "real people" alive today. The mistake, so the critique goes, results from replacing ethics with math, or utilitarianism, or something cold and rigid like that. The view is therefore flawed because it lacks the love, duty, "ethics of care," or concern for justice that lead people to alternatives like mutual aid and political activism.

My go-to reaction to this critique has become something like "well, you don't need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes." I think this response has taken hold in general among people who talk about x-risk, which probably makes sense for pragmatic reasons: it's a very good rebuttal to the "cold and heartless utilitarianism / Pascal's mugging" critique. But I think it unfortunately neglects the critical point that longtermism, when taken really seriously (at least the sort of longtermism that MacAskill writes about in WWOTF, or that Joe Carlsmith writes about in his essays), is full of care and love and duty. Reading the thought experiment that opens the book, about living every human life in sequential order, reminded me of this.

I wish more people responded to the "longtermism is cold and heartless" critique by making the case that no, longtermism taken at face value is worth preserving because it's the polar opposite of heartless. Caring about the world we leave for real people, with emotions and needs and experiences as real as our own, who may very well inherit our world but whom we'll never meet, is an extraordinary act of empathy and compassion, one that's far harder to access than the empathy and warmth we might feel for our neighbors.
In response to Caviola, L., Schubert, S., & Greene, J. D. (2021), "The psychology of (in)effective altruism": I have fundamental issues with EA in general, so much so that reading this paper made me dig in more and write a 2,000-word post out of sheer frustration with the pride on display in it. One thing that really stands out in this paper is how much EA positions itself as offering an almost irrefutable logic: maximize your positive impact by supporting only the most "effective" causes, and anything less is, at best, an error and, at worst, a kind of moral failing. But I find myself pushing back on this framing, and the paper, perhaps unintentionally, provides ample ammunition for why EA's core assumptions might be not only psychologically unrealistic but also normatively suspect. And while I can already hear the chorus of counterarguments ("but that isn't real EA"), I hear "real nationalism/communism" has never been tried either.

For one, the entire concept of "effectiveness" is far less straightforward than the EA movement wants to admit. The paper acknowledges, for example, serious epistemic obstacles: most people are skeptical that you can meaningfully compare the impact of a malaria net to, say, a local arts education program or a mental health intervention. This skepticism is not just a cognitive bias; it reflects a real, unresolved debate about what counts as a "good" and how to measure the value of different outcomes. The cost-per-QALY approach that EA champions comes out of health economics and imports many of its own value-laden assumptions. In practice, this means that the "effectiveness" metric often boils down to what is most quantifiable, not necessarily what is most valuable, important, or just. There are profound issues with comparing across domains, especially when different forms of flourishing or suffering are involved. Even WELLBYs and similar attempts to aggregate "well-being" risk flattening important moral distinctions for the sake…
Hi! I'm looking for help with a project. If you're interested or know someone who might be, it would be really great if you let me know or share this. I'll check the forum for DMs.

1. Help with acausal research and get mentoring to learn about decision theory
* Motivation: Caspar Oesterheld (inventor/discoverer of ECL/MSR), Emery Cooper, and I are doing a project where we try to get LLMs to help us with our acausal research. Our research is ultimately aimed at making future AIs acausally safe.
* Project: As a first step, we are trying to train an LLM classifier that evaluates critiques of arguments. To do so, we need a large number of both good and bad arguments about decision theory (and other areas of philosophy).
* How you'll learn: If you would like to learn about decision theory, anthropics, open-source game theory, etc., we supply you with a curriculum, with a lot of leeway for what exactly you want to learn about. You go through the readings. If you already know things and just want to test your ideas, you can optionally skip this step.
* Your contribution: While doing your readings, you write up critiques of the arguments you read.
* Bottom line: We get to use your arguments/critiques for our project, and you get our feedback on them. (We have to read and label them for the project anyway.)
* Logistics: Unfortunately, you'd be a volunteer. I might be able to pay you a small amount out of pocket, but it's not going to be very much. Caspar and Em are both university-employed, and I am similar in means to an independent researcher. We are also all non-Americans based in the US, which makes it harder for us to acquire money for projects and such for boring and annoying reasons.
* Why we are good mentors: Caspar has dozens of publications on related topics, Em has a handful, and I have been around.

2. Be a saint and help with acausal research by doing tedious manual labor while getting little in return. We also need help with various grindy tasks that a…
Here's an argument I made in 2018 during my philosophy studies: a lot of animal welfare work is technically "longtermist" in the sense that it's not about helping already-existing beings. Farmed chickens, shrimp, and pigs only live for a couple of months, and farmed fish for a few years; people's work typically takes longer than that to have an impact on animal welfare. For most people, this is no reason not to work on animal welfare. It may be unclear whether creating new creatures with net-positive welfare is good, but only the most hardcore presentists would argue against preventing and reducing the suffering of future beings. And once you accept the moral goodness of that, there's little to morally distinguish the suffering of chickens in the near future from the astronomical amounts of suffering that an artificial superintelligence could inflict on humans, other animals, and potential digital beings. It could even lead to the spread of factory farming across the universe! (Though I consider that unlikely.) The distinction comes in at the empirical uncertainty/speculativeness of reducing s-risks, but I'm not sure that uncertainty is treated the same as uncertainty about shrimp or insect welfare. I suspect many people instead work on effective animal advocacy because that's where their emotional affinity lies and it's become part of their identity, because they don't like acting on theoretical philosophical grounds, and because they feel discomfort imagining the reaction of their social environment if they were to work on AI/s-risk. I understand this, and I love people for doing so much to make the world better. But I don't think it's philosophically robust.
Having a baby and becoming a parent has had an incredible impact on me. Now more than ever, I feel connected to and concerned about the wellbeing of others; I feel as though my heart has literally grown. I wanted to share this because I expect there are many others questioning whether to have children, perhaps due to concerns about it limiting their positive impact, among many other reasons. I'm just here to say it's been beautiful and amazing, and I look forward to the day I get to talk with my son about giving back in a meaningful way.
Would anyone be up for reading and responding to the article "Effective altruism is a movement that excludes poor people"? I find myself agreeing with a lot of it.
Are we (the relatively wealthy) the moral equivalent of Nazis? The Nazi party murdered c. 12 million people* in the Holocaust and had 8 million members, making each member 'individually responsible' for c. 1.5 deaths. If it costs 5,000 USD to save a life, and someone has more than 7,500 USD to spare but spends that money on a car, they have likewise failed to save c. 1.5 lives; are they the moral equivalent of a Nazi**? Has anyone else experienced the grief of realising the extent of this atrocity embedded in casual day-to-day life in middle-class parts of wealthy countries? Did you also have a breakdown? My best way out of this has been to view strategy as more important than morality: moral positions won't do anything helpful here. Instead I grieve and use the clarity that grief brings to spread awareness, building a movement to deeply change things***.

Footnotes
* This is the best figure available, even though the numbers are unclear: https://www.ilholocaustmuseum.org/holocaust-misconceptions/
** I'm not one to believe in individual responsibility in this way; the system that has led to this level of inequality has been built up by many people over many centuries, each one with good intentions in some way or another. But the same applies to the creation of the Nazi party and the circumstances that led people to be part of it. Additionally, this is an 'outcome-based' morality rather than an 'intention-based' morality; it's down to you to decide whether that is a valid way of assessing morality.
*** Just because someone is the moral equivalent of a Nazi doesn't mean that yelling at them (/yourself) that they're a Nazi is necessarily the best way to change their (/your) behaviour. I'm open to all the options, as long as they change this stuff, and fast.
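The comparison above turns on a small piece of arithmetic. A minimal sketch, using the post's own figures (the author's rough estimates, not authoritative data), shows why 7,500 USD is the chosen threshold:

```python
# Checking the arithmetic behind the quick take's comparison.
# All figures are the post's own rough estimates.

holocaust_deaths = 12_000_000   # "c. 12 million people" murdered
party_members = 8_000_000       # "8 million members" of the Nazi party
deaths_per_member = holocaust_deaths / party_members
print(deaths_per_member)        # 1.5 deaths 'attributable' per member

cost_to_save_life = 5_000       # USD, the post's assumed cost to save a life
spare_money = 7_500             # USD spent on a car instead of donated
lives_not_saved = spare_money / cost_to_save_life
print(lives_not_saved)          # 1.5 — the same figure, which is the point of the analogy
```

The 7,500 USD figure is picked so that the unsaved lives (7,500 / 5,000 = 1.5) exactly match the per-member death toll (12M / 8M = 1.5), making the two cases numerically parallel.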
Just sharing my 2024 Year in Review post from Good Thoughts. It summarizes a couple dozen posts in applied ethics and ethical theory (including issues relating to naive instrumentalism and what I call "non-ideal decision theory") that would likely be of interest to many forum readers. (Plus a few more specialist philosophy posts that may only appeal to a more niche audience.)