Philosophy
Investigation of the abstract features of the world, including morals, ethics, and systems of value

Quick takes

10
3d
Okay, so one thing I don't get about "common sense ethics" discourse in EA is: which common sense ethical norms prevail? Different people even in the same society have different attitudes about what's common sense. For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - is immoral. But what about cases where the governing norms are ambiguous or changing?

For example, in the United States, it's considered customary to tip at restaurants and for deliveries, but there isn't much consensus on when and how much to tip, especially with digital point-of-sale systems encouraging people to tip in more situations. (Just as an example of how conceptions of "common sense ethics" can differ: I just learned that apparently you're supposed to tip the courier before you get a delivery now, or they might refuse to take your order at all. I grew up believing that you're supposed to tip after you get service, but many drivers expect you to tip beforehand.)

You're never required to tip as a condition of service, so what if you just never tipped and always donated the equivalent amount to highly effective charities instead? That sounds unethical to me, but technically it's legal and not a breach of contract.

Going further, what if you started a company, like a food delivery app, that hired contractors to do the important work and paid them subminimum wages[1], forcing them to rely on users' generosity (i.e. tips) to make a living? And then made a 40% profit margin and donated the profits to GiveWell? That also sounds unethical - you're taking with one hand and giving with the other. But in a capitalist society like the U.S., it's just business as usual.

1. ^ Under federal law and in most U.S. states, employers can pay tipped workers less than the minimum wage as long as their wages and tips add up to at least the minimum wage. However, many employers get away with not ensuring that tipped workers earn the minimum wage.
15
5mo
In his recent interview on the 80,000 Hours Podcast, Toby Ord discussed how nonstandard analysis and its notion of hyperreals may help resolve some apparent issues arising from infinite ethics (link to transcript). For those interested in learning more about nonstandard analysis, there are various books and online resources. Many involve fairly high-level math, as they are aimed at putting what was originally an intuitive but imprecise idea onto rigorous footing. Instead of those, you might want to check out H. Jerome Keisler's Elementary Calculus: An Infinitesimal Approach, which is freely available online. It's an introductory calculus textbook for college students that uses hyperreals instead of limits and epsilon-delta proofs to teach the essential ideas of calculus, such as derivatives and integrals. I haven't actually read this book but believe it is the best-known book of this sort. Here's another similar-seeming book by Dan Sloughter.
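To give a flavor of the infinitesimal approach, here is the textbook-standard worked example of a derivative computed with hyperreals rather than limits (the choice of f(x) = x² is my own illustration, not taken from the sources above):

```latex
\text{Let } \varepsilon \text{ be a nonzero infinitesimal, and let } f(x) = x^2. \text{ Then}
\[
\frac{f(x+\varepsilon) - f(x)}{\varepsilon}
  = \frac{x^2 + 2x\varepsilon + \varepsilon^2 - x^2}{\varepsilon}
  = 2x + \varepsilon,
\]
\[
f'(x) = \operatorname{st}(2x + \varepsilon) = 2x,
\]
\text{where } \operatorname{st}(\cdot) \text{ is the standard part function, which rounds a finite hyperreal to the nearest real number.}
```

Taking the standard part plays the role that the limit ε → 0 plays in the conventional development: it discards the leftover infinitesimal term instead of arguing about epsilons and deltas.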
4
2mo
I think this isn't mentioned enough in EA, and I feel the need to point out this quote from William_MacAskill_when-should-an-effective-altruist-donate.pdf (globalprioritiesinstitute.org): "" (p. 7)
10
5mo
Julia Nefsky is giving a research seminar at the Institute for Futures Studies titled "Expected utility, the pond analogy and imperfect duties", which sounds interesting for the community. It will be on September 27 at 10:00-11:45 (CEST) and can be attended for free, in person or online (via Zoom). You can find the abstract here and register here. I don't know Julia or her work, and I'm not a philosopher, so I cannot directly assess the expected quality of the seminar, but I've seen several seminars from the Institute for Futures Studies that were very good (e.g. from Olle Häggström - and on Sep 20 Anders Sandberg gives one as well). I hope this is useful information.
7
5mo
1
I'm curious what people who're more familiar with infinite ethics think of Manheim & Sandberg's What is the upper limit of value?, in particular where they discuss infinite ethics (emphasis mine): I first read their paper a few years ago and found their arguments for the finiteness of value persuasive, as well as their collectively exhaustive responses in section 4 to possible objections. So ever since then I've been admittedly confused by claims that the problems of infinite ethics still warrant concern w.r.t. ethical decision-making (e.g. I don't really buy Joe Carlsmith's arguments for acknowledging that infinities matter in this context, same for Toby Ord's discussion in a recent 80K podcast). What am I missing?
11
8mo
1
Steelmanning is typically described as responding to the "strongest" version of an argument you can think of. Recently, I heard someone describe it a slightly different way: as responding to the argument that you "agree with the most." I like this framing because it signals an extra layer of epistemic humility: I am not a perfect judge of what the best possible argument is for a claim. In fact, reasonable people often disagree on what constitutes a strong argument for a given claim.

This framing also helps avoid a tone of condescension that sometimes comes with steelmanning. I've been in a few conversations in which someone says they are "steelmanning" some claim X, but says it in a tone of voice that communicates two things:

* The speaker thinks that X is crazy.
* The speaker thinks that those who believe X need help coming up with a sane justification for X, because X-believers are either stupid or crazy.

It's probably fine to have this tone of voice if you're talking about flat earthers or young earth creationists, and are only "steelmanning" X as a silly intellectual exercise. But if you're in a serious discussion, framing "steelmanning" as being about the argument you "agree with the most" rather than the "strongest" argument might help signal that you take the other side seriously.

Anyone have thoughts on this? Has this been discussed before?
8
9mo
I think we separate causes and interventions into "neartermist" and "longtermist" categories too much. Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective.

This is the opposite failure mode of surprising and suspicious convergence - instead of assuming (or rationalizing) that the spaces of interventions that are promising from neartermist and longtermist perspectives overlap a lot, we tend to assume they don't overlap at all, because it's more surprising if the top longtermist causes are all different from the top neartermist ones. If the cost-effectiveness of different causes according to neartermism and longtermism is independent (or at least somewhat positively correlated), I'd expect at least some causes to be valuable according to both ethical frameworks. I've noticed this in my own thinking, and I suspect that this is a common pattern among EA decision makers; for example, Open Phil's "Longtermism" and "Global Health and Wellbeing" grantmaking portfolios don't seem to overlap.

Consider global health and poverty. These are usually considered "neartermist" causes, but we can also tell a just-so story about how global development interventions such as cash transfers might also be valuable from the perspective of longtermism:

* People in extreme poverty who receive cash transfers often spend the money on investments as well as consumption. For example, a study by GiveDirectly found that people who received cash transfers owned 40% more durable goods (assets) than the control group. Also, anecdotes show that cash transfer recipients often spend their funds on education for their kids (a type of human capital investment), starting new businesses, building infrastructure for their communities, and h
4
4mo
Greetings! I'm a doctoral candidate, and I have spent three years working as a freelance creator specializing in crafting visual aids, particularly of a scientific nature. I'm enthusiastic about contributing my time to generate visuals that effectively support EA causes. Typically, my work involves producing diagrams for academic grant applications, academic publications, and presentations, but I'm open to assisting with outreach illustrations or social media visuals as well. If you find yourself in need of such assistance, please don't hesitate to get in touch! I'm happy to hop on a Zoom chat.