Quick takes

I'm curious what people who're more familiar with infinite ethics think of Manheim & Sandberg's What is the upper limit of value?, in particular where they discuss infinite ethics (emphasis mine):

Bostrom’s discussion of infinite ethics is premised on the moral relevance of physically inaccessible value. That is, it assumes that aggregative utilitarianism is over the full universe, rather than the accessible universe. This requires certain assumptions about the universe, as well as being premised on a variant of the incomparability argument that we dism

... (read more)
Mo Putera
Just noting for my own future edification this LW exchange between David Manheim (who argues that infinite ethics is irrelevant to actual decisions, per the paper above) and Joe Carlsmith (who argues the opposite, per his essay), which only increased my conviction that Manheim and Anders Sandberg were right. FWIW, here's Claude Sonnet 3.5 attempting to first steelman Carlsmith's essay and then give a neutral take on which stance is more right:

If you maximize expected value, you should be taking expected values through small probabilities, including the probability that we have the physics wrong or that things could go on forever (or without hard upper bound) temporally. Unless you can be 100% sure there are no infinities, your expected values will be infinite or undefined. And there are, I think, hypotheses that can't be ruled out and that could involve infinite affectable value.
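
To spell out the arithmetic behind that argument (my sketch, not the original commenter's): let p > 0 be one's credence in some hypothesis H under which affectable value is infinite. Then

\[
\mathbb{E}[V] \;=\; p\,\mathbb{E}[V \mid H] \;+\; (1-p)\,\mathbb{E}[V \mid \neg H],
\]

so \(\mathbb{E}[V \mid H] = \infty\) forces \(\mathbb{E}[V] = \infty\) no matter how small p is; and if hypotheses with value \(+\infty\) and \(-\infty\) both get positive credence, the expectation is undefined rather than infinite.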

In response to Carl Shulman on acausal influence, David Manheim said to renormalize. I'm sympathetic and would probably agree with doing some... (read more)

Mo Putera
Sandberg's recent 80K podcast interview transcript has this quote:

I didn't want to read all of @Vasco Grilo🔸's post on the "meat eating" problem and all 80+ comments, so I expanded all the comments and copy/pasted the entire webpage into Claude with the following prompt: "Please give me a summary of the author's argument (dot points, explained simply) and then give me a summary of the kinds of pushback he got (dot points, explained simply, thematised, giving me a sense of the concentration/popularity of themes in the pushback)"

Below is the result (the Forum team might want to consider how posts with large numbers of co... (read more)

Ozzie Gooen
Thanks for summarizing! It strikes me that the above criticisms don't really seem consequentialist/hedonic-utilitarian-focused. I'm curious if other criticisms are, or if some of these are intended as such (via some more complex logic like, "Acting in the standard-morality way will wind up being good for consequentialist reasons in some roundabout way"). More generally, those specific objections strike me as very weak. I'd expect and hope that people at Open Philanthropy and GiveWell would have better objections.

No worries :)

I personally think this is a conversation worth having, but I can imagine a bunch of reasons people wouldn’t want to. For one thing, it is a PR nightmare!

yanni kyriacos
No problem :)

I came across this extract from John Stuart Mill's autobiography on his experience of a period when he became depressed and lost motivation for his goal of improving society. It sounded similar to what I hear from time to time about EAs finding it difficult to maintain motivation and happiness alongside altruism, so I thought some choice quotes would be interesting to share. Mill's solution was finding pleasure in other pursuits, particularly poetry.

Mill writes that his episode started in 1826, when he was 20 years old - but he had already been a keen utilitari... (read more)

MHR🔸
Mill was working as a colonial administrator in the British East India Company at this point in his life, right? Could there have been a role for cognitive dissonance in driving his depression? 

I guess it's hard to know without being in Mill's head. Though from what I've read it doesn't sound like he ever really wavered from favouring Britain having India as a colony.

ClimateDoc
Well, everyone will have their own emotional journey - not everyone with motivations to do good will have an experience like Mill's! But the point about not making improving social welfare one's sole target, and having alternative sources of satisfaction, seems to me quite common in discussions around EA and mental health, at least for those who do have difficulties.

Why does distributing malaria nets work? Why hasn't everyone bought a bednet already?

  • If it's because they can't afford bednets, why don't more GiveDirectly recipients buy them?
  • Is it because nobody in the local area sells bednets? If so, why doesn't anyone sell them?
  • Is it because people don't think bednets are worth it? If so, why do they use the bednets when given them for free?
Karthik Tadepalli
Merely subsidizing nets, as opposed to distributing them free, used to be a much more popular idea. My understanding is that that model was nuked by this paper showing that demand for nets falls discontinuously at any positive price (a 60-percentage-point reduction in demand when going from a 100% subsidy to a 90% subsidy). So unless people's valuations of their children's lives are implausibly low, people are making mistakes in their choice of whether or not to purchase a bednet.

New Incentives, another GiveWell top charity, can move people to vaccinate their children with very small cash transfers (I think $10). The fact that $10 can mean the difference between whether people protect their children from life-threatening diseases or not is crazy if you think about it.

This is not a rare finding. This paper found very low household willingness to pay for cleaning up contaminated wells, which cause childhood diarrhea and thus death. Their estimates imply that households in rural Kenya are willing to pay at most $770 to prevent their child's death, which just doesn't seem plausible. Ergo, another setting where people are making mistakes. Another: demand for motorcycle helmets is stupidly low and implies that Nairobi residents value a statistical life at $220, less than 10% of annual income. Unless people would actually rather die than give up 10% of their income for a year, this is clearly another case where people's decisions do not reflect their true values.

This is not that surprising if you think about it. People in rich countries and poor countries alike are really bad at investing in preventative health. Each year I dillydally on getting the flu vaccine, even though I know the benefits are way higher than the costs, because I don't want to make the trip to CVS (an hour out of my day, max). My friend doesn't wear a helmet when cycling, even at night or in the rain, because he finds it inconvenient. Most of our better health in the rich world doesn't come from us actively mak... (read more)
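
As an aside, the revealed-preference arithmetic behind value-of-statistical-life claims like the $220 figure is simple division; here is a minimal sketch (the helmet price and risk numbers are placeholders I chose to reproduce a $220 VSL, not figures from the cited papers):

```python
# Illustrative value-of-statistical-life (VSL) arithmetic.
# The inputs are invented placeholders, NOT figures from the cited studies.

def implied_vsl(willingness_to_pay: float, annual_risk_reduction: float) -> float:
    """Upper bound on VSL revealed by declining a safety purchase:
    VSL <= WTP / (reduction in annual probability of death)."""
    return willingness_to_pay / annual_risk_reduction

# Someone who won't pay $2.20 for a helmet that would cut their annual
# risk of death by 1 percentage point reveals a VSL of at most:
print(implied_vsl(2.20, 0.01))  # 220.0 (dollars)
```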

I'm pretty sure the personal benefits of getting the flu vaccine for a male in his 20s or 30s aren't much higher than the costs. Agree on the bike helmet thing though.

MichaelDickens
I think this is the best explanation I've seen; it sounds likely to be correct.

Haven't seen anyone mention RAND as a possible best charity for AI stuff and I guess I'd like to throw their hat in the ring or at least invite people to tell me why I'm wrong. My core claims are approximately:

  • Influencing the US (federal) government is probably one of the most scalable cost-effective routes for AI safety.
  • Think tanks are one of the most cost-effective ways to influence the US government.
  • The prestige of the think tank matters for getting into the room/influencing change.
  • RAND is among the most prestigious think tanks doing AI safety work.
  • It's
... (read more)

I don't know, unfortunately; I'm basically just going off trusting the leadership to be cost-effective, plus they are in a really good position to influence policy and executive orders.

huw
I am seeing here that they already work closely with Open Philanthropy and were involved in drafting the Executive Order on AI. So this does not seem like a neglected avenue.
Charlie_Guthmann
Yeah, I have no idea if they actually need money. But if they still want to hire more people for the AI team, wouldn't it be better to give the money to RAND to hire those policy people rather than to, say, Americans for Responsible Innovation - which Open Phil currently recommends, but which is much less prestigious, and I'm not sure they work side by side with legislators? The fact that Open Phil gave grants to RAND but doesn't currently recommend it for individual donors makes me think you are right that they don't need money at the moment, but it would be nice to be sure.

wrote this as a kind of reflection or metaphor kind-of-inspired by the recent discourse about the animal eating problem. i tried rewriting it as a Legible EA Forum Version but it felt superficial, i'll just leave it like this and ask anyone seeing this to disregard if not interested.

you are an entity from an abstract, timeless nonexistence. you have been accidentally summoned into a particular world by its inhabitants and physics. some patterns which inhabit this world are manipulating light to communicate faster than any others and studying the space of a

... (read more)

Adult film star Abella Danger apparently took a class on EA at the University of Miami, became convinced, and posted about EA to raise $10k for One for the World. She was Pornhub's most popular female performer in 2023 and has ~10M followers on Instagram. Her post has ~15k likes, and the comments seem mostly positive.

I think this might be the class that @Richard Y Chappell🔸 teaches?

Thanks Abella and kudos to whoever introduced her to EA!

akash 🔸
(Tangential but related) There is probably a strong case to be made for recruiting EA-sympathetic celebrities to promote effective giving, and maybe even to raise funds. I am a bit hesitant about celebrity "cause promotion", but maybe some version of that idea is also defensible. Turns out, someone wrote about it on the Forum a few years ago, but I don't know how much discussion there has been on this topic since.

I follow a lot of YouTubers and streamers who run large-scale charitable events (example, example, example) and I've always thought about how great it would be to convince them to give the money to an effective charity.

Ben_West🔸
It looks like she did a giving season fundraiser for Helen Keller International, which she credits to the EA class she took. Maybe we will see her at a future EAG!

Has anyone thought about donation swaps for tax-deductible giving (a bit like kidney paired donation)? I feel like a good number of people would be excited about giving to nonprofits that fall outside the typical options (e.g. LEEP or SWP instead of GiveWell or THL), but end up defaulting to the latter because that's the only way for them to make tax-deductible donations to EA-aligned charities in their country.
I would be excited about a mechanism allowing me to make tax-deductible donations to a charity of choice for someone in my country, who wo... (read more)
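
To make the proposed mechanism concrete, here is a minimal sketch of how a swap matcher could work (the charity names are reused from above, but the deductibility assignments are invented purely for illustration):

```python
# Minimal sketch of a donation-swap matcher. Which charity is deductible
# in which country is assumed here purely for illustration.
from dataclasses import dataclass

@dataclass
class Donor:
    name: str
    country: str
    amount: float         # donation size, same currency for simplicity
    desired_charity: str  # charity NOT tax-deductible in the donor's country

# charity -> countries where donations to it are tax-deductible (assumed)
DEDUCTIBLE_IN = {"SWP": {"UK"}, "LEEP": {"US"}}

def can_swap(a: Donor, b: Donor) -> bool:
    """Each donor gives to the *other's* chosen charity in their own
    country, where it is deductible, and the amounts match."""
    return (a.amount == b.amount
            and b.country in DEDUCTIBLE_IN.get(a.desired_charity, set())
            and a.country in DEDUCTIBLE_IN.get(b.desired_charity, set()))

alice = Donor("Alice", "UK", 1000.0, "LEEP")  # can't deduct LEEP in the UK
bob = Donor("Bob", "US", 1000.0, "SWP")       # can't deduct SWP in the US
if can_swap(alice, bob):
    print("Swap: Alice donates to SWP in the UK; Bob donates to LEEP in the US.")
```

Each donor ends up directing money to the charity they care about while both keep their tax deductions.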

There used to be such a system: https://forum.effectivealtruism.org/posts/YhPWq784eRDr5999P/announcing-the-ea-donation-swap-system It got shut down 7 months ago (see the comments on that post).

Ian Turner
My impression is that most EA oriented charities can already accept tax-advantaged donations from most countries via intermediaries. Is there a particular country/charity combination you are thinking of that is not currently possible?
Arthur D
I think this is true for English-speaking countries, but less so for most European countries. In Switzerland, for example, you have direct access to 12 EA-aligned tax-deductible charities through Effektiv-Spenden (6 GH&W, 2 FAW, 4 climate change). GWWC, on the other hand, enables direct tax-deductible donations to 50+ funds and nonprofits across all main cause areas for the US, UK, and Australia (& the Netherlands).

I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo's recent post arguing that some of GiveWell's grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I'll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.[1]

Let's say that a moral decision process is dogmatic if it's completely insensitive to the numbers on either side of the trade-off. Non-dogmatism rejects dogmatic moral decision processes.

A central ... (read more)

MichaelStJules
EDIT: Rereading, I'm not really disagreeing with you; I definitely agree with the sentiment here. (Edited)

So, rather than the mere possibility that all tradeoffs between humans and chickens should favour humans, I take issue with >99% confidence in that position, or with otherwise treating it like it's true. Whatever someone thinks makes humans infinitely more important than chickens[1] could actually be present in chickens in some similarly important form with non-tiny or even modest probability (examples here), or it could turn out not to be what makes humans important at all (more general related discussion, although that piece defends a disputed position). In my view, this should in principle warrant some tradeoffs favouring chickens. Or, if they think there's nothing at all except the mere fact of species membership, then this is just pure speciesism and seems arbitrary.

1. ^ Or whatever makes humans matter at all but chickens lack, such that chickens don't matter at all.
Guive
I also disagree with those comments, but can you provide more argument for your principle? If I understand correctly, you are suggesting the principle that X can be lexicographically[1] preferable to Y if and only if Y has zero value. But, conditional on saying X is lexicographically preferable to Y, isn't it better for the interests of Y to say that Y nevertheless has positive value? I mean, I don't like it when people say things like "no amount of animal suffering, however enormous, outweighs any amount of human suffering, however tiny". But I think it is even worse to say that animal suffering doesn't matter at all, and there is no reason to alleviate it even if it could be alleviated at no cost to human welfare.

Maybe your reasoning is more like this: in practice, everything trades off against everything else. So, in practice, there is just no difference between saying "X is lexicographically preferable to Y but Y has positive value" and "Y has no value"?

1. ^ From SEP: "A lexicographic preference relation gives absolute priority to one good over another. In the case of two-goods bundles, A ≻ B if a₁ > b₁, or a₁ = b₁ and a₂ > b₂. Good 1 then cannot be traded off by any amount of good 2."
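
Incidentally, lexicographic ordering is exactly how tuples compare in many programming languages, which makes the definition above easy to play with; a toy sketch over two-goods bundles:

```python
# Toy illustration: lexicographic preference over two-goods bundles
# is just tuple comparison. Good 1 has absolute priority over good 2.
def lex_prefer(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """A ≻ B iff a1 > b1, or a1 == b1 and a2 > b2."""
    return a > b  # Python compares tuples lexicographically

# No amount of good 2 compensates for any shortfall in good 1:
print(lex_prefer((1.0, 0.0), (0.999, 10**9)))  # True
```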

I think in practice most people have ethical frameworks where they have lexicographic preferences, regardless of whether they are happy making other decisions using a cardinal utility framework.

I suspect most animal welfare enthusiasts presented with the possibility of organising a bullfight wouldn't respond with "well, how big is the audience?". I don't think their reluctance to determine whether bullfighting is ethical based on estimated utility tradeoffs reflects either a rejection of the possibility of human welfare or a speciesist bi... (read more)

Contra Vasco Grilo on "GiveWell may have made 1 billion dollars of harmful grants, and Ambitious Impact incubated 8 harmful organisations via increasing factory-farming?"

The post above explores how, under a hedonistic utilitarian moral framework, the meat-eater problem may make GiveWell grants or AIM charities net negative. It seems to argue that, on expected-value grounds, one should let children die of malaria because they could end up eating chicken, for example.

I find this argument morally repugnant and want to highlight it. Using some ... (read more)

Richard Y Chappell🔸
I'd say that it's a (putative) instance of adversarial ethics rather than "ends justify the means" reasoning (in the usual sense of violating deontic constraints). Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.)

So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?

OTOH, I think the meat-eater problem is misguided anyway, so another possibility is just that mistakenly urging against saving innocent people's lives is especially bad. I guess I do think the moral risk here is sufficient reason to be extra wary about how one expresses concerns like the meat-eater problem. Like Jason, I think it's much better to encourage AW offsets than to discourage GHD life-saving. (Offsetting the potential downsides from helping others seems like a nice general solution to the problem of adversarial ethics, even if it isn't strictly optimal.)

So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?

Trolley problems are sufficiently abstract -- and presented in the context of an extraordinary set of circumstances -- that they are less likely to trigger some of the concerns (psychological or otherwise) triggered by the present case. In contrast, lifesaving activity is pretty common -- it's hard to estimate how many times the median per... (read more)

PabloAMC 🔸
I agree with most of this, except perhaps the framing of the following paragraph. In my opinion, the key difference is that here the bad outcome (e.g. animal suffering, but it could be any other, really) may happen because of decisions taken by the people you are saving. So, in a sense, it is not an externally imposed mechanism. The key insight to me is that the children always have the chance to prevent the suffering that follows: people can reason and become convinced, as I was, that this suffering is important and should be prevented. Consequently, I feel strongly against letting innocent people die in these situations. So overall I do not think this has to do with speciesism or cause prioritisation.

Incidentally, this echoes many cultural themes in films and books: that people can change their minds, and that they should be given the chance to. Similarly, it is a common theme that you should not kill innocent people to prevent some bad thing from happening (think Thanos and overpopulation, or Herod condemning Jesus to die to prevent greater wrongdoings…). Clearly these are not strong ethical arguments, but I think they contain a grain of truth; and one should probably have a very strong (taboo-level) bias against endorsing (not discussing) conclusions that justify letting innocent people die.

Merry Christmas and happy holidays :)

Reflections on Two Years at EA Germany

I'm stepping down this week after two years as co-director of EA Germany. While I deeply valued the team and helped build successful structures, I stayed too long when my core values and personal fit no longer aligned.

When I joined EAD, I approached it like the other organisations I’ve worked with, planning on staying 5-10 years to create stability during growth and change. My co-director, Sarah, and I aimed to grow EAD quickly and sustainably. But the FTX collapse hit just as I started in November 2022, and the d... (read more)

AnonymousTurtle
Thank you for sharing this. Could you clarify what you mean by "my core values [...] no longer aligned"?

A core value of mine is to do good, per EA principles. This means I aim for a sustainable career in which my personal fit can have the highest counterfactual impact. That has not been the case in the last few months.

Would it be feasible/useful to accelerate the adoption of hornless ("naturally polled") cattle, to remove the need for painful dehorning?

There are around 88M farmed cattle in the US at any point in time, and I'm guessing about an OOM more globally. These cattle are for various reasons frequently dehorned -- about 80% of dairy calves and 25% of beef cattle are dehorned annually in the US, meaning roughly 13-14M procedures.
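
For what it's worth, here's the back-of-the-envelope arithmetic behind that 13-14M figure (the calf counts below are my own rough assumptions, not sourced numbers):

```python
# Rough back-of-the-envelope for annual US dehorning procedures.
# Both bases are my own loose assumptions, not sourced figures.
dairy_calves_per_year = 9_000_000   # assumed: ~one calf per US dairy cow
beef_cattle_base = 25_000_000       # assumed base for the 25% beef-cattle stat

procedures = 0.80 * dairy_calves_per_year + 0.25 * beef_cattle_base
print(f"~{procedures / 1e6:.1f}M procedures per year")  # ~13.4M
```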

Dehorning is often done without anaesthesia or painkillers and is likely extremely painful, both immediately and for some time afterwards... (read more)

MichaelStJules
More recent data for US beef cattle (APHIS USDA, 2017, p.iii):

Thanks, that’s encouraging! To clarify, my understanding is that beef cattle are naturally polled much more often than dairy cattle, since selectively breeding dairy cattle to be hornless negatively affects dairy production. If I understand correctly, that’s because the horn-growth gene is close to genes important for dairy production. And that (the hornless dairy cow problem) seems to be what people are trying to solve with gene editing.

Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!

Best Forum Post I read this year:

Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal... (read more)

Thank you so much, that means a lot to me!

EffectiveAdvocate🔸
As a bit of a lurker, let me echo all of this, particularly the appreciation of @Vasco Grilo🔸. I don't always agree with him, but adding some numbers makes every discussion better!
Toby Tremlett🔹
I wish it could be EV-mas every day... This is great, JWS - thanks for writing it! After Forum Wrapped is out in January, we should have a list of underrated posts (unsure on exact wording); we'll see how it compares.

Isn't mechinterp basically setting out to build tools for AI self-improvement?

One of the things people are most worried about is AIs recursively improving themselves. (Whether all people who claim this kind of thing as a red line will actually treat this as a red line is a separate question for another post.)

It seems to me like mechanistic interpretability is basically a really promising avenue for that. Trivial example: Claude decides that the most important thing is being the Golden Gate Bridge. Claude reads up on Anthropic's work, gets access to the rel... (read more)

A year ago, I wrote "It's OK to Have Unhappy Holidays" during a time when I wasn’t feeling great about the season myself. That post inspired someone to host an impromptu Christmas Eve dinner, inviting others on short notice. Over vegan food and wine, six people came together to share their feelings about the holidays, reflect on the past year with gratitude, and enjoy a truly magical evening. It’s a moment I’m deeply thankful for. Perhaps this could inspire you this year—to host a gathering or spontaneously reach out to those nearby for a walk, a drink, or a shared meal.

If transformative AI is defined by its societal impact rather than its technical capabilities (i.e. TAI as a process, not a technology), we already have what is needed. The real question isn't about waiting for GPT-X or capability Y - it's about imagining what happens when current AI is deployed 1000x more widely in just a few years. This presents EXTREMELY different problems to solve from a governance and advocacy perspective.

E.g. 1: compute governance might no longer be a good intervention
E.g. 2: "Pause" can't just be about pausing model development. It should also be about pausing implementation across use cases

Personal reasons why I wish I'd delayed donations: I started donating 10% of my income about 6 years back, when I was making software-engineer money. Then I delayed my donations when I moved onto a direct work path, intending to make up the difference later in life. I don't have any regrets about 'donating right away' back then. But if I could do it all over again with the benefit of hindsight, I would have delayed most of my earlier donations too.

First, I've been surprised by 'necessary expenses'. Most of my health care needs have been in therapy and denta... (read more)

According to this article, CEO shooter Luigi Mangione:

really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism

It doesn't look like he was part of the EA movement proper (which is very clear about nonviolence), but could EA principles have played a part in his motivations, similarly to SBF?


I personally think people overrate people's stated reasons for extreme behaviour and underrate the material circumstances of their lives. In particular, loneliness.

As one counterexample, EA is really rare in humans, but does seem more fueled by principles than situations.

(Otoh, if situations make one more susceptible to adopting some principles, is any really the "true cause"? Like plausibly me being abused as a child made me want to reduce suffering more, like this post describes. But it doesn't seem coherent to say that means the principles are overstated ... (read more)

Guive
Well, they could have. A lot of things are logically possible. Unless there is some direct evidence that he was motivated by EA principles, I don't think we should worry too much about that possibility. 
Jason
I don't see a viable connection here, unless you make "EA principles" vague enough to cover an extremely wide space (e.g., considering ~consequentialism an "EA principle"). 