I'm curious what people who're more familiar with infinite ethics think of Manheim & Sandberg's What is the upper limit of value?, in particular where they discuss infinite ethics (emphasis mine):
...Bostrom’s discussion of infinite ethics is premised on the moral relevance of physically inaccessible value. That is, it assumes that aggregative utilitarianism is over the full universe, rather than the accessible universe. This requires certain assumptions about the universe, as well as being premised on a variant of the incomparability argument that we dism
If you maximize expected value, you should be taking expected values through small probabilities, including that we have the physics wrong or that things could go on forever (or without hard upper bound) temporally. Unless you can be 100% sure there are no infinities, your expected values will be infinite or undefined. And there are, I think, hypotheses that can't be ruled out and that could involve infinite affectable value.
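The step from "can't be 100% sure" to "infinite or undefined expected value" can be made explicit (a sketch, not in the original post): if you assign any credence $p > 0$ to a hypothesis on which affectable value is infinite, then

$$\mathbb{E}[V] = p \cdot \infty + (1 - p) \cdot v_{\text{finite}} = \infty,$$

no matter how small $p$ is. And if hypotheses with value $+\infty$ and $-\infty$ both get positive credence, the expectation is undefined rather than infinite.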
In response to Carl Shulman on acausal influence, David Manheim said to renormalize. I'm sympathetic and would probably agree with doing some...
I didn't want to read all of @Vasco Grilo🔸's post on the "meat eating" problem and all 80+ comments, so I expanded all the comments and copy/pasted the entire webpage into Claude with the following prompt: "Please give me a summary of the author's argument (dot points, explained simply) and then give me a summary of the kinds of push back he got (dot points, explained simply, thematised, giving me a sense of the concentration/popularity of themes in the push back)"
Below is the result (the Forum team might want to consider how posts with large numbers of co...
I came across this extract from John Stuart Mill's autobiography on his experience of a period when he became depressed and lost motivation in his goal of improving society. It sounded similar to what I hear from time to time of EAs finding it difficult to maintain motivation and happiness alongside altruism, and thought some choice quotes would be interesting to share. Mill's solution was finding pleasure in other pursuits, particularly poetry.
Mill writes that his episode started in 1826, when he was 20 years old - but he had already been a keen utilitari...
Why does distributing malaria nets work? Why hasn't everyone bought a bednet already?
Haven't seen anyone mention RAND as a possible best charity for AI stuff and I guess I'd like to throw their hat in the ring or at least invite people to tell me why I'm wrong. My core claims are approximately:
wrote this as a kind of reflection or metaphor kind-of-inspired by the recent discourse about the animal eating problem. i tried rewriting it as a Legible EA Forum Version but it felt superficial, i'll just leave it like this and ask anyone seeing this to disregard if not interested.
...you are an entity from an abstract, timeless nonexistence. you have been accidentally summoned into a particular world by its inhabitants and physics. some patterns which inhabit this world are manipulating light to communicate faster than any others and studying the space of a
Adult film star Abella Danger apparently took a class on EA at University of Miami, became convinced, and posted about EA to raise $10k for One for the World. She was PornHub's most popular female performer in 2023 and has ~10M followers on Instagram. Her post has ~15k likes, and comments seem mostly positive.
I think this might be the class that @Richard Y Chappell🔸 teaches?
Thanks Abella and kudos to whoever introduced her to EA!
Has anyone thought about donation swaps for tax-deductible giving (a bit like kidney paired donations)? I feel like a good amount of people would be excited about giving to nonprofits that fall outside of the typical options (eg. LEEP or SWP instead of Givewell or THL), but end up defaulting to the latter because that's the only way for them to make tax-deductible donations to EA-aligned charities in their country.
I would be excited about a mechanism allowing me to make tax-deductible donations to a charity of choice for someone in my country, who wo...
There used to be such a system: https://forum.effectivealtruism.org/posts/YhPWq784eRDr5999P/announcing-the-ea-donation-swap-system It got shut down 7 months ago (see the comments on that post).
I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo's recent post arguing that some of GiveWell's grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I'll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.[1]
Let's say that a moral decision process is dogmatic if it's completely insensitive to the numbers on either side of the trade-off. Non-dogmatism rejects dogmatic moral decision processes.
A central ...
I think in practice most people have ethical frameworks where they have lexicographic preferences, regardless of whether they are happy making other decisions using a cardinal utility framework.
I suspect most animal welfare enthusiasts presented with the possibility of organising a bullfight wouldn't respond with "well how big is the audience?". I don't think their reluctance to determine whether bullfighting is ethical or not based on the value of estimated utility tradeoffs reflects either a rejection of the possibility of human welfare or a speciesist bi...
Contra Vasco Grilo on GiveWell may have made 1 billion dollars of harmful grants, and Ambitious Impact incubated 8 harmful organisations via increasing factory-farming?
The post above explores how, under a hedonistic utilitarian moral framework, the meat-eater problem may result in GiveWell grants or AIM charities being net-negative. The post seems to argue that, on expected value grounds, one should let children die of malaria because they could end up eating chicken, for example.
I find this argument morally repugnant and want to highlight it. Using some ...
So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?
Trolley problems are sufficiently abstract -- and presented in the context of an extraordinary set of circumstances -- that they are less likely to trigger some of the concerns (psychological or otherwise) triggered by the present case. In contrast, lifesaving activity is pretty common -- it's hard to estimate how many times the median per...
Reflections on Two Years at EA Germany
I'm stepping down this week after two years as co-director of EA Germany. While I deeply valued the team and helped build successful structures, I stayed too long when my core values and personal fit no longer aligned.
When I joined EAD, I approached it like the other organisations I’ve worked with, planning on staying 5-10 years to create stability during growth and change. My co-director, Sarah, and I aimed to grow EAD quickly and sustainably. But the FTX collapse hit just as I started in November 2022, and the d...
Would it be feasible/useful to accelerate the adoption of hornless ("naturally polled") cattle, to remove the need for painful dehorning?
There are around 88M farmed cattle in the US at any point in time, and I'm guessing about an OOM more globally. These cattle are for various reasons frequently dehorned -- about 80% of dairy calves and 25% of beef cattle are dehorned annually in the US, meaning roughly 13-14M procedures.
Dehorning is often done without anaesthesia or painkillers and is likely extremely painful, both immediately and for some time afterwards...
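A rough sanity check on the 13-14M figure (a sketch: the annual dairy/beef calf counts below are illustrative assumptions consistent with the quoted rates, not figures from the post):

```python
# Back-of-the-envelope check of the "roughly 13-14M procedures" estimate.
# NOTE: the per-year herd figures below are illustrative assumptions,
# not sourced data; only the 80% / 25% rates come from the post.
dairy_calves_per_year = 9e6   # assumed annual US dairy calf crop
beef_cattle_per_year = 25e6   # assumed annual US beef cattle subject to dehorning

dehorning_procedures = 0.80 * dairy_calves_per_year + 0.25 * beef_cattle_per_year
print(f"{dehorning_procedures / 1e6:.2f}M procedures per year")
```

Under these assumed herd sizes the total lands at about 13.5M, within the range the post states.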
Thanks, that’s encouraging! To clarify, my understanding is that beef cattle are naturally polled much more frequently than dairy cattle, since selectively breeding dairy cattle to be hornless affects dairy production negatively. If I understand correctly, that’s because the horn growing gene is close to genes important for dairy production. And that (the hornless dairy cow problem) seems to be what people are trying to solve with gene editing.
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
Best Forum Post I read this year:
Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal...
Isn't mechinterp basically setting out to build tools for AI self-improvement?
One of the things people are most worried about is AIs recursively improving themselves. (Whether all people who claim this kind of thing as a red line will actually treat this as a red line is a separate question for another post.)
It seems to me like mechanistic interpretability is basically a really promising avenue for that. Trivial example: Claude decides that the most important thing is being the Golden Gate Bridge. Claude reads up on Anthropic's work, gets access to the rel...
A year ago, I wrote "It's OK to Have Unhappy Holidays" during a time when I wasn’t feeling great about the season myself. That post inspired someone to host an impromptu Christmas Eve dinner, inviting others on short notice. Over vegan food and wine, six people came together to share their feelings about the holidays, reflect on the past year with gratitude, and enjoy a truly magical evening. It’s a moment I’m deeply thankful for. Perhaps this could inspire you this year—to host a gathering or spontaneously reach out to those nearby for a walk, a drink, or a shared meal.
If transformative AI is defined by its societal impact rather than its technical capabilities (i.e. TAI as process not a technology), we already have what is needed. The real question isn't about waiting for GPT-X or capability Y - it's about imagining what happens when current AI is deployed 1000x more widely in just a few years. This presents EXTREMELY different problems to solve from a governance and advocacy perspective.
E.g. 1: compute governance might no longer be a good intervention
E.g. 2: "Pause" can't just be about pausing model development. It should also be about pausing implementation across use cases
Personal reasons why I wished I delayed donations: I started donating 10% of my income about 6 years back when I was making Software Engineer money. Then I delayed my donations when I moved into a direct work path, intending to make up the difference later in life. I don't have any regrets about 'donating right away' back then. But if I could do it all over again with the benefit of hindsight, I would have delayed most of my earlier donations too.
First, I've been surprised by 'necessary expenses'. Most of my health care needs have been in therapy and denta...
According to this article, CEO shooter Luigi Mangione:
really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism
Doesn't look like he was part of the EA movement proper (which is very clear about nonviolence), but could EA principles have played a part in his motivations, similarly to SBF?
I personally think people overrate people's stated reasons for extreme behaviour and underrate the material circumstances of their life. In particular, loneliness
As one counterexample, EA is really rare in humans, but does seem more fueled by principles than situations.
(Otoh, if situations make one more susceptible to adopting some principles, is any really the "true cause"? Like plausibly me being abused as a child made me want to reduce suffering more, like this post describes. But it doesn't seem coherent to say that means the principles are overstated ...