Joseph_Chu

225 karma · Joined

Bio

An eccentric dreamer in search of truth and happiness for all. Formerly posted on Felicifia back in the day under the name Darklight. Been a member of Less Wrong and involved in Effective Altruism since roughly 2013.

Comments (52)

I tried asking ChatGPT, Gemini, and Claude to come up with a formula that converts from correlation space to probability space while preserving the property that a correlation of 0 maps to a probability of 1/n. I came up with such a formula a while back, so I figured it shouldn't be hard. They all offered formulas, all of which turned out to be very much wrong when I actually graphed them to check.
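
For illustration, one simple formula with that anchor property is a piecewise-linear map sending a correlation of -1 to 0, 0 to 1/n, and 1 to 1. This is only a hypothetical sketch, not necessarily the formula I originally derived:

```python
import numpy as np

def corr_to_prob(r, n):
    """Hypothetical piecewise-linear map from correlation r in [-1, 1] to
    probability p in [0, 1]: r = -1 -> 0, r = 0 -> 1/n (chance), r = 1 -> 1."""
    chance = 1.0 / n
    r = np.asarray(r, dtype=float)
    return np.where(r >= 0, chance + r * (1.0 - chance), chance * (1.0 + r))

# Anchor-point check for n = 4 options: expect [0.0, 0.25, 1.0]
print(corr_to_prob([-1.0, 0.0, 1.0], n=4))
```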

My wife and I really, really liked The Good Place. I also got us a copy of How To Be Perfect and thought it was a decent read. Not particularly EA, but well balanced in considering all the major Western schools of moral philosophy and giving each a fair hearing. I do think it was a bit lacking in its coverage of Eastern schools of thought, like the role-based ethics of Confucius, but I understand it was targeted towards an English-speaking audience.

As a primer on ethics, it's very approachable, but I do think it simplifies some things and feels ever so slightly biased against consequentialism and towards something like virtue ethics. I'll admit, though, that I'm pro-utilitarianism and might be biased in the other direction myself.

From an EA perspective, it may not be the best introduction to us. I believe EA gets a mention, but mostly in the form of the view that Peter Singer and his arguments are very demanding, perhaps unreasonably so, albeit a logical and important nudge towards caring and doing more (the author hedges a lot in the book).

At the end of the day, the book shies away from deciding which moral theory is more correct, and as such it takes a somewhat wishy-washy, choose-your-own-morality-from-a-menu-of-possibilities approach, which disappointed me a little (though I also understand that picking sides would be controversial). I'd still recommend the book to someone relatively unfamiliar with morality and ethics, because it's a much friendlier introduction than, say, a moral philosophy textbook would be.

So, the $5,000 to save a human life actually saves more than one human life. The world fertility rate is currently 2.27 children per woman, but it is expected to decline to 1.8 by 2050 and 1.6 by 2100. Let's assume this trend continues at a rate of -0.2 per 50 years until it eventually reaches zero around 2500. Since it takes two people to have children, we halve these numbers to get an estimate of how many descendants to expect from a given saved human life each generation.

If each generation is ~25 years, then the numbers follow a series like 1.135 + 0.9 + 0.85 + 0.8 + ... (continuing down in steps of 0.05 until it reaches zero), which works out to 9.685 human lives per $5,000, or $516.26 per human life. Human life expectancy is increasing, but for simplicity let's assume 70 years per human life.

70 / $516.26 = 0.13559 human life years per dollar.

So, if we weigh chickens equally with humans, this favours the chickens still.

However, we can add the neuron count proxy to weigh these. Humans have approximately 86 billion neurons, while chickens have about 220 million. That's a ratio of roughly 390.

0.13559 x 390 = 52.88 neuron-weighted human life years per dollar.

This is slightly more than the 41 chicken life years per dollar, which, given my many, many simplifying assumptions, would mean that global health is still (slightly) more cost effective.
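
For anyone who wants to check the arithmetic, here's a minimal sketch of the calculation above (assuming the descendant series declines in steps of 0.05 per 25-year generation, and using the rounded neuron ratio of 390):

```python
# Back-of-the-envelope numbers from the comment above.
cost_per_life_saved = 5000   # dollars
life_expectancy = 70         # years, simplifying assumption

# Expected descendants per generation: 1.135 now, then 0.9, 0.85, ... down to 0.05.
series = [1.135] + [0.9 - 0.05 * i for i in range(18)]
lives_per_donation = sum(series)                          # ~9.685 lives
cost_per_life = cost_per_life_saved / lives_per_donation  # ~$516.26

life_years_per_dollar = life_expectancy / cost_per_life   # ~0.1356

# Neuron-count proxy: ~86 billion human neurons vs ~220 million chicken neurons.
neuron_ratio = 390                                        # 86e9 / 220e6, rounded
weighted = life_years_per_dollar * neuron_ratio           # ~52.88

print(lives_per_donation, cost_per_life, life_years_per_dollar, weighted)
```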

In the interests of furthering the debate, I'll quickly offer several additional arguments that I think can favour global health over animal welfare.

Simulation Argument

The Simulation Argument says that it is very likely we are living in an ancestor simulation rather than base reality. Given that it is likely human ancestors that the simulators are interested in fully simulating, non-human animals are probably not simulated to the same degree of granularity and may not be sentient.

Pinpricks vs. Torture

This is a trolley-problem-style scenario. It has also been discussed by Eliezer Yudkowsky as the case of a speck of dust in 3^^^3 people's eyes vs. one human being tortured for 50 years, and an analogous point is made in the famous short story The Ones Who Walk Away From Omelas by Ursula K. Le Guin. The basic idea is to question whether scope sensitivity is justified.

I'll note that a way to avoid this is to adopt Maximin rather than Expected Value as the decision function, as was suggested by John Rawls in A Theory of Justice.
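
To make the contrast between the two decision functions concrete, here's a toy sketch with made-up welfare numbers: summing utility prefers the single catastrophic harm once the minor harm is spread over enough people, while maximin prefers whichever option leaves its worst-off individual least badly off.

```python
# Illustrative only: made-up welfare numbers for a dust-specks-style dilemma.
# Option A: a tiny harm (-1) to each of 10 million people.
# Option B: a huge harm (-1,000,000) to a single person, nothing to the rest.
option_a = [-1] * 10_000_000
option_b = [-1_000_000] + [0] * 9_999_999

def total_utility(welfares):
    # Classical aggregation: sum welfare across everyone.
    return sum(welfares)

def maximin(welfares):
    # Rawlsian rule: judge an option by its worst-off individual.
    return min(welfares)

print(total_utility(option_a), total_utility(option_b))  # -10,000,000 vs -1,000,000: total utility prefers B
print(maximin(option_a), maximin(option_b))              # -1 vs -1,000,000: maximin prefers A
```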

Incommensurability

In moral philosophy there's a concept called incommensurability: the idea that some things are simply not comparable. Some might argue that human and animal experiences are incommensurable, that we cannot know what it is like to be a bat, for instance.

Balance of Categorical Responsibilities

There are, in philosophies like Confucianism, notions like filial piety that support a kind of hierarchy of moral circles, such that family strictly dominates the state, and so on. In the extreme, this leads to a kind of ethical egoism that I don't think any altruist would subscribe to, but which seems a common way of thinking among laypeople, and conservatives in particular. I don't suggest this option, but I mention it as an extreme case.

Utilitarianism in contrast tends to take the opposite extreme of equalizing moral circles to the point of complete impartiality towards every individual, the greatest good for the greatest number. This creates a kind of demandingness that would require us to sacrifice pretty much everything in service of this, our lives devoted entirely to something like shrimp welfare.

Rather than taking either extreme, it's possible to balance things according to the idea that we have separate, categorical responsibilities to ourselves, to our family, to our nation, to our species, and to everyone else, and to put resources into each category so that none of our responsibilities are neglected in favour of others, a kind of meta or group impartiality rather than individual impartiality.

Yeah, I should probably retract the "we need popular support to get things done" line of reasoning.

I think lying to myself is probably, on reflection, something I do to avoid actually lying to others, as described in that link in the footnote. I kind of decide that a belief is "plausible" and then give it some conditional weight, a kind of "humour the idea and give it the benefit of the doubt". It's kind of a technicality thing that I do because I'm personally very against outright lying, so I've developed a kind of alternative way of fudging to avoid hurt feelings and such.

This is likely related to the "spin" concept that I adopted from political debates. The idea of "spin" to me is to tell the truth from an angle that encourages a perception that is favourable to the argument I am trying to make. It's something of a habit, and most probably epistemically highly questionable and something I should stop doing.

I think I also use these things to try to take an intentionally more optimistic outlook and be more positive in order to ensure best performance at tasks at hand. If you think you can succeed, you will try harder and often succeed where if you'd been pessimistic you'd have failed due to lack of resolve. This is an adaptive response, but it admittedly sacrifices some accuracy about the actual situation.

For one's beliefs about what is true to be influenced by anything other than evidence it might be or not be true, is an influence which will tend to diverge from what is true, by definition.

Though, what if I consider the fact that many people have independently reached a certain belief to itself be evidence that that belief might be true?

I would almost certainly add an animal welfare charity to my charitable giving portfolio. 

I previously had the Good Food Institute in the portfolio before financial challenges led me to trim it, so I might bring that back, or do some more research into the most effective animal welfare charity and add it alongside AMF and GiveDirectly as my primary contributions.

Given that a solid majority of EAs on the forum seem to strongly favour animal welfare, with very rigorous arguments for it, and given my propensity to weigh "wisdom of crowds" majority opinion as evidence towards a given view, I'm leaning towards actually doing this.

Sorry for the delayed response.

i'm modelling this as: basic drive to not die -> selects values that are compatible with basic drive's fulfillment.

i've been wondering if humans generally do something like this. (in particular to continue having values/cares after ontological crises like: losing belief in a god, or losing a close other who one was dedicated to protecting.) 

This does seem like a good explanation of what happened. It does imply that I had motivated reasoning though, which probably casts some doubt on those values/beliefs being epistemically well grounded.

in case anyone has similar thoughts: to have the level of altruism to even consider the question is extremely rare. there are probably far better things you can do, than just dying and donating; like earning to give, or direct research, or maybe some third thing you'll come up with. (most generally, the two traits i think are needed for research are intelligence and creativity. this is a creative, unintuitive moral question to ask. and my perception is that altruism and intelligence correlate, but i could be wrong about that, or biased from mostly seeing EAs.)

These words are very kind. Thank you.

I'm starting to think it was a mistake for me to engage in this debate week thing. I just spent a good chunk of my baby's first birthday arguing with strangers on the Internet about what amounts to animals vs. humans. This does not seem like a good use of my time, but I'm too pedantic to resist replying to comments I feel the need to reply to. -_-

In general, I feel like this debate week thing seems somewhat divisive as well. At least, it doesn't feel nice to have so many disagrees on my posts, even if they still somehow got a positive amount of karma.

I really don't have time to make high-effort posts, and it seems like low-effort posts do a disservice to people who are making high-effort posts, so I might just stop.

Oh, you edited your comment while I was writing my initial response to it.

There's not actually any impractical 'ideal-ness' to it. We already can factor in animal preferences, because we already know them, because they reactively express their preference to not be in factory farms.

(Restating your position as this also seems dishonest to me; you've displayed awareness of animals' preferences from the start, so you can't believe that it's intractable to consider them.)

We can infer their preferences not to suffer, but we can't know what their "morality" is. I suspect chickens and most animals in general are very speciesist and probably selfish egoists who are partial to next-of-kin, but I don't pretend to know this.

It's getting late in my time zone, and I'm getting sleepy, so I may not reply right away to future comments.

I do think we should establish our priors based on what other people think and teach us. This is how all humans normally learn anything outside their direct experience. One way to do this is to democratically canvass everyone to get their knowledge. That establishes our initial priors about things: individual people can be wrong, but many people are less likely to all be wrong about the same thing. False beliefs tend to be uncorrelated, while true beliefs align with some underlying reality and correlate more strongly. We can then modify our priors based on further evidence from things like direct experience, scientific experiments and analysis, or whatever other sources you find informative.
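
As a toy illustration of that point (a minimal simulation with made-up noise parameters, not real survey data): independent errors largely cancel out when you aggregate many people's estimates, while a shared, correlated bias does not.

```python
import random
from statistics import mean

random.seed(0)
truth = 10.0
n_people = 10_000

# Uncorrelated errors: each person's estimate is independently noisy.
independent = [truth + random.gauss(0, 5) for _ in range(n_people)]

# Correlated errors: everyone shares a common bias on top of their own noise.
shared_bias = random.gauss(0, 5)
correlated = [truth + shared_bias + random.gauss(0, 5) for _ in range(n_people)]

print(mean(independent))  # close to 10: independent errors wash out
print(mean(correlated))   # offset by the shared bias: correlated errors persist
```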

I should clarify, I am not saying we should pretend to have beliefs closer to theirs. I am saying that having such divergent views will make it harder to recruit them as EAs. It would therefore be better for EA as a movement if our views didn't diverge as much. I'm not saying to lie about what we believe to recruit them. That would obviously fail as soon as they figured out what we actually believe, and is also dishonest and lacks integrity.

And I think there can be epistemic compromise. You give the benefit of the doubt to other views by admitting your uncertainty and allowing the possibility that you're wrong, or they're wrong, and we could all be wrong and the truth is some secret third thing. It's basic epistemic humility to agree that we all have working but probably wrong models of the world.

And I apologize for the confusion. I am, as you suggested, still trying to figure out my real position, and coming up with arguments on the spot that mix my internal sentiments with external pressures in ways that may seem incoherent. I shouldn't have made it sound like I was suggesting compromising by deception. Calling things less than ideal and a compromise with reality was a mistake on my part.

I think the most probable reason I worded it that way was that I felt it wasn't ideal to only give weight to the popular morality of the dominant coalition, which you pointed out the injustice of. Ideally, we should canvass everyone, but because we can't canvass the chickens, it is a compromise in that sense.
