Achim

Comments
This is an interesting question. Even if the conditions were not fulfilled in almost all cases, I have not yet seen an answer to this question concerning ethical judgements in the cases where they are fulfilled.

When considering this question, the more general point is that the way that different animals are farmed should make some difference in ethical judgement. This post is about quantitative comparisons of suffering, but the differences in farming seem to be neglected. In particular, Brian Tomasik's table on which this post is based ranks different animals by "Equivalent days of suffering caused per kg demanded", but this comparison is strongly driven by column 5, "Suffering per day of life (beef cows = 1)":

"Column 5 represents my best-guess estimates for how bad life is per day for each of type of farm animal, relative to that animal's intrinsic ability to suffer. That is, differences in cognitive sophistication aren't part of these numbers because they're already counted in Column 4. Rather, Column 5 represents the "badness of quality of life" of the animals. For instance, since I think the suffering of hens in battery cages is perhaps 4 times as intense per day as the suffering of beef cows, I put a "1" in the beef-cow cell and "4" in the egg cell."

I don't mind using subjective estimates in such calculations, but note that this assumes that an average day in the life of each of these animals is a day of suffering. That may be the case in factory farming, but I doubt it is a necessary assumption for cows on alpine pasture. If life is good on an average day for a cow on alpine pasture, we would need a negative sign in Column 5.

You can enter a negative sign in the table, but you'll get an error message, because the whole table is based on the assumption that "Suffering per day of life" is positive. Under this assumption, raising the "Average lifespan (days)" (Column 2) increases the "Equivalent days of suffering caused per kg demanded". If that holds, then it is good that farmed animals are "killed at a fraction of their natural lifespans".

Moreover, Tomasik writes, "Column 6 is a best-guess estimate of the average pain of slaughter for each animal, expressed in terms of an equivalent number of days of regular life for that animal. For instance, I used "10" as an estimate for broiler chickens, which means I assume that on average, slaughter is as painful as 10 days of pre-slaughter life."

If the animals actually enjoy their life (a negative number in column 5), you can still use the table by entering a negative number in column 6 as well; these are the days of good life an animal would forgo by being slaughtered. So if we take the numbers in the table for beef and assume that column 5 is -1 (though I don't know how to interpret this, as everything is relative to beef-cow suffering), we need to enter -395 in column 6 to get zero in column 7.

I'd be interested if someone has a more general calculator.
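To illustrate what I mean, the sign logic can be sketched in a few lines. The function below is my own reading of how the table's columns combine (slaughter pain in column 6 is expressed as equivalent days of life, so it is added to the lifespan before weighting); it is not Tomasik's actual spreadsheet, and the specific numbers, including the kg-per-animal figure, are purely illustrative:

```python
def days_of_suffering_per_kg(lifespan_days, sentience_weight,
                             suffering_per_day, slaughter_equiv_days,
                             kg_per_animal):
    """Equivalent days of suffering caused per kg demanded.

    Unlike the original table, suffering_per_day may be negative
    (a net-positive life). In that case, a negative
    slaughter_equiv_days stands for days of good life forgone
    through slaughter.
    """
    # Slaughter pain is expressed as an equivalent number of days of
    # regular life, so it is added to the lifespan before weighting.
    return (sentience_weight * (lifespan_days + slaughter_equiv_days)
            * suffering_per_day) / kg_per_animal

# Illustrative only: a cow living 395 days with a net-positive life
# (column 5 = -1) that forgoes 395 good days through slaughter
# (column 6 = -395) yields zero net suffering per kg.
net = days_of_suffering_per_kg(lifespan_days=395, sentience_weight=1.0,
                               suffering_per_day=-1.0,
                               slaughter_equiv_days=-395,
                               kg_per_animal=212.5)
```

With a positive suffering-per-day value, the same function reproduces the table's behaviour that a longer lifespan increases suffering per kg; with a negative value, a longer lifespan reduces it.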

"While from an emotional perspective, I care a ton about our kids' wellbeing, from a utilitarian standpoint this is a relatively minor consideration given I hope to positively impact many beings' lives through my career."

I find this distinction a bit confusing. After all, every hour spent with your kid is probably "relatively minor" compared to the counterfactual impact of that hour on "many beings' lives". So it seems to me that evaluating personal costs, expected experiences, and so on only makes sense at all if the kid's wellbeing is very important to you, or do I misunderstand that?

"Thus, we looked at the child's wellbeing on a very high level - guessing that our children have good chances at a net positive life because they will likely grow up with lots of resources and a good social environment."

Out of interest: Do you consider catastrophic risks to be small enough not to matter, and is there a point at which that would change?

Thanks for the article.

Did aspects of the child's wellbeing, expected life satisfaction, life expectancy, etc. enter your considerations?

Thanks! I read it; it's an interesting post, but it's not "about reasons for his AI skepticism". Browsing the blog, I assume I should read this?

Which of David's posts would you recommend as a particularly good example and starting point?

"Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks?"

It is of course a relevant question who this community is supposed to consist of, but at the same time, this question could be asked whenever someone refers to the community as a collective agent doing something, having a certain opinion, benefitting from something etc. For example, you write "They may be interested in community input for their funding, via regranting for example, or invest in the Community". If you can't define the community, you cannot clearly say that someone invested in it. You later speak of "managing the relationship between the community and its most generous funder", but it seems hard to say how this relationship is currently managed if the community is so hard to define.

Which global, technological, political etc developments do you currently find most relevant with regards to parenting choices?

If you don't want to justify your claims, that's perfectly fine; no one is forcing you to discuss in this forum. But if you do, please don't act as if it's my "homework" to back up your claims with sources and examples. I also find it inappropriate that you throw around accusations like "quasi religious", "I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs", and "just prone to conspiracy theories like QAnon", while at the same time being unwilling or unable to name any examples of "what experts in the field think about what AI can actually do".

There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don't think that what's lacking are arguments or evidence.


I'd still be grateful if you could post a link to the best argument (by your own lights) from some well-respected scholar against AGI risk. If there are "loads of arguments", this shouldn't be hard. Somebody asked for something like that here, and there aren't many convincing answers, and none that would comprehensively and authoritatively settle the cause area.

I think the issue is the mentality some people in EA have when it comes to AI. Are people who are waiting for people to bring them arguments to convince them of something really interested in getting different perspectives?

I think so - see footnote 2 of the LessWrong post linked above.

Why not just go look for differing perspectives yourself?

Asking people for arguments is often one of the best ways to look for differing perspectives, in particular if these people have strongly implied that plenty of such arguments exist.

This is a known human characteristic: if someone really wants to believe in something, they can believe it even to their own detriment and will not seek out information that may contradict their beliefs.

That this "known human characteristic" strongly applies to people working on AI safety is, up to now, nothing more than a claim.

(I was fascinated by the tales of COVID patients denying that COVID exists even when dying from it in an ICU).

I share that fascination. In my impression, such COVID patients have often previously dismissed COVID as a kind of quasi-religious death cult, implied that worrying about catastrophic risks such as pandemics is nonsense, and claimed that no arguments would convince the devout adherents of the 'pandemic ideology' of the incredulity of their beliefs. 

Therefore, debating in this style only seems helpful when you have already formed a strong opinion as to which side is right; otherwise you can always just claim that the other side's reasoning is motivated by religion/ideology/etc., and the arguments amount to Bulverism.

I witnessed this lack of curiosity in my own cohort that completed AGISF. ... They are all very nice amicable people and despite all the conversations I've had with them they don't seem open to the idea of changing their beliefs even when there are a lot of holes in the positions they have and you directly point out those holes to them. In what other contexts are people not open to the idea of changing their beliefs other than in religious or other superstitious contexts? Well the other case I can think of is when having a certain belief is tied to having an income, reputation or something else that is valuable to a person. 

I don't work in AI Safety, I am not active in that area, and I am happy when I get arguments that tell me I don't have to worry about things. So I can guarantee that I'd be quite open to such arguments. And given that you imply that the only reason these nice people still want to work in AI Safety is that they are quasi-religious or otherwise biased, I am looking forward to your object-level arguments against the field of AI Safety.

Given this looks very much like a religious belief I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs. 


I'd be interested in whether you actually tried that, and whether it's possible to read your arguments somewhere, or whether you just saw superficial similarity between religious beliefs and the AI risk community and therefore decided that you don't want to discuss your counterarguments with anybody.
