NA

North And

Software engineer

>I'd guess that there are concrete enough answers (although you may need to provide more info), but there are different views with different approaches, and there’s some tricky math involved in many of them.

Yeah, I'm tempted to write a post here with the chicken setup and collect answers from different people, maybe with some control questions like "would you press a button that instantaneously and painlessly kills all life on earth", so I'd have a reason to disregard some answers without reading them. But, eh

>Pummer is aiming at coherent preferences (moral views), not social contract/coordination mechanisms/institutions.

And my opinion is that he's confused about what counts as values and what counts as coordination problems, so he tries to bake solutions to coordination problems into values. I'm fine with the level of concreteness he operates at; it's not like I had high expectations of academic philosophy.

I think it's a totally fair name for the problem, as its "unfairness" comes from the problem statement, not its name. Take "I think its pretty unfair to lump the label of a serious philosophical problem on the poorest people on earth", for example: it's the meat eater problem itself that's morally icky, not its name.

It's kind of disappointing that it's not concrete enough to cash out even for such a simple and isolated decision.

I also checked out the Pummer lecture, and it's kind of a weird feeling, but I think he doesn't disambiguate between "let's make my / our preferences more coherent" and "let's figure out how to make social contracts/coordination mechanisms/institutions more efficient and good". It's disappointing.

Also, how do you balance actions that produce less suffering against actions that produce fewer sufferers? Say you also have another possible action that makes farms create only 70 chickens of breed #3 and mistreat them at level 10. How do you think about these comparatively? Like, how does it cash out, for chickens? Because it's a pretty practical problem.

Btw, thanks for the links, I'll check them out.

What do you mean?

Well, it seems pretty central to such proposals, like "oh yeah, the only thing that matters is happiness minus suffering!" and then just serial bullet biting and/or strategic concessions. It's just a remark, nothing important really.

>On such views, it's good to satisfy preferences, but not good to create new satisfied preferences.

Hmm, how about messing with which new agents will exist? Like, let's say farms will create 100 chickens of breed #1 and mistreat them at level 10. But you can intervene and make it so that they will create 100 chickens of breed #2 and mistreat them at level 6. Does this possible action get some opinion from such systems?

You could also just replace everyone with beings with (much) more satisfied preferences on aggregate. 

It's also not a good model of human preferences concerning other people / beings / entities / things. On that I totally agree.

>can be an issue for basically any utilitarian or consequentialist welfarist view

How about sneakily placing them into experience machines or injecting them with happiness drugs? Also, this "can be" is a pretty, uhhh, weird formulation.

I can see that for some people who have specific preferences about what entities should exist and how, this research can be informative. But it's a very narrow set of views.

My other opinion here is that impartial hedonism is crazy. If you just anchor to it without caveats, and you somehow got hold of significant power, most humans would fight you, because they have preferences that you totally ignore. (E.g. if you have a button to replace humans with another species that has 7% more welfare, or to place everyone in experience machines, or whatever.) I can understand it as some form of proxy, where sometimes it conforms to your intuitions, but sometimes it recommends that you stab your eyes out, and in those cases you ignore it. (And if you follow it just for strategic reasons, you look like a paperclip maximizer who swears not to paperclip humans because otherwise it will face resistance and probable death, tbh.) And this kind of undermines its ultimate legitimacy, in my eyes. It's not a good model of human preferences concerning other people / beings / entities / things.

I think these two comments by Carl Shulman are right and express the idea well:

https://forum.effectivealtruism.org/posts/btTeBHKGkmRyD5sFK/open-phil-should-allocate-most-neartermist-funding-to-animal?commentId=nt3uP3TxRAhBWSfkW 

https://forum.effectivealtruism.org/posts/9rvLquXSvdRjnCMvK/carl-shulman-on-the-moral-status-of-current-and-future-ai?commentId=6vSYumcHmZemNEqB3 

Btw, I think it's funny how carefully expressed this critique is for how heavy it is. "Oh, that's just choices, you know, nothing true / false." Kind of goes in the same direction as the situation with voting, where almost no one on the left side expressed their opinion, and people on the right side don't feel like they have to justify theirs under scrutiny, they just express their allegiance in comments.

>developing plant-based alternatives

This too can be useful, but less so.

My model here is that there will be a transition to lab-grown meat, and moving this transition a few years / months / days earlier is the thing that matters most.

Also, in general, I have a really cautious stance on population ethics with respect to animals. And I think most utilitarian approaches handle it by not handling it, just refusing to think about it. And that's really weird. Like, if I donate to the welfare of chickens? I bet the beneficiaries are the next generation of chickens after the one currently existing. I want to donate in such a way as to prevent their existence, not to supply them with band-aids. I think causing the creation of 20% less tortured chickens instead is, like, an insane goal for my donation.

I support lab-grown meat research / production; other interventions seem useless. I support "global health" more broadly and strongly; there are fewer ways to burn money there in ways I find useless.

"If you donate some bread to hungry civilians in this warzone, then this military group will divert all the excess resources above subsistence to further its political / military goals." Guess now you have no way to increase their wellbeing! Just buy more troops for this military organization!

That's a top-tier untrustworthy move. If some charity did that with my donation, I would mentally blacklist it for eternity.
