Leo

Head of Operations @ Tlön

Effective altruists subscribe to a version of utilitarianism according to which actions are to be judged by their consequences.

While many EAs subscribe to utilitarianism, many others don't; Andreas Mogensen is just one example. Nor does the movement officially endorse utilitarianism, as you can see in the objections here.


This is the best simple case I have read so far. Well done!

I see, thanks. I guess I would have preferred a more accurate, unambiguous aggregation of everyone’s opinion, to have a clearer sense of the preferences of the community as a whole, but I'm starting to think that it's just me.

As I said last time, trying to quantify agreement/disagreement is much more confusing, both to determine and to read, than simply measuring how many $ millions out of an extra $100m people would assign to global health vs. animal welfare. The banner would go from 0 to 100, and whatever you vote, let's say 30, would mean that $30m should go to one cause and $70m to the other. As it is, to mention just one paradox, if I wholly disagree with the question, it means I think it wouldn't be better to spend the money on animal welfare than on global health, which in turn could mean either a) I want all the extra funding to go to global health, or b) I don't agree at all with the statement because I think it would be better to allocate the money differently, say $10m/$90m. Likewise, if you vote 90% agreement, it could mean b, or it could mean that you almost fully agree for other reasons, for example because you think there's a 10% chance that you are wrong.
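The contrast can be sketched with a toy aggregation. This is only an illustration, assuming (hypothetically) that allocation votes would be combined by a simple mean; the vote values are made up:

```python
from statistics import mean

# Direct allocation votes: each voter states how many of the extra
# $100m (in $ millions) should go to animal welfare; the remainder
# goes to global health. Aggregation is unambiguous: average the votes.
allocation_votes = [30, 10, 90, 50]  # hypothetical sample
animal_welfare = mean(allocation_votes)
global_health = 100 - animal_welfare
print(f"animal welfare: ${animal_welfare}m, global health: ${global_health}m")

# Agreement votes: a 90% "agreement" score has no single reading.
# It could encode near-full agreement with the statement, or near-
# disagreement because the voter prefers, say, a 10/90 split, so no
# dollar allocation can be recovered from the scores alone.
agreement_votes = [0.9, 0.1, 1.0, 0.5]  # hypothetical sample
print(f"mean agreement: {mean(agreement_votes)} (implied allocation: unknown)")
```

With allocation votes the aggregate is directly actionable; with agreement scores the same number is compatible with several incompatible preferences.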

Answer by Leo

There's substantial discussion of this topic following Eliezer's take on it.

I think I would prefer to strongly disagree, because I don't want my "half agree" to be read as if I agreed to some extent with the 5% statement. "Half agree" is ambiguous here: people could take it to mean 1) something around 2.5% of funding/talent, or 2) that 5% could be OK with some caveats. This should be clarified so we can know what the results actually mean.


This is a great experiment. But I think it would have been much clearer if the question had been phrased as "What percentage of talent+funding should be allocated to AI welfare?", with the banner showing a slider from 0% to 100%. As it is now, if I strongly disagree with allocating 5% but strongly agree with 3% or whatever, I feel like I should still place my icon on the extreme left of the line, which would make it look like I'm entirely against this cause, when that wouldn't be the case.
