Non-zero-sum James

Writer / Editor @ Non-Zero-Sum Games
164 karma · Joined · Working (15+ years) · Auckland, New Zealand
nonzerosum.games

Bio

Podcaster and blogger. Strong believer in effective altruism; I have taken the Giving Pledge.

My weekly blog and podcast at nonzerosum.games is a world-help site of sorts, focussing on win-win games as essential to facing global issues. I explore game-theoretical approaches to real-world issues in an accessible way, using illustrations, simulations and badly drawn graphs.

How others can help me

I am interested in sharing good ideas, discussion, even argument. If you find my work interesting, please share it with those you think would be interested. I realise EA is a niche interest; finding those special people requires casting the net wide.

How I can help others

If you would like to write an article to be featured and illustrated on the site, I'm open to proposals that are in line with the site's ethos. Otherwise, I hope to help the world by contributing positive, productive and pro-social solutions to an information-sphere that can otherwise be dominated by negativity and conflict. Please feel free to use any resources on the site, or request that I cover a particular topic.

Comments (45)

I'm not sure that's how time works; I don't see that scaling makes any difference.

If wealth-building weren't just wealth-hoarding (the most lucrative way of building wealth we have at present; see Thomas Piketty's r > g), this might be a worthwhile approach: thinking about who is a better steward of the investment (though I suspect an excess of paternalism is playing a part in that assessment). But wealth-hoarding results in greater inequality, which is surely what we're trying to avoid with charitable giving.
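To sketch the r > g point quickly (the notation here is my own illustration, not Piketty's):

```latex
% Capital compounds at the rate of return r; incomes grow at rate g.
K_t = K_0\,(1+r)^t, \qquad Y_t = Y_0\,(1+g)^t
% So the capital-to-income ratio grows without bound whenever r > g:
\frac{K_t}{Y_t} = \frac{K_0}{Y_0}\left(\frac{1+r}{1+g}\right)^{t} \to \infty \quad \text{as } t \to \infty
```

In other words, if returns on capital persistently beat growth, wealth held as capital outpaces incomes indefinitely, which is the inequality engine I'm pointing at.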

I agree though that there is probably a sweet-spot.

I think the logic of the article assumes that your money accrues compound interest only if you hold on to it. But if you give that money to someone else, they too can put it in the bank and accrue interest; the fact that they do something different with it (like staying alive) suggests there is something more valuable in that choice. Before putting money in the bank we often invest in ourselves, for instance by paying for a tertiary education, and that itself pays off more than the money would in the bank. The same logic holds for the person someone donates to. For instance, I'm currently paying for a friend's daughter in Africa to go through medical school; that costs me money now, but will result in her supporting her wider family for an entire generation before I die. Their family might be self-sufficient by that time.

I don't think there's a meaningful distinction between this situation on an individual level and on a population level. If everyone gives at death, then a huge amount of money that could have been given earlier, and could have accrued outweighing benefits for the recipients, stays with the wealthy, and those potential recipients die or fail to benefit from having the money earlier.

It seems obvious on the face of it that saving until you die will leave you the most to give to charity. But this discounts the benefit to the recipient of having that money earlier.
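To make this concrete, here is a minimal sketch; every rate and the time horizon are illustrative assumptions, not data:

```python
# Illustrative comparison: give now vs. hold the money and give at death.
# All rates and the 30-year horizon are assumptions for the sake of example.

def future_value(principal: float, rate: float, years: int) -> float:
    """Compound a lump sum annually at the given rate."""
    return principal * (1 + rate) ** years

donation = 10_000        # dollars, given either now or at death
years = 30               # assumed time until the donor's death
market_rate = 0.05       # assumed return if the donor holds and invests
recipient_rate = 0.12    # assumed effective "return" on the recipient's use
                         # of the money (education, health, staying alive)

give_at_death = future_value(donation, market_rate, years)
give_now = future_value(donation, recipient_rate, years)

print(f"Give at death: ${give_at_death:,.0f}")  # roughly $43,000
print(f"Give now:      ${give_now:,.0f}")       # roughly $300,000
```

The specific numbers don't matter; the point is that whenever the recipient's effective return exceeds the donor's market return, giving earlier wins.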

Thanks for hearing me. Now that I see your approach is not as hard-line as it seemed, I largely agree with your concerns and hopes for the future. I think many of the texts that discuss consequentialism do so by straw-manning a very short-term, limited view of it, so I'm not surprised that what you've read on the subject has given you this impression.

Playing out those situations for longer, or involving more variables, generally aligns utilitarianism with our more liberal moral intuitions. For instance, the "solution" to the illegal immigration problem omits the entire point of seeking refuge from a country in the first place: those refugees are hoping for a better life. Solving the problem of illegal immigration bars all those people from seeking a better life; intuitively we know this reduces their well-being. So part of what is so wrong about the solution is this unspoken part of it, which is quantifiable in utilitarian terms.

It's also important to consider why anti-immigration proponents want to keep those people out: a fear that those others don't share our values. So "protecting our values", which is seen as a virtue by many (not by me), plays a role in this unsavoury situation.

But as I say in the post, playing out the calculus to cover every possibility is not feasible in general, so virtues, or behavioural rules of thumb that generally result in positive outcomes, are more practicable. And as I said in the comment, virtue ethics is also a better way to judge character.

I think EA has some major problems, especially when it gets into long-termism and weighs extreme threats that are extremely unlikely against sure threats that are measurably bad.

Okay so, writing this:

Do you think that we can culturally evolve to not need any behavioural guard-rails?

Does not mean I...

believe that moral evolution has stopped right at the customs of your time

If I had said "to not need the guard-rails we have at the moment" then you would have a point, but the word "any" covers every level of guard-rails, right down to the barest minimum above zero.

I'm just going to clarify: I am putting forward an inclusive position that says there can be value in the different perspectives (how much value each has can be anything above zero). You are putting forward an exclusive position that says virtue ethics is the only morally relevant position; therefore you must show that consequentialist calculus and deontology have zero benefit. So, for the deontological part of this inclusive position, all I have to do is show that some level of guard-rails (anything above zero) is necessary, and for the consequentialist part, all I have to show is that a consequentialist calculus is in any way useful for deriving virtues.

So, please don't strawman the position by stating that I have claimed more than I am claiming.

And in terms of showing that a consequentialist calculus is useful in validating virtue ethics, you've done it yourself. Comparing a virtue ethics approach to a utilitarian approach:

Twenty thousand altruists can save quite a few people from malaria in Africa, but a rational social movement for moral improvement that has an emotional impact on its members in a similar way to how compassionate religions have done so far could reach millions of people.

Why mention the millions of people affected, if consequences aren't important? You are running a utilitarian calculus here to justify the virtue ethics approach, which is exactly what I said utilitarian calculus is useful for.

So, at this point I've questioned whether moral evolution could reach the point where there is no need for any guard-rails at all (as opposed to the level we have at present), and have shown that you yourself use utilitarianism in the way I've suggested we all do.

I do think there is something to learn from the Amish and other plain communities, and I support your suggestion that it might be productive to research what allows for this sort of harmony without the strictures of the legal system. I recognise the benefits of "copy plus modification" in moral evolution. Though I don't think the Amish are the first thing people think of when imagining an open and free utopia; my general impression is that the harmony is born out of an effective brainwashing campaign, but I'm open to research showing the utility of other mechanisms in the Amish community that might be applied to a free society.

Personally, I don't think modern society has exhausted its evolutionary path, when we look at global trade and the utilitarian benefits (quantifiable through the gathering of statistics) of the social programs and prison reform we see in Northern Europe. Modern society isn't a capitalist monolith; it's an evolving system with many open paths. If we measure quantifiable results, like recidivism in the prison system, levels of extreme poverty in relation to various policies and trade, and meaningful measures of wellbeing (not GDP) in relation to the welfare state, then we can evolve a system where more people exist in an interdependent relationship, which incentivises harmony in a way that is not coercive but mutually beneficial.

We can only do this, though, by measuring outcomes (consequences), and it might require us to wind back some outdated virtues that keep us on unproductive paths: the value of 'my personal freedom to bear arms', or 'my righteous revenge in the punishment of the wicked', or 'my in-group exists in a zero-sum relationship to a given out-group'. These values are held by some people, and acting on these values is seen by some as virtuous. The only way to learn that they are not virtuous is for someone to show them that acting on those values has negative outcomes, which again requires a consequentialist argument.

So, I am in no way discounting the central utility of virtue ethics. It is the guide for 90% of my day-to-day behaviour, it is meaningful and practical, and it is the best way to judge a person's character. My daughter actually made a good point the other day: on an individual level, judging a person by the virtues they exhibit is a much better way of assessing their character than a utilitarian calculus of how much good they've done in the world, because people have very different capacities for effecting good in the world. For instance, 10 years ago, judging Elon Musk on his utility would have measured him as one of the best people in the world, an assessment that has been completely turned on its head over the past 10 years (depending on your opinion of Elon Musk, of course). Whereas if you'd judged him on his character (being abusive to workers, jumping from relationship to relationship, calling a volunteer worker a pedo-guy to millions of followers with no supporting evidence, buying a McLaren F1 and crashing it immediately, etc.), you'd have been much less surprised at his "turn".

Anyway, just thought I'd add that I agree with you that virtue ethics is important. Please don't (and this may be an unnecessary pre-emptive request) claim in your reply that I think virtue ethics is somehow irrelevant just because I want to also include other perspectives.

There are effectively laws protecting against cannibalism, as there are laws against murder, and against abuse or desecration of a corpse.

I'm afraid what's wrong with you is that, like any scholar of the 19th or 18th century, you believe that moral evolution has stopped right at the customs of your time

Where do you get this idea from? I explicitly stated in my comment:

Laws are informed of course by moral evolution, and I advocate for updating rules based on a utilitarian calculus.

I certainly don't think we've arrived at an end-point of moral development, and I also recognise that morals will change depending on what suits our civilisation (if we live in a system that effectively incentivises pro-social behaviour, then this will hopefully enable increased freedoms). I'm just saying there's likely a limit in terms of our instincts, and addressing mental hardware on a biological level has its own ethical questions.

To be clear, I'm not a fan of deontology, and I don't think it's useful for deriving values. I think moral actions are largely a function of virtue ethics, but we need some metric by which to measure virtues, and that lies in some form of consequentialism.

However, all these behaviours existed in human cultures of the past, and it wasn't criminal laws that made them disappear

I don't see that you have any support for this claim. Laws against cannibalism and child sexual abuse have undoubtedly made those practices less prevalent; that's how laws work. Laws are informed of course by moral evolution, and I advocate for updating rules based on a utilitarian calculus (unlike traditional deontology, which gets these values out of a religious text or other such arbitrary source).

So there is only one valid ethic: virtue ethics

How do you determine what the virtues are, though, if not by some consideration of consequences?

Do you think that we can culturally evolve to not need any behavioural guard-rails, given that we are not robots, but evolved primates whose instincts were shaped for a much more dangerous environment? It would be nice to believe that, but I don't see much evidence for it. Child abusers still exist in societies whose cultural norms hold it to be wrong; child abusers largely know that what they are doing is wrong, and understand why, but are nevertheless compelled to do it. Don't you think it's a little naive to think this can be changed simply by continued cultural evolution?

I'm not understanding the distinction you're making between the "experience" and the "response." In my example, there is a needle poking someone's arm. Someone can experience that in different ways (including feeling more or less pain depending on one's mindset). That experience is not distinct from a response, it just is a response.

Now you appear to be using a definition of "response" that is synonymous with "experience". Before, you were using "experience" to describe "freaking out", which I would see as a "response" to an "experience" (an action you take after having experienced something). If this is a semantic issue, I don't need you to subscribe to my definitions; just know these are the definitions I'm using, and hopefully my meaning is clear.

However, in the links I included in my previous comment, I suggested there are people who explicitly reject the view that this is all that matters

I find these explicit rejections unconvincing. People often self-report inaccurately. The tendency to value beauty, for instance, is quite easily reducible to pleasure-seeking: we have biologically induced feelings of pleasure associated with beauty, which correlate with evolutionary advantages.

I have challenged you to genuinely try this with a moral tenet you and I would agree on. If you are genuinely interested in understanding my point, I think this is the best way to do so.

And again, assuming the experience of pain is inescapable, why does it follow that it is necessarily bad?

The inescapable nature of the experience is not what makes it good or bad; otherwise I would have called it "inescapable experience" and not stipulated "value-laden". A neutral experience like the click of a finger is still inescapably value-laden; it's just that the value is neutral (zero), and therefore not really relevant when extending into a moral discussion.

I believe I have already provided arguments addressing the two questions you've asked, but in short:

  1. It is definitionally bad: if pain and suffering were a good experience, we would call it pleasure. I think your confusion might stem from a sense that a "moral" good is necessarily something imposed on a person or action; I disagree with this directionality. My point is that "moral" goods are emergent extrapolations derived from inherently good or bad experiences.
  2. This is the foundational claim of the theory; it cannot be proved, but it can be falsified. I'm saying this is a worthwhile framework of understanding that I believe is consistent with reality, and as such might actually be real. But as with any theory, it can only be provisionally verified by numerous examples where it is consistent, and if it is inconsistent, it should be possible to show it to be so.

I'm not asking you to falsify it; you are welcome to try if you want. I would prefer you took the challenge I've offered, as this will actually help you understand the proposition. I am offering an explanation and a framework that I think has high utility. Whether you adopt it or not is up to you. You don't have to falsify it to reject it.

I can see how encouraging this sort of "cause neutrality" might keep people cognisant of particular programs that are very effective even though their field is not, in general, highly ranked for effectiveness, perhaps?

I haven't actually observed this issue; the project of EA seems all about beginning neutral and ending up with a hierarchy. If it is swerving away from this approach, then that seems antithetical to the general mission.

On a personal note, I generally try to direct my giving towards the least emotive topics (general funds for boring diseases), assuming there will be an over-supply of giving to the more emotive areas.
