UriKatz

82 karma · Joined

Posts: 1 · Comments: 40 (sorted by new)
Reading the discussions here, I cannot shake the intuition that utilitarianism with very big numbers is once again producing weird conclusions. AW advocates are essentially describing Earth as hell, with a tiny sanctuary reserved for humans, who are better off than average. I need more convincing. While I cannot disagree with the math or the data, I think better theories of animal suffering are needed. At what point, for example, is a brain sufficiently developed to experience suffering in a morally relevant way, one that we should care about? Are there qualitative differences that override all quantitative ones, and if so, which are they? All the same, I do not completely disagree, because 1) moral circle widening is very important to me, and 2) at the end of the day I would not compare causes, but specific interventions. There could very well be a highly effective intervention in the animal space that is better than anything GiveWell does, but I am unaware of it.

You are right that a lot of people believing something doesn't make it true, but I don't think that's what the OP is suggesting. Rather, if a lot of EAs believe enlightenment is possible and reduces suffering, it is strange that they don't explore it further. I would suggest that your attitude is the reason why. Labeling it religious, and religion as the antithesis of empirical evidence, is problematic in its own right, but in any case there is plenty of secular interest in this topic, and plenty of empirical research on it. It is also worth considering that the strength of the case for an enlightened future for humanity (once we strip that term of some of the flights of fancy associated with it) is on par with that of humanity's possible enslavement by AGI. If the latter is worth our time, why isn't the former?

With regard to the third point above, most of these studies compare meditation, not enlightenment, to other mental health interventions. Their finding that meditation is no better than CBT is not a negative result. Since there is no "one size fits all" psychotherapy, having more options should be a net positive for mental health. Also, if meditation practice can lead to something more, even if that something is not the end of all suffering, and even if it is rare, that increases the value of meditation practice.

I applaud you for writing this post.

There is a huge difference between statement (a), "AI is more dangerous than nuclear war", and statement (b), "we should, as a last resort, use nuclear weapons to stop AI". It is irresponsible to downplay the danger and horror of (b) by claiming Yudkowsky is merely displaying intellectual honesty by making explicit what treaty enforcement entails (not least because everyone studying or working on international treaties is already aware of this, and is willing to discuss it openly). Yudkowsky is making a clear and precise declaration of what he is willing to do, if necessary. To see this, one only needs to consider the opposite position, statement (c): "we should not start a nuclear war over AI under any circumstance". Statement (c) can reasonably be included in an international treaty dealing with this problem without that treaty losing all enforceability; there are plenty of other enforcement mechanisms. Finally, the last thing anyone defending Yudkowsky can claim is that there is a low probability we will need to use nuclear weapons: AI research is far more likely to continue than it is to lead to human annihilation. Yudkowsky is gambling that by threatening the use of force he will prevent a catastrophe, but there is every reason to believe his threats increase the chances of a similarly devastating catastrophe.

It seems to me that no amount of argument in support of the individual assumptions, or of the set taken together, can make their repugnant conclusions more correct or palatable. It is as if Frege's response to Russell's paradox had been to write a book exalting the virtues of set theory. Utility monsters and utility legions show us that there is a problem either with human rationality or with human moral intuitions. If they don't, then the repugnant conclusion certainly does, and it is an outcome of the same assumptions and the same reasoning. Personally, I refuse to bite the bullet here, which is why I am hesitant to call myself a utilitarian. If I had to bet, I would say the problem lies with assumption 2. People cannot be reduced to numbers, either when trying to describe their behavior or when trying to guide it. Appealing to an "ideal" does not help, because the ideal is actually a deformed version. An ideal human might have no knowledge gaps, no biases, no calculation errors, and so on, but why would their well-being be reducible to a function?

(Note that I do not dispute that Harsanyi's Aggregation Theorem can be proven from these assumptions.)

> the quest for an other-centered ethics leads naturally to utilitarian-flavored systems with a number of controversial implications.

This seems incorrect. Rather, it is your four assumptions that "lead naturally" to utilitarianism. It would not be hard for a deontologist to be other-focused, simply by emphasizing the a priori normative duties that are directed towards others (I am thinking here of Kant's matrix of duties: perfect / imperfect & towards self / towards others). The argument can even be made, and often is, that the duties one has towards oneself are meant to allow one to benefit others (e.g. skill development). If by other-focused you mean abstracting from one's personal preferences, values, culture, and so forth, deontology might be the better choice, since its use of a priori reasoning places it behind the veil of ignorance by default.

I only read the TL;DR and the conclusion, but I was wondering why the link between jhana meditation and brain activity matters. Even if we assume materialism, the Path in its various forms (I am intimately familiar with the Buddhist one) always includes other steps, and only taken together do they lead to increased happiness and mental health. My thinking is that we should go in one of two directions: direct manipulation of the brain, or a holistic spiritual approach. This middle way, ironically, seems to leave out the best of both worlds.

I am responding to the newer version of this critique, found [here](https://www.radicalphilosophy.com/article/against-effective-altruism).

Someone needs to steelman Crary's critique for me, because as it stands I find it very weak. Here is how I understand the article:

  1. The institutional critique - Basically makes two claims: a) EAs are searching for their keys only under the lamppost. This is a great warning for anyone doing quantitative research and evaluation; EAs are well aware of it and try to overcome the problem as much as possible. b) EA addresses symptoms rather than underlying causes, e.g. distributing bed-nets instead of overthrowing corrupt governments. This is fair as far as it goes, but tackling underlying causes does not necessarily require abandoning the quantitative methods EA champions, and it is not at all clear that we shouldn't attempt to alleviate symptoms as well as causes.

  2. The philosophical critique - Essentially amounts to pointing out that there are people critical of consequentialism and of abstract conceptions of reason. More power to them, but that fact in itself does not defeat consequentialism, so insofar as EA relies on consequentialism, it can continue to do so. A deeper dive is required to understand the criticisms in question, but there is little reason for me to assume at this point that they will defeat, or even greatly weaken, consequentialist theories of ethics. Crary actually admits that in academic circles these criticisms fail to convince many, but dismisses this because in her opinion it is "a function of ideological factors independent of [the arguments'] philosophical credentials".

  3. The composite critique - Adds nothing substantial except to pit EA against woke ideology. I don't believe these two movements are necessarily at odds, but there is a power struggle going on in academia right now, and it is clear which side Crary is on.

  4. EA's moral corruption - EA is corrupt because it supports global capitalism. I am guilty as charged on that count, even as I see capitalism's many, many flaws and the need for some drastic changes. Still, like democracy, it is the least bad option until we come up with something better. Working within this system to improve the lives of others and solve some pressing worldwide problems seems perfectly reasonable to me.

As an aside, I will mention that attacking "earning to give" without mentioning the concept of replaceability is attacking nothing at all. When doing good, try to be irreplaceable; when earning money on Wall Street, make sure you are completely replaceable. You might earn a little less, but you will minimize your harm.

Finally, it is telling that Crary does not once deal with longtermist ideas.

What would you say are the biggest benefits of being part of an EA faith group?
