
The factor of technological advancement must be taken into account. A fully cooperative humanity committed to eliminating all forms of suffering could have at its disposal technological means as unimaginable today as our current technology would have been to the wise Aristotle more than two thousand years ago.

Recently, at least one other forum post has addressed the issue of increasing motivation for altruistic action:

https://forum.effectivealtruism.org/posts/gWyvAQztk75xQvRxD/taking-ethics-seriously-and-enjoying-the-process

That post also referred to a very enjoyable book about altruistic action, published ten years ago, which I think we should all read.
 

I honestly believe this should be the real, top-priority long-term question: how to generate a non-political social movement that motivates altruistic action.

I would love for the EA movement, as it exists today, to achieve the ambitious goal of increasing the number of GWWC Pledge signatories to "millions"... but I have my doubts that this will happen. And is it fair to the people who are suffering and need altruistic action to simply wait and see, rather than trying anything to accelerate the growth of altruistic action?

In my view, the first step would be to create a discussion group on this question (how can we motivate more people to be active altruists?). I imagine an inevitable conclusion would be, above all, to try to build a social support movement for donors and for those who are hesitant about whether or not to become one.

 I'm also sure EA would seem less like a personal sacrifice if you were surrounded by EAs. 

The "Alcoholics Anonymous" model ("mutual aid") is the most obvious: individualized support and the creation of small groups ("cells") at the local level. It's absurd not to consider the psychological implications of making such a significant change in your lifestyle from what's conventional.

All of this is independent of the speculation (which I find logical) about the possibility of organizing a "behavioral ideology" (so we don't call it a "religion") that offers individuals the option of developing a behavioral style based on benevolence, aggression control, altruistic idealism, and mutual affection, all within the framework of enlightened rationality. To certain temperaments, at least, this might be attractive as a source of "personal happiness" (let's not forget that there are many ways to "be happy"). There are historical precedents showing that such social movements are viable (why they failed is a topic that deserves deep reflection).

Thank you very much, Jens, for sharing your point of view, which I find extremely valuable.

Moral behavior evolves especially when it is part of a lifestyle (ethos). Compartmentalizing moral behavior is not in keeping with human nature. The most effective long-term approach would undoubtedly be one that focuses primarily on developing a compassionate, benevolent, and enlightened lifestyle that is viable as a social alternative. Veganism and the end of animal abuse would be necessary consequences of this.

However, the converse is not so true: there are well-known examples of social initiatives in favor of animal welfare linked to intolerant political ideologies, as well as to less-than-benevolent personal behavioral styles.

Do you see frameworks like mine as useful inputs to the kind of movement you're describing? Even if AI alignment alone isn't sufficient, could it be necessary? If we get AI right, does that make the human behavioral transformation more achievable?

 

I've done something like you did and asked an artificial intelligence about the social goals of behavioral psychology. I proposed two options: either using our knowledge of human behavior to adapt the individual to the society in which they can achieve personal success, or using that knowledge to achieve a less aggressive and more cooperative society.

""within the framework of radical behavioral psychology applied to society, the goal is closer to:

  • Improving society (through environmental and behavioral design) to expand social efficient cooperation and reduce harmful behaviors like aggression.

The first option, "Adapting to the mainstream society in order to get individual success," aligns more closely with general concepts of socialization and adaptation found across various fields of psychology (including social psychology and developmental psychology), but is not the distinct, prescriptive social goal proposed by the behaviorist project for an ideal society.""   (This is "Gemini")

Logically, AI, which lacks prejudice and uses only logic, opts for social improvement... because it starts from the knowledge that human behavior can be improved based on fairly logical and objective criteria: controlling aggression and encouraging efficient cooperation.

Would AI favor a "behavioral ideology" as a strategy for social improvement?

The Enlightenment authors two hundred years ago considered that if astrology had given rise to astronomy and alchemy to chemistry... religion could also give rise to more sophisticated moral strategies for social improvement. What I call "behavioral ideology" is probably what the 19th-century scholar Ernest Renan called "pure religion."

If, starting with an original movement for non-political social change like EA, a broader social movement were launched to design altruistic strategies for improving behavior, it would probably proceed in a similar way to what Alcoholics Anonymous did in its time: through trial and error, once the goals to be achieved (aggression control, benevolence, enlightenment) were firmly established.

Even limiting myself to speculation, I find such a diversity of available strategies that it is impossible for me to predict which ones would ultimately be selected. To give an example: the Anabaptist "Amish" community comprises some 400,000 people who manage to organize themselves socially without laws, government, physical coercion, judges, fines, prisons, or police (the dream of a Bakunin or a Kropotkin!). How do they do it? Another example is the one Marc Ian Barasch mentions in his book "The Compassionate Life" about the usefulness of a biofeedback program to stimulate benevolent behaviors.

The main contribution I find in AI is that, although you yourself have detected cognitive biases in its various forms, operating on the basis of logical reasoning stripped of prejudice (far from flawed, heuristic-laden human rationality) can facilitate the achievement of effective social goals.

AI isn't concerned with the future of humanity, but with solving problems. And the human problem is quite simple (as long as we don't prejudge): we are social mammals. Homo sapiens, like all social mammals, has been genetically programmed to be competitive and aggressive in disputes over scarce economic resources (hunting territories, access to mates, etc.). The problem arises when, thanks to technological development, we come to have potentially unlimited economic resources. What role do instinctive behaviors like aggression, tribalism, or superstition play then? They are now merely handicaps.

Sigmund Freud made it clear in his book: "Civilization is the control of instinct."

However, what would probably be perfectly logical for an Artificial Intelligence may be shocking for today's Westerner: the solution to the human problem will closely resemble the old Christian strategies of "saintliness" (but rationalist). As psychologist Jonathan Haidt has written, "The ancients may not have known much about science, but they were good psychologists."

Thank you very much for the interest shown in your comment and for the opportunity you've given me to explore new perspectives on an issue that, in my opinion, could be extremely important, yet is not being addressed even in an environment that challenges conventions, like the EA community.

I'm curious how you'd operationalize "control of aggression" as a distinct pillar or principle. Would it be:

  • A prohibition (like the inviolable limits in Article VII: "no torture, genocide, slavery")?
  • A positive virtue (cultivating non-aggressive communication, de-escalation)?
  • A systems-level design principle (institutions structured to prevent violent conflict)?
  • Something else?

 

Moral values are the foundation of an "ethics of principles," but the problem with an "ethics of principles" is that it has little realistic power to influence human behavior. In theory, all moral principles contemplate the control of aggression, but their effectiveness is limited.

Since the beginning of the Enlightenment, thinkers have noted that moral, political, and educational principles lack the power over moral behavior that religions have. We must admit, for example, that despite the commendable efforts of educators, scholars, and politicians, whether liberalism's values of democratic tolerance and respect for the individual can effectively prevail in a given society depends not so much on proposing impeccable moral principles as on whether that society has the particular sociological foundation that makes the psychological implementation of such benevolent and enlightened principles viable in the minds of its citizens. In the end, it turns out that liberal principles only work well in societies with a tradition of Reformed Christianity.

I believe that the emergence, for the first time, of a social movement like EA (apolitical, enlightened, and focused on developing an unequivocally benevolent human behavioral tendency such as altruism) represents an opportunity to definitively transform the human community in the direction of aggression control, benevolence, and enlightenment.

The answer, in my view, would have to lie in tentatively developing non-political strategies for social change. Two hundred years ago, many Enlightenment thinkers considered creating "secular religions" (what I would call "behavioral ideologies"), but these always remained superficial (rituals, temples, collectivism). A scholar of religions, Professor Loyal Rue, believes that religion is basically "educating the emotions": using strategies to internalize "moral values."

In my view, if EA utilitarians want more altruistic works, what they need to do is create more altruistic people. Altruism isn't attractive enough today. Religions are attractive.

In my view, there are a multitude of psychological strategies that, through trial and error, could eventually give rise to a non-political social movement for the spread of non-aggressive, benevolent, and enlightened behavior (a "behavioral ideology"). The example I always have at hand is Alcoholics Anonymous, a movement that emerged a hundred years ago through trial and error, and was carried out by highly motivated individuals seeking behavioral change.

A first step for the EA community would be to establish a social network to support donors in facing the inevitable sacrifices that come with practicing altruism. This very forum already contains accounts of emotional problems ("burnout," for example) among people who practice altruism without proper psychological support.

But, logically, altruism can be made much more attractive if we frame it within the broader scope of benevolent behavior. The practice of empathy, mutual care, affection, and social skills in the area of aggression control can yield results equal to or better than those found in congregations of the well-known "compassionate religions," and without any of the drawbacks of the irrationalism of ancient religious traditions (evolution is "copy plus modification"). An "influential minority" could then be created, capable of affecting moral evolution at the general level.

Considering the current productivity of human labor, a social movement of this type, even if it reached just 0.1% of the world's population, would more than achieve the most ambitious goals of the EA movement. But so far, only 10,000 people have signed the GWWC Pledge.

Which cultural or moral assumptions am I missing?

 

I think something very obvious but extremely important is missing from your "six-pillar Gold Standard of Human Values" if we want to approach morality as a process of behavioral improvement: the control of aggression.

We should view morality as a strategy for fostering efficient human cooperation. Controlling aggression and developing mutual trust amounts to a culture of benevolence. We can observe that some ("national") cultures today are less aggressive and more benevolent than others; this demonstrates that such patterns of social behavior are malleable and improvable.

Just as Marxists said that "what leads to a classless society is good," we should also say "what leads to a non-aggressive, benevolent, and enlightened society is good." I add the word "enlightened" because it seems true that largely non-aggressive and benevolent societies can already be achieved on the basis of religious traditions; however, their irrationalism entails a general detriment to the common good.

moral values survive because they fit the current environment, but the environment may change at any time, anywhere

 

Don't you think that the primary environmental determinant of moral evolution is prior moral evolution itself?

Social change cannot be advanced unless we extract some intelligible guidelines for long-term human development. There is evidently a linear moral evolution in the direction of controlling aggression and developing social strategies for effective cooperation. The first step must be to recognize this evidence, as was the case with Darwinian evolution or heliocentrism.

Short-term pragmatism will never be the long-term solution.

The fact that there is today an apolitical movement for social progress centered on individual commitment to a behavioral trait (altruism) can be interpreted as a milestone in moral evolution. What is missing is for that single behavioral trait ("effective" altruism) to be linked to a set of related behavioral traits (all of them consequences of controlling aggression) to give rise to a cultural alternative.

We are close, it seems to me, to achieving an "ideology of behavior," which would perhaps be the decisive step in the progress of civilization. To deny this possibility seems irrational and a consequence of the weight of prejudice.

What do you all think? Is this evolutionary, 'small steps' approach a robust way to handle the future, or am I missing a crucial piece of the puzzle? 

 

Thank you very much for raising this question. If altruistic behavior seems extreme and very risky in conventional society, then it is essential to look at social reality in a somewhat radical way, since claims to objectivity and impartiality always lead to conclusions that are radical with respect to the conventional wisdom of the time.

My suggestion, very briefly, has to do with what appears to be evidence of moral evolution, which more or less coincides with the idea of the "Civilizing Process."

I always start with the simplest example: why did the oppressing class of the 19th century tolerate the gradual emancipation of the disadvantaged classes, as evidenced by the labor movement and the democratization of Western society in general? Why, on the other hand, did the oppressors of ancient Rome have no tolerance whatsoever for rebellious slaves, like Spartacus?

I can't find any explanation for this in the realm of politics, economics, or technology. The only explanation I can think of is that a moral evolution occurred among the oppressing classes. Is this proof that "moral evolution" exists?

And if it has existed and exists, why shouldn't it continue to exist in the future? If so, this should be the primary question: how to promote it.

Focusing EA action on an issue as technically complex as AI safety, on which there isn't even a scientific consensus comparable to that behind the fight against climate change, means that all members of the community make their altruistic action dependent on the intellectual brilliance of their leaders, whose success we cannot assess. This circumstance is similar to that of Marxism, where everything depended on the erudition of the most distinguished scholars of political economy and dialectical materialism.

Utilitarianism implies an ethical commitment of the individual through unequivocal action, in which altruistic means and ends are proportionate. It is unequivocal that there are unfortunate people in poor countries who endure avoidable suffering that could be relieved with a little of the money we have to spare in rich countries. Altruistic action also implies (and this is extremely important) a visible demonstration of moral change, especially when it is culturally organized in the form of a social movement. Community action for unequivocal moral progress can, in turn, lead to successful long-term action.

I rarely see the community discuss or emphasize how to increase our altruism, and our capacity for altruism, both on an individual and societal level

 

This observation is certainly welcome. Especially since I don't see how it can be utilitarian, from a cost-benefit perspective, to ignore the obvious urgency of having more altruistic people in order to have more altruistic works.

There are historical precedents of large social movements in which altruistic motivation was considered a vital factor in community development. Unfortunately, this factor had to coexist with others that often distorted it. That is the point that needs to be improved upon. Evolution is copy plus modification.

Thanks, Kuhan.
