Hi all,
An interesting perspective, I find, is to approach ethics from the point of view of a “network”: in our case, a network in which humans (or, more precisely, our intelligences) are the nodes, and the relationships between these intelligences are the edges.
For this network to exist, the nodes need to establish and maintain relationships. This “edge maintenance” can, in turn, be translated into what we call ethics or ethical behaviour. Whatever creates or restores these edges/relationships, and thereby enables the existence of the network, is just, correct, or virtuous. This is because, for the intelligent nodes to keep existing physically (for their substrate to stay intact), the network itself must exist: the nodes are interdependent. One node grows wheat, another harvests it, another bakes bread, another distributes it, and so on. Ethics thus becomes about existence, which is much easier to comprehend.
Once you embrace this network between intelligent nodes, you can also start thinking about all subsequent dependencies in terms of nodes and edges/relationships. This neatly highlights the interdependencies of our existence and leads me to formulate the meaning of life as: “Keep alive what keeps us/you alive,” since this is the internal logic of such an interdependent network.
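To make the framing concrete, here is a toy sketch in Python (entirely my own illustration: the class, the scoring rule, and the example actions are assumptions, not anything established above) that models the interdependence network as a directed graph and scores an action by whether it creates or severs edges:

```python
# Toy sketch, purely illustrative: the interdependence network as a
# directed graph, where an action counts as "edge maintenance" to the
# extent that it creates or restores edges, and as unethical to the
# extent that it severs them.
from copy import deepcopy

class DependencyNetwork:
    def __init__(self):
        self.edges = set()  # (provider, dependent) pairs

    def create_edge(self, provider, dependent):
        self.edges.add((provider, dependent))

    def sever_edge(self, provider, dependent):
        self.edges.discard((provider, dependent))

def edge_delta(network, action):
    """Score an action by how it changes the edge count: a crude
    proxy for "keep alive what keeps us/you alive"."""
    trial = deepcopy(network)
    action(trial)
    return len(trial.edges) - len(network.edges)

# The wheat-to-bread chain from above:
net = DependencyNetwork()
for provider, dependent in [("grower", "harvester"),
                            ("harvester", "baker"),
                            ("baker", "distributor")]:
    net.create_edge(provider, dependent)

print(edge_delta(net, lambda n: n.create_edge("distributor", "grower")))  # +1: edge-creating
print(edge_delta(net, lambda n: n.sever_edge("harvester", "baker")))      # -1: edge-severing
```

Edge count alone is of course far too crude; a natural refinement would weight each edge by how much the nodes' continued existence depends on it.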
I’m curious who else finds this perspective interesting, as I believe that using the language of networks and complex systems in this context opens the door to thinking and talking more clearly about intelligence and AI alignment, (inter)national collaboration, (bio)diversity, evolution, etc.
Act utilitarians choose actions estimated to increase total happiness. Rule utilitarians follow rules estimated to increase total happiness (e.g. not lying). But you can have the best of both: act utilitarianism where rules are instead treated as moral priors. For example, you can hold a strong prior that killing someone is bad, one that can nonetheless be overridden in extreme circumstances (e.g. if killing the person would end WWII).
These priors make act utilitarianism better safeguarded against bad assessments. They are grounded in Bayesianism (moral priors are updated the same way as non-moral priors). They also decrease cognitive effort: most of the time, just follow your priors, unless the stakes and uncertainty warrant more complex consequence estimates. You can have a small prior toward inaction, so that not every random action is worth considering. You can also blend in some virtue ethics, by having a prior that virtuous acts often lead to greater total happiness in the long run.
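As a concrete illustration of the Bayesian reading, here is a small Python sketch (my own construction: the Beta-distribution parameterisation, the stakes threshold, and all numbers are assumptions chosen for illustration). A moral prior is represented as a Beta distribution over “acts of this type increase total happiness,” updated from outcomes like any other prior, and followed cheaply unless stakes times uncertainty warrant explicit deliberation:

```python
# Illustrative sketch: a moral prior as a Beta distribution, updated
# like any non-moral prior, with a cheap "follow the prior" default.
from dataclasses import dataclass

@dataclass
class MoralPrior:
    alpha: float  # pseudo-counts of good outcomes
    beta: float   # pseudo-counts of bad outcomes

    @property
    def p_good(self) -> float:
        return self.alpha / (self.alpha + self.beta)

    @property
    def uncertainty(self) -> float:
        # Variance of a Beta distribution; shrinks as evidence accumulates.
        n = self.alpha + self.beta
        return (self.alpha * self.beta) / (n * n * (n + 1))

    def update(self, good_outcome: bool) -> "MoralPrior":
        # Bayesian update: one more observed good or bad outcome.
        return MoralPrior(self.alpha + good_outcome,
                          self.beta + (not good_outcome))

def decide(prior: MoralPrior, stakes: float, effort_threshold: float = 0.01) -> str:
    # Cheap path by default; escalate to explicit consequence
    # estimation only when stakes and uncertainty warrant it.
    if stakes * prior.uncertainty > effort_threshold:
        return "deliberate"
    return "act" if prior.p_good > 0.5 else "refrain"

# Strong prior that lying is bad (roughly 2 good vs 98 bad outcomes):
lying = MoralPrior(alpha=2.0, beta=98.0)
print(decide(lying, stakes=1.0))          # "refrain": confident prior, low stakes
lying = lying.update(good_outcome=False)  # moral evidence updates the prior
print(decide(lying, stakes=500.0))        # "deliberate": extreme stakes override the shortcut
```

The threshold check is what decides when to leave the cheap, prior-following mode for explicit consequence estimation.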
What I described is a more Bayesian version of R. M. Hare's "Two-level utilitarianism", which involves an "intuitive" and a "critical" level of moral thinking.