ALN

Comments

Act utilitarians choose actions estimated to increase total happiness. Rule utilitarians follow rules estimated to increase total happiness (e.g. not lying). But you can have the best of both: act utilitarianism where rules are instead treated as moral priors. For example, having a strong prior that killing someone is bad, but which can be overridden in extreme circumstances (e.g. if killing the person ends WWII).

These priors safeguard act utilitarianism against bad consequence assessments. They are grounded in Bayesianism: moral priors are updated the same way as non-moral priors. They also reduce cognitive effort: most of the time, just follow your priors, unless the stakes and the uncertainty warrant a more careful estimate of consequences. You can hold a small prior toward inaction, so that not every random action is worth considering. You can also blend in some virtue ethics, via a prior that virtuous acts tend to lead to greater total happiness in the long run.

What I described is a more Bayesian version of R. M. Hare's "Two-level utilitarianism", which involves an "intuitive" and a "critical" level of moral thinking.
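As a minimal sketch of the prior-updating idea, here is Bayes' rule applied to a binary moral hypothesis. All the numbers and names are made up for illustration, not part of any worked-out moral theory:

```python
def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule for a binary hypothesis H given evidence E: returns P(H | E)."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical strong moral prior: "killing this person reduces total happiness".
prior = 0.999

# Hypothetical likelihoods for one extreme case: the observed evidence is
# 99x more likely if the prior belief is wrong than if it is right.
posterior = update(prior, p_evidence_if_true=0.01, p_evidence_if_false=0.99)
print(round(posterior, 3))  # ~0.91: even strong evidence barely dents a 0.999 prior
```

Note how this mirrors "overridable only in extreme circumstances": a near-certain prior needs overwhelming evidence before the recommended action flips, which is exactly the safeguard against one-off bad consequence estimates.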

Sentience Institute (research on digital minds)

Human welfare seems much less neglected than the welfare of factory-farmed animals. Producing even a single egg may represent many hours of suffering. If insects are not vastly less sentient than humans, their welfare could be a huge deal too.

So I favor animal welfare. But it's even better when backed by strategic thinking and a clear theory of impact. The total number of future sentient beings could be many orders of magnitude greater than the number of existing ones. We are unable to "feel how big" those numbers are, but they matter a lot, and it's not virtuous to ignore them. Setting aside uncertainty, it makes no real moral difference whether we prevent the same amount of suffering in 10 years or in 10 billion years. So grantmakers should think deeply about how donations could, even indirectly, affect these long-term outcomes. Likewise, we can't just assume that AI progress will suddenly stop indefinitely and that society will be the same in 50 years. AI will impact animal advocacy. There may also be some overlap in the coming years between animal welfare and AI welfare advocacy that could be leveraged.

Advertising through YouTubers who cover rationality or science popularization may be a good way to rapidly expand the community.

For example, there has been a lot of advertising for Brilliant, a science and math learning website. It's a fine way for YouTubers to earn income while recommending something beneficial for their subscribers and their reputation.

And since these audiences are already curious and analytical, they are an interesting place to seed effective altruism ideas.