Act utilitarians choose whichever action they estimate will most increase total happiness. Rule utilitarians instead follow rules whose general adoption is estimated to increase total happiness (e.g. not lying). But you can have the best of both: act utilitarianism in which rules are treated as moral priors. For example, hold a strong prior that killing someone is bad, a prior that can be overridden only in extreme circumstances (e.g. if killing that person would end WWII).
These priors safeguard act utilitarianism against bad case-by-case assessments. They are grounded in Bayesianism: moral priors are updated the same way as non-moral priors. They also reduce cognitive effort: most of the time you just follow your priors, and only run more careful consequence estimates when the stakes and uncertainty warrant it. You can hold a small prior toward inaction, so that not every random action is worth considering. You can also blend in some virtue ethics by holding a prior that virtuous acts tend to produce greater total happiness in the long run. The sketch below makes this decision procedure concrete.
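To make the idea concrete, here is a minimal Python sketch under assumptions of my own choosing: both the moral prior and the case-specific consequence estimate are modelled as Normal distributions (so updating is a precision-weighted average), and every name and threshold (`MoralPrior`, `choose`, `deliberation_threshold`, `inaction_bias`) is purely illustrative, not part of any canonical formulation.

```python
from dataclasses import dataclass

@dataclass
class MoralPrior:
    """Belief about the net effect of an act type on total happiness,
    modelled as Normal(mean, var). A small var encodes a strong prior."""
    mean: float
    var: float

def posterior_utility(prior: MoralPrior, estimate: float, estimate_var: float) -> float:
    """Precision-weighted Normal-Normal update: the moral prior is revised
    by a case-specific consequence estimate, like any other Bayesian prior."""
    precision = 1.0 / prior.var + 1.0 / estimate_var
    return (prior.mean / prior.var + estimate / estimate_var) / precision

def choose(prior: MoralPrior, stakes: float,
           case_estimate: float | None = None, case_var: float = 1.0,
           deliberation_threshold: float = 10.0, inaction_bias: float = 1.0) -> str:
    """Two-level rule: follow the prior unless stakes and uncertainty warrant
    an explicit estimate. All thresholds here are illustrative."""
    if case_estimate is None or stakes * prior.var < deliberation_threshold:
        verdict = prior.mean                                          # intuitive level
    else:
        verdict = posterior_utility(prior, case_estimate, case_var)   # critical level
    return "act" if verdict > inaction_bias else "refrain"  # small prior toward inaction

# A strong prior that killing is very bad...
killing = MoralPrior(mean=-100.0, var=1.0)
# ...is simply followed in an everyday low-stakes case:
print(choose(killing, stakes=5.0, case_estimate=20.0, case_var=50.0))          # refrain
# ...but is overridden by an extreme, well-evidenced estimate (ending WWII):
print(choose(killing, stakes=1000.0, case_estimate=15000.0, case_var=100.0))   # act
```

The precision-weighted update is what makes "strong prior, overridable in extreme circumstances" precise: a confident prior (small variance) dominates ordinary noisy estimates, and only a case-specific estimate that is both extreme and well-evidenced can flip the verdict.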
What I've described is a more Bayesian version of R. M. Hare's "two-level utilitarianism", which distinguishes an "intuitive" and a "critical" level of moral thinking.