Two weeks ago I wrote an __introduction to normative decision theory__, but I haven’t yet made clear which decision algorithm should be used. Before doing so, I’d like to give some preliminaries.

Generally speaking, __Newcomb's Problem__ and the __Smoking Lesion Problem__ serve as counterexamples to __CDT__ and __EDT__ respectively, and show where the two theories differ. But when the causal structure and the conditional probabilities agree with one another, CDT and EDT are equivalent, leading one to apply the __dominance principle__ (assuming probabilities are held constant) and thus to two-box and smoke.
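The dominance reasoning can be made concrete with a quick calculation. This is only a sketch using the standard illustrative payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 always in the transparent box), which are assumptions and not taken from the post:

```python
# Newcomb's problem with the opaque box's contents held fixed,
# i.e. unaffected by the agent's choice (the CDT-style assumption).
MILLION, THOUSAND = 1_000_000, 1_000

def ev_one_box(p_full):
    # p_full: probability the opaque box is full, held constant
    # regardless of what the agent chooses
    return p_full * MILLION

def ev_two_box(p_full):
    return p_full * (MILLION + THOUSAND) + (1 - p_full) * THOUSAND

# Two-boxing beats one-boxing by exactly $1,000 at every fixed p_full:
for p in (0.0, 0.25, 0.5, 1.0):
    assert ev_two_box(p) == ev_one_box(p) + THOUSAND
```

Because the $1,000 advantage holds for every value of `p_full`, two-boxing dominates whenever the probabilities are held constant.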

However, if probabilities update based on the agent's actions, such as in the __meta-Newcomb's problem__ and __Parfit's Hitchhiker__, then CDT = EDT would recommend maximizing expected utility, and thus one-boxing and paying up respectively.
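By contrast, when the probability of a full opaque box is conditioned on the agent's own choice, the expected-utility comparison flips. A minimal sketch, where the 0.99 predictor accuracy is an illustrative assumption:

```python
# Newcomb's problem when P(opaque box is full) updates on the agent's
# action (the EDT-style assumption).
MILLION, THOUSAND = 1_000_000, 1_000
ACCURACY = 0.99  # assumed predictor reliability

def ev(action):
    # P(opaque box is full | action) tracks the predictor's reliability
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    transparent = THOUSAND if action == "two-box" else 0
    return p_full * MILLION + transparent

assert ev("one-box") > ev("two-box")  # roughly $990,000 vs $11,000
```

Here one-boxing maximizes expected utility for any reasonably reliable predictor, which is the sense in which the recommendation reverses once probabilities move with the action.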

I don't think defining CDT = EDT is useful, so it may be worthwhile to come up with a new decision theory. Setting labels aside and without committing to a specific decision theory, I think one should follow the dominance principle when probabilities are held constant, and thus choose to smoke, two-box, etc.

But if probabilities *aren’t* constant, then one should pre-commit to an action that does not necessarily dominate (assuming constant probabilities) in exchange for an outcome that is (second?) best, e.g. pre-committing to pay Parfit’s biker.
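The ex-ante calculation behind pre-committing to pay the biker can be sketched in the same way; the dollar values are illustrative assumptions, not taken from the post:

```python
# Parfit's Hitchhiker as a pre-commitment calculation.
RESCUE_VALUE = 1_000_000  # assumed value of being driven out of the desert
FARE = 1_000              # assumed payment demanded once in town

def ev(commits_to_pay):
    # Ex post, keeping the fare dominates; but a reliable driver only
    # rescues agents who will in fact pay, so the ex-ante comparison is:
    rescued = commits_to_pay
    fare_paid = FARE if commits_to_pay else 0
    return RESCUE_VALUE - fare_paid if rescued else 0

assert ev(True) > ev(False)  # $999,000 vs $0: pre-committing wins ex ante
```

The dominated option (paying) buys the second-best outcome, which is still far better than the outcome of refusing to commit.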

This would be the kind of decision theory that recommends that one invoke EDT or __FDT__ when probabilities can be updated by the agent, and invoke CDT otherwise. This would be the kind of decision theory that smokes, one-boxes, and doesn’t pay the biker ex-post, but “chooses to pay the biker ex-ante.” In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility.

Of course, ideas are more important than labels, but labels still remain useful, and if I were to choose a label for such a decision procedure, given its relation to whether probabilities remain constant or update based on decisions, I would choose the name Bayesian Decision Theory (BDT).

Further thoughts and comments on decision theory are welcome.

I quite like this post. I think, though, that your conclusion, to use CDT when probabilities aren't affected by your choice and use EDT when they are affected, is slightly strange. As you note, CDT gives the same recommendations as EDT in cases where your decision affects the probabilities, so it sounds to me like you would actually follow CDT in all situations (and only trivially follow EDT in the special cases where the two make the same recommendations).

I think there's something to pointing out that CDT in fact recommends one-boxing wherever your action can affect what is in the boxes, but I think you should be more explicit about the fact that you prefer CDT.

I think near the end of the post you want to call it Bayesian decision theory. That's a nice name, but I don't think you need a new name, especially because causal decision theory already captures the same idea, is well known, and points to the distinctive feature of this view: that you care about causal probabilities rather than probabilities that use your own actions as evidence when they make no causal difference.

When you say "This would be the kind of decision theory that smokes, one-boxes, and doesn’t pay the biker ex-post, but “chooses to pay the biker ex-ante.” In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility.", I find it an odd thing to say, and perhaps a bit misleading, because that's what both EDT and CDT already do; they just have different conceptions of what expected utility is.

A quick clarification: I mean that "maximize expected utility" is what both CDT and EDT do, so saying "In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility" is perhaps misleading.