We can adjust the risk per unit of reward or the reward per unit of risk.
In the absence of credible, near-term, high-likelihood existential risks, and absent being path-locked onto an existential trajectory, I would rather adjust the reward per unit of risk.
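One rough way to formalize this (the notation is mine, just a sketch): write expected value as

$$\mathbb{E}[\text{value}] = (1 - p)\,V,$$

where $p$ is the probability of existential catastrophe and $V$ is the value of the futures where we survive. Reducing $p$ is adjusting the risk side; raising $V$ is adjusting the reward side. When $p$ is already low and we're not locked in, the marginal return to raising $V$ is $\partial\,\mathbb{E}/\partial V = 1 - p \approx 1$, so effort spent on $V$ loses almost nothing to the risk discount.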
I also suspect that the most desirable paths to improving the value of futures where we survive will come with a host of advancements that let us combat risks more effectively anyway. Yes, I'm sure there are some really dumb ways to "improve" the value of futures, ones that amount to metric hacking or taking excessive risks, but assuming we have a modicum of sense (and we do), I'm comfortable.
I may update my number once I see the actual percentage that gets chosen after I post.