This is a special post for quick takes by Liam Robins. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Consider adopting the term o-risk.

William MacAskill has recently been writing a bunch about how, if you're a longtermist, it's not enough merely to avoid catastrophic outcomes. Even if we get a decent long-term future, it may still fall far short of the best future we could have achieved. This outcome, a merely okay future when we could have had a great one, would still be quite tragic.

Which got me thinking: EAs already have terms like x-risk (for existential risks, or things that could cause human extinction) and s-risk (for suffering risks, or things that could cause widespread future suffering of conscious beings). But as far as I'm aware, there isn't any term for the specific kind of risk that MacAskill is worried about here. So I propose adding the term o-risk: a risk that would make the long-term future okay, but much worse than it would optimally be.

You might want to check out this post (only indirectly related, but maybe useful):

https://forum.effectivealtruism.org/posts/zuQeTaqrjveSiSMYo/a-proposed-hierarchy-of-longtermist-concepts
 

Personally I don't mind o-risk and think it has some utility, but s-risk somewhat seems like it still works here. Isn't an o-risk just a smaller-scale s-risk?
 
