I'm planning to spend time on the afternoon (UK time) of Wednesday 2nd September answering questions here (though I may get to some sooner). Ask me anything!
A little about me:
- I work at the Future of Humanity Institute, where I run the Research Scholars Programme, a 2-year programme that gives junior researchers (or prospective researchers) space to explore, or to go deep on something
- (Applications are currently open! The last full day we're accepting them is 13th September)
- I've been thinking about EA/longtermist strategy for the better part of a decade
- A lot of my research has approached the question of how we can make good decisions under deep uncertainty; this ranges from the individual to the collective, and the theoretical to the pragmatic
- e.g. A bargaining-theoretic approach to moral uncertainty; Underprotection of unpredictable statistical lives compared to predictable ones; or Defence in depth against human extinction
- Recently I've been thinking around the themes of how we try to avoid catastrophic behaviour from humans (and how that might relate to efforts with AI); how informational updates propagate through systems; and the roles of things like 'aesthetics' and 'agency' in social systems
- I think my intellectual contributions have often involved clarifying or helping build more coherent versions of ideas/plans/questions
- I predict that I'll typically have more to say in response to relatively precise questions (broad questions are more likely to get an answer like "it depends")
Do you think Ellsberg preferences and/or uncertainty/ambiguity aversion are irrational?
Do you think it's a requirement of rationality to commit to a single joint probability distribution, rather than use multiple distributions or ranges of probabilities?
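To make the setup concrete, here's a minimal sketch of the single-urn Ellsberg case under maximin expected utility over a set of priors (in the Gilboa-Schmeidler spirit). The urn numbers are the standard ones, but the finite grid of priors and the `maximin_eu` helper are purely illustrative choices, not drawn from any particular paper:

```python
# A minimal sketch of the single-urn Ellsberg problem under maximin
# expected utility, using a finite grid of priors to stand in for the
# full set of distributions. Illustrative only.

import numpy as np

# Urn: 90 balls; 30 red, and 60 split between black and yellow in an
# unknown proportion. Represent the ambiguity as a set of priors over
# (red, black, yellow), indexed by the unknown number of black balls.
priors = [np.array([30, b, 60 - b]) / 90 for b in range(61)]

# Payoff vectors over (red, black, yellow): win 1 if the drawn ball's
# colour is covered by the bet, else 0.
bets = {
    "A (red)":             np.array([1, 0, 0]),
    "B (black)":           np.array([0, 1, 0]),
    "C (red or yellow)":   np.array([1, 0, 1]),
    "D (black or yellow)": np.array([0, 1, 1]),
}

def maximin_eu(payoff):
    """Worst-case expected utility across the whole set of priors."""
    return min(float(p @ payoff) for p in priors)

for name, payoff in bets.items():
    print(f"{name}: maximin EU = {maximin_eu(payoff):.3f}")

# Output: A beats B (1/3 > 0) and D beats C (2/3 > 1/3). That is the
# classic Ellsberg pattern, and no single probability distribution over
# the urn rationalises both preferences under expected utility.
```

The point of the sketch: evaluating bets by their worst case across multiple distributions reproduces the typical Ellsberg preferences, which is exactly the tension with a single joint probability distribution that the question points at.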
Roughly yes. They might even exactly match the fully rational behaviour on some dimension under consideration, but in so doing be a worse approximation overall to full rationality.
I think a proper study of full rationality and boundedly rational actors would look at limits of behaviour as you impose weaker and weaker computational constraints. I think that it could be really useful to understand which properties of ...
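One toy way to picture "limits of behaviour as you impose weaker and weaker computational constraints": an agent that can only afford n Monte Carlo samples when estimating each gamble's expected utility, where n is the constraint. As n grows, its choices converge to those of the fully rational (exact expected-utility) agent. The two gambles and the sample budgets below are made up purely for illustration:

```python
# A toy illustration of behaviour in the limit of weakening
# computational constraints: a bounded agent estimates each gamble's
# expected utility from only n samples. As n grows, its choice
# frequency converges to the exact expected-utility choice.

import random

# Two hypothetical gambles, each a list of (probability, payoff) pairs.
GAMBLES = {
    "safe":  [(1.0, 1.0)],
    "risky": [(0.5, 0.0), (0.5, 2.2)],  # exact EU = 1.1 > 1.0
}

def sample_payoff(gamble):
    """Draw one payoff from a gamble."""
    r, acc = random.random(), 0.0
    for prob, payoff in gamble:
        acc += prob
        if r <= acc:
            return payoff
    return gamble[-1][1]

def bounded_choice(n_samples):
    """Pick the gamble with the higher n-sample estimate of its EU."""
    estimates = {
        name: sum(sample_payoff(g) for _ in range(n_samples)) / n_samples
        for name, g in GAMBLES.items()
    }
    return max(estimates, key=estimates.get)

random.seed(0)
for n in [1, 10, 100, 1000]:
    trials = 1000
    agreement = sum(bounded_choice(n) == "risky" for _ in range(trials)) / trials
    print(f"budget {n:>4}: agrees with exact EU {agreement:.0%} of the time")
```

As the sample budget grows from 1 to 1000, the agreement rate climbs from roughly chance towards 100%; studying which properties of the limiting agent appear (and how fast) as the constraint weakens is the kind of analysis gestured at above.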