I'm planning to spend time on the afternoon (UK time) of Wednesday 2nd September answering questions here (though I may get to some sooner). Ask me anything!
A little about me:
- I work at the Future of Humanity Institute, where I run the Research Scholars Programme, a 2-year programme giving junior (or prospective) researchers space to explore or to get deep into something
- (Applications currently open! Last full day we're accepting them is 13th September)
- I've been thinking about EA/longtermist strategy for the better part of a decade
- A lot of my research has approached the question of how we can make good decisions under deep uncertainty; this ranges from the individual to the collective, and the theoretical to the pragmatic
- e.g. A bargaining-theoretic approach to moral uncertainty; Underprotection of unpredictable statistical lives compared to predictable ones; or Defence in depth against human extinction
- Recently I've been thinking about themes such as how we try to avoid catastrophic behaviour from humans (and how that might relate to efforts with AI); how informational updates propagate through systems; and the roles of things like 'aesthetics' and 'agency' in social systems
- I think my intellectual contributions have often involved clarifying or helping build more coherent versions of ideas/plans/questions
- I predict that I'll typically have more to say in response to relatively precise questions (broad questions are more likely to get an answer like "it depends")
I've heard many people express the view that in EA, and perhaps especially in longtermism:
1. Do all of those claims seem true to you?
2. If so, do you expect this to remain true for a long time, or do you think we're already moving rapidly towards fixing it? (E.g., maybe there are a lot of people already "in the pipeline", reducing the need for new people to enter it.)
3. Do you think there are other ways to potentially address this problem (if it exists) that deserve more attention or that I didn't mention above?
4. Do you think RSP, or things like it, are especially good ways to address this problem (if it exists)?
I've heard the view that more EAs should consider being research assistants to seemingly highly skilled EA researchers[1], both for their own learning and to improve those researchers' productivity. Is this what you mean?
I didn't deliberately exclude mention of this from my above comment; I just didn't think to include it. And now that you mention it (or something similar), I'd be interested in Owen's take on this as well :)
[1] One could of course also do this for highly skilled non-EA researchers working in relevant areas. I just haven't heard that suggested as often; I'm not sure if there are good reasons for that.