Thanks for all the questions, all - I’m going to wrap up here! Maybe I'll do this again in the future, hopefully others will too!
Hi,
I thought that it would be interesting to experiment with an Ask Me Anything format on the Forum, and I’ll lead by example. (If it goes well, hopefully others will try it out too.)
Below I’ve written out what I’m currently working on. Please ask any questions you like, about anything: I’ll then either respond on the Forum (probably over the weekend) or on the 80k podcast, which I’m hopefully recording soon (and maybe as early as Friday). Apologies in advance if there are any questions which, for any of many possible reasons, I’m not able to respond to.
If you don't want to post your question publicly or non-anonymously (e.g. it's a "Why are you such a jerk?" sort of question), or if you don't have a Forum account, you can use this Google form.
What I’m up to
Book
My main project is a general-audience book on longtermism. It’s coming out with Basic Books in the US, Oneworld in the UK, Volante in Sweden and Gimm-Young in South Korea. The working title I’m currently using is What We Owe The Future.
It’ll hopefully complement Toby Ord’s forthcoming book. His is focused on the nature and likelihood of existential risks, and especially extinction risks, arguing that reducing them should be a global priority of our time. He describes the longtermist arguments that support that view, but without relying heavily on them.
In contrast, mine is focused on the philosophy of longtermism. On the current plan, the book will make the core case for longtermism, and will go into issues like discounting, population ethics, the value of the future, political representation for future people, and trajectory change versus extinction risk mitigation. My goal is to make the case for the importance and neglectedness of future generations in the same way that Animal Liberation did for animal welfare.
Roughly, I’m dedicating 2019 to background research and thinking (including posting on the Forum as a way of forcing me to actually get thoughts into the open), and then 2020 to actually writing the book. I’ve given the publishers a deadline of March 2021 for submission; if I meet it, the book would come out in late 2021 or early 2022. I’m planning to speak at a small number of universities in the US and UK in late September of this year to get feedback on the core content of the book.
My academic book, Moral Uncertainty (co-authored with Toby Ord and Krister Bykvist), should come out early next year: it’s been submitted, but OUP have been exceptionally slow in processing it. It’s not radically different from my dissertation.
Global Priorities Institute
I continue to work with Hilary and others on the strategy for GPI. I also have some papers on the go:
- The case for longtermism, with Hilary Greaves. It’s making the core case for strong longtermism, arguing that it’s entailed by a wide variety of moral and decision-theoretic views.
- The Evidentialist’s Wager, with Aron Vallinder, Carl Shulman, Caspar Oesterheld and Johannes Treutlein, arguing that if one aims to hedge under decision-theoretic uncertainty, one should generally go with evidential decision theory over causal decision theory.
- A paper, with Tyler John, exploring the political philosophy of age-weighted voting.
I have various other draft papers, but have put them on the back burner for the time being while I work on the book.
Forethought Foundation
Forethought is a sister organisation to GPI, which I take responsibility for: it’s legally part of CEA and independent of the University. We had our first class of Global Priorities Fellows this year, and will continue the program into future years.
Utilitarianism.net
Darius Meissner and I (with help from others, including Aron Vallinder, Pablo Stafforini and James Aung) are creating an introduction to classical utilitarianism at utilitarianism.net. Even though ‘utilitarianism’ gets several times the search traffic of terms like ‘effective altruism,’ ‘givewell,’ or ‘peter singer’, there’s currently no good online introduction to utilitarianism. This seems like a missed opportunity. We aim to put the website online in early October.
Centre for Effective Altruism
We’re down to two very promising candidates in our CEO search; this continues to take up a significant chunk of my time.
80,000 Hours
I meet regularly with Ben and others at 80,000 Hours, but I’m currently considerably less involved with 80k strategy and decision-making than I am with CEA.
Other
I still take on select media, especially podcasts, and select speaking engagements, such as for the Giving Pledge a few months ago.
I’ve been taking more vacation time than I used to (planning six weeks in total this year), and I’ve been dealing on and off with chronic migraines. I’m not sure if the additional vacation time has decreased or increased my overall productivity, but the migraines have decreased it by quite a bit.
I am continuing to try (and often fail) to become more focused in what work projects I take on. My long-run career aim is to straddle the gap between research communities and the wider world, representing the ideas of effective altruism and longtermism. This pushes me in the direction of prioritising research, writing, and select media, and I’ve made progress in that direction, but my time is still more split than I'd like.
Agree that "going well" is under-defined. I was mostly using it for brevity, but it probably caused more confusion than it was worth. A definition I might use is "preserves the probability of getting to the best possible futures" (or, even better, increases that probability). Mainly because, from an EA perspective, if we've locked in a substantially suboptimal moral situation (even if people are around), we've effectively lost most possible value - which I'd call x-risk.
The main point was fairly object-level: Will's beliefs imply either a roughly 1% likelihood of AGI in 100 years, or a roughly 99% likelihood of it "not reducing the probability of the best possible futures", or some combination (e.g. <10% likelihood of AGI in 100 years and, even if we get it, >90% likelihood of it not negatively influencing the probability of the best possible futures). Each of these sounds somewhat implausible to me, so I'm curious about the intuition behind whichever one Will believes.
Definitely agree. Things like this shouldn't be approached with a 50-50 prior - throw me into another century and I think a <5% likelihood of AGI, the Industrial Revolution, etc. is very reasonable on priors. I just think that probability can shift relatively quickly in response to observations. For the Industrial Revolution, the relevant observations might be: you've already had the agricultural revolution (so a smallish fraction of the population can grow enough food for everyone); engines work well and relatively affordably; you've had large-scale political stability for a while, such that you can interact peacefully with millions of other people; and you have proto-capitalism, where you can produce and sell things and reasonably expect to make money doing so. At that point, from an inside view, "we can use machines and spare labour to produce a lot more stuff per person, and we can make lots of money producing a lot of stuff, so people will start doing that more" feels like a reasonable position. Those observations would shift me from single digits or less to at least >20% on the Industrial Revolution happening in that century - probably more, but I'm discounting for hindsight bias. (I don't know if this is a useful comparison; I'm just using it since you mentioned it, and it does seem similar in some ways: the base rate is low, but it did eventually happen.)
For AI, these observations seem relevant: we have a plausible physical substrate; we have better predictive models of what the brain does (connectionism and its refinements seem plausible, and have been fairly successful over the last few decades despite being unpopular initially); we're starting to see how comparably long-evolved mechanisms work and to duplicate some of them; we've reached super-human performance on some tasks historically considered hard or requiring great intelligence; and the physical substrate is reaching scales that seem comparable to the brain.
In any case, this is getting a bit far from my original question, which was just: which of those positions on AGI does Will hold, and what's the intuition behind it?
I'd usually want to modify my definition of "well" to "preserves the probability of getting to the best possible futures AND doesn't increase the probability of the worst possible futures", but that's a bit more verbose.