In the most recent episode of the 80,000 Hours podcast, Rob Wiblin and Open Phil's Ajeya Cotra discuss "the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely."
"They also discuss:
- Which worldviews Open Phil finds most plausible, and how it balances them
- Which worldviews Ajeya doesn’t embrace but almost does
- How hard it is to get to other solar systems
- The famous ‘simulation argument’
- When transformative AI might actually arrive
- The biggest challenges involved in working on big research reports
- What it’s like working at Open Phil
- And much more"
I'm creating this thread so that anyone who wants to share their thoughts on any of the topics covered in this episode can do so. This is in the spirit of MichaelA's suggestion of posting all EA-relevant content here.
I haven't finished the whole episode yet, but I didn't understand the part about fairness agreements and the veil of ignorance, which Rob and Ajeya discussed as a way to decide how much money to allocate to each worldview (roughly 00:27:50 to 00:41:05). I think I understood the outlier opportunities principle, though.
I've re-read that part of the transcript to try to understand it better, but I still don't. I also googled the veil of ignorance, and that part started to make more sense, but I still don't understand the fairness agreements idea. Is there an article that explains what Ajeya meant by that, or can someone explain it in a different way? Thanks!
(It turns out I was slightly mistaken in my other comment: there are in fact a few public paragraphs on the idea of fairness agreements in one section of a 2018 post by Holden Karnofsky.)