
Next week for The 80,000 Hours Podcast I'll be interviewing Joe Carlsmith, Senior Research Analyst at Open Philanthropy.

Joe did a BPhil in philosophy at Oxford University and is a prolific writer on topics both philosophical and practical (until recently his blog was called 'Hands and Cities', but it's all now collected on his personal site).

What should I ask him?

Some things Joe has written which we could talk about include:

4 Answers

There seem to be two different conceptual models for AI risk.

The first is a model like the one in his report "Existential risk from power-seeking AI", in which he lays out a number of things that, if they all happen, would lead to AI takeover.

The second is a model (which stems from Yudkowsky and Bostrom, and appears more recently in Michael Cohen's work: https://www.lesswrong.com/posts/XtBJTFszs8oP3vXic/?commentId=yqm7fHaf2qmhCRiNA) where we should expect takeover by malign AGI by default, unless certain things happen.

I personally think the second model is much more reasonable. Do you have any rebuttal?

See also Nate Soares arguing against Joe's conjunctive breakdown of risk here, and my own argument here.
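To make the contrast concrete, here is a rough sketch of why the two framings tend to pull probability estimates in different directions. The numbers are purely illustrative (not Joe's, Nate's, or anyone else's actual estimates); the point is only the structure of the calculation.

```latex
% Conjunctive framing (Carlsmith-style): takeover requires several premises
% to all hold, so the estimate is a product of conditional premise probabilities.
% Illustrative numbers only: six premises at 0.8 each.
\[
P(\text{takeover}) \;=\; \prod_{i=1}^{6} P(\text{premise}_i \mid \text{premises } 1,\dots,i-1)
\;\approx\; 0.8^{6} \;\approx\; 0.26
\]

% Default-doom framing (Yudkowsky/Bostrom-style): takeover is the default
% unless every one of several safety conditions is achieved.
\[
P(\text{takeover}) \;=\; 1 - \prod_{j=1}^{6} P(\text{condition}_j \mid \text{conditions } 1,\dots,j-1)
\;\approx\; 1 - 0.8^{6} \;\approx\; 0.74
\]
```

Same number of claims and the same per-claim confidence, but very different bottom lines, which is roughly the structural disagreement at stake between the two models.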

Would he take the 51:49 bet repeatedly, as "maximise EV" might suggest? Why / why not?

(I skimmed some of his series on EV but want to reread.)

https://joecarlsmith.com/2022/03/18/on-expected-utility-part-2-why-it-can-be-ok-to-predictably-lose
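For context on what repeated acceptance implies, here is the basic arithmetic. This assumes the bet is double-or-nothing (a 51% chance of doubling what you have, a 49% chance of losing everything); that is my reading of the question, not necessarily the exact setup in Joe's series.

```latex
% One double-or-nothing 51:49 bet on a current value V:
\[
\mathbb{E}[\text{one bet}] \;=\; 0.51 \cdot 2V \;+\; 0.49 \cdot 0 \;=\; 1.02\,V \;>\; V
\]

% Taking the bet n times in a row, staking everything each time:
\[
\mathbb{E}[\text{after } n \text{ bets}] \;=\; (1.02)^{n} V \;\longrightarrow\; \infty,
\qquad
P(\text{still having anything}) \;=\; 0.51^{n} \;\longrightarrow\; 0
\]
```

So naive EV-maximisation says to keep accepting, even though you almost surely end up with nothing: the "predictably lose" tension the linked post discusses.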

In "Against neutrality...," he notes that he's not arguing for a moral duty to create happy people, and it's just good "others things equal." But, given that the moral question under opportunity costs is what practically matters, what are his thoughts on this view?: "Even if creating happy lives is good in some (say) aesthetic sense, relieving suffering has moral priority when you have to choose between these." E.g., does he have any sympathy for the intuition that, if you could either press a button that treats someone's migraine for a day or one that creates a virtual world with happy people, you should press the first one?

(I could try to shorten this if necessary, but worry about the message being lost from editorializing.)

Since he released his report on existential risk from power-seeking AI, his estimate of AI x-risk has increased from 5% to >10%. Why?
