In particular, for someone who is very hard to convince, and one that addresses all of the objections a well-educated, rational person may have
I like the AI Alignment Wikipedia page because it provides an overview of the field that's well-written, informative, and comprehensive.
I think it's a very good explainer of the "orthodox" AI safety position.
I think it would be unlikely to change the mind of a skeptic, however. It relies too heavily on simply relaying the opinions of Ray Kurzweil and Nick Bostrom, and Kurzweil in particular is very easy to dismiss given his wildly overconfident predictions (the article states that we are on the "verge" of Drexler-style nanofactories, which should arrive "by the 2020's"; this has not aged well).
There is almost no engagement with many obvious objections, and because it w...
I don't know if it addresses all the objections one may have, but the two-part Wait But Why series (Part 1, Part 2) was what finally did it for me, and I think it is wonderfully written.
If you forced me to give numbers, I'd put the odds of catastrophe (~1 billion dead) at 1 in a thousand, and the odds of extinction at 1 in 500 thousand. Essentially, there are several plausible paths to a catastrophe, but almost none to extinction. I don't put too much stock in the actual numbers, though, as I don't think forecasting is actually useful for unbounded, long-term predictions.