https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf
This paper, produced by the Future of Humanity Institute, is fairly heavy for me to digest, but I think it reaches conclusions similar to a profound concern I have:
- "Intelligence" does not necessarily need to have anything to do with "our" type of intelligence, where we steadily build on historical knowledge; indeed, that approach naturally falls prey to favouring "hedgehogs" over "foxes" (in the hedgehogs-v-foxes comparison from Tetlock's "Superforecasting"), and hedgehogs are worse than random at predicting the future;
- With the latest version of AlphaZero, which quickly reached superintelligent levels in three different game domains with no human intervention, we have to face the uncomfortable truth that AI has already far surpassed our own level of intelligence.
- that corporations, as legal persons with profit maximisation at their core (a value orthogonal to the values that cause humanity to thrive), could rapidly become extremely dominant with this type of AI applied across all the tasks they are required to perform.
- that this represents a real, deep and potentially existential threat that the EA community should take extremely seriously. It is also at the core of the increasingly systemic failure of politics.
- that this is particularly difficult for the EA community to accept, given the high status its members place on their intellectual capabilities (and status is a key driver in our limbic brain, so it will constantly play tricks on us);
- but that unless EAs are far more intelligent than Kasparov, Lee Sedol and all the others who play these games, this risk should be taken very seriously.
- that the prime purpose of politics should thus potentially be to ensure that corporations act in a way that is value-aligned with the communities they serve, with international coordination as necessary.
- I will give £250 to a charity chosen by the first person who can identify a flaw in my argument, other than one along the lines of "you are too stupid to understand".
If we avoid this dystopian near future of "superintelligent multi-level marketing", I hope the future will look more like the one suggested by Steven Strogatz (https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html), which would leave the key remaining challenge as one of creating a mechanism for ensuring value alignment:
"But envisage a day, perhaps in the not too distant future, when AlphaZero has evolved into a more general problem-solving algorithm; call it AlphaInfinity. Like its ancestor, it would have supreme insight: it could come up with beautiful proofs, as elegant as the chess games that AlphaZero played against Stockfish. And each proof would reveal why a theorem was true; AlphaInfinity wouldn’t merely bludgeon you into accepting it with some ugly, difficult argument.
For human mathematicians and scientists, this day would mark the dawn of a new era of insight. But it may not last. As machines become ever faster, and humans stay put with their neurons running at sluggish millisecond time scales, another day will follow when we can no longer keep up. The dawn of human insight may quickly turn to dusk.
Suppose that deeper patterns exist to be discovered — in the ways genes are regulated or cancer progresses; in the orchestration of the immune system; in the dance of subatomic particles. And suppose that these patterns can be predicted, but only by an intelligence far superior to ours. If AlphaInfinity could identify and understand them, it would seem to us like an oracle.
We would sit at its feet and listen intently. We would not understand why the oracle was always right, but we could check its calculations and predictions against experiments and observations, and confirm its revelations. Science, that signal human endeavor, would reduce our role to that of spectators, gaping in wonder and confusion.
Maybe eventually our lack of insight would no longer bother us. After all, AlphaInfinity could cure all our diseases, solve all our scientific problems and make all our other intellectual trains run on time. We did pretty well without much insight for the first 300,000 years or so of our existence as Homo sapiens. And we’ll have no shortage of memory: we will recall with pride the golden era of human insight, this glorious interlude, a few thousand years long, between our uncomprehending past and our incomprehensible future."