https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf
This paper produced by the Future of Humanity Institute is fairly heavy for me to digest, but I think it reaches conclusions similar to a profound concern of mine:
- "Intelligence" does not necessarily need to have anything to do with "our" type of intelligence, in which we steadily build on historic knowledge; indeed, that approach naturally falls prey to favouring "hedgehogs" over "foxes" (the hedgehogs-versus-foxes comparison in Tetlock's work on expert forecasting) - and hedgehogs are worse than random at predicting the future;
- with the latest version of AlphaZero, which quickly reached superintelligent levels in three different game domains with no human intervention, we have to face the uncomfortable truth that AI has already far surpassed our own level of intelligence;
- that corporations, as legal persons with profit maximisation at their core (a value orthogonal to the values that cause humanity to thrive), could rapidly become extremely dominant with this type of AI used across all the tasks they are required to perform;
- that this represents a deep and potentially existential threat that the EA community should take extremely seriously; it is also at the core of the increasingly systemic failure of politics;
- that this is particularly difficult for the EA community to accept, given the high status it places on intellectual capability (and status is a key driver in our limbic brain, so it will constantly play tricks on us);
- but that unless EAs are far more intelligent than Kasparov, Lee Sedol and everyone else who plays these games, this risk should be taken very seriously;
- that the prime purpose of politics should therefore perhaps be to ensure that corporations act in a way that is value-aligned with the communities they serve, with international coordination as necessary.
- I will give £250 to a charity of the choice of the first party who is able to come up with a flaw in my argument that is not along the lines of "you are too stupid to understand".
Hi Kit - Happy New Year!
Thanks for that - yes, I hope a more digestible summary will be produced. I am not intending to be hostile at all; I am just very worried about the AI issue. I simply see it as a different issue from the one highlighted by the EA community, and much more like the one highlighted in the paper, hence my purpose in raising it.
I think humans are not particularly generally intelligent; rather, they become programmed/conditioned to be relatively good at the tasks necessary to survive in their environment (e.g. a baby dropped into the rainforest will not survive as long as the much less intelligent animals that live there). Indeed, my worry is that we are surprisingly stupid and manipulable as a broad group. As a species, our driving motivators (fear, status) generally create the narrative in our cognitive consciousness, and our blind spot is the belief that we are much smarter than we are. In the US and the UK the political process has become paralysed as seemingly logical statements, apparently addressed to our conscious brain, actually play to our deep subconscious motivators, creating a ridiculous tribalism far removed from any form of logic.
We perhaps "feel" intelligent because we create complex intellectual frameworks that explain things in detail, but this is really a process of "mapping the territory". Its hollowness was shown in Shogi, Chess and Go by AlphaZero: the centuries of human study poured into these games, which had repeatedly mapped the territory, were blown aside by a self-improving algorithm working out "what fits". Maps in the real world might be good talking points, but they are simply nowhere near accurate enough at a human level of intelligibility.
As an investment banker I never had much interest in mapping the territory (despite being logical), but I was interested in "the best way to get from here to there avoiding the obstacles" (I did not care how, as long as it worked). And this is how life generally works outside of academia: "how can I profit-maximise doing x without breaking any laws (better still if I find a clever way around the laws)?" With increasingly powerful self-improving algorithms, this ends up in the kind of dystopia shown in this video from Yuval Noah Harari and Tristan Harris: "supercomputers" (superintelligence) pointed at our brains. https://www.youtube.com/watch?v=v0sWeLZ8PXg
In all of this I know it is hard for EAs to engage properly: status (a powerful deep motivator) is gained in any community largely by agreeing with that community's norms, and since my views are far from the norm in this community, status is gained by rejecting what I say. But we share the same deep values: we want the world to be the best place it can be (which is something very much other than making as much money as possible for already-rich shareholders). And because I have huge belief in the potential of, and need for, the EA community, you will forgive me if I keep trying.