Marc Andreessen's "Why AI Will Save the World" has rapidly gained readership, benefiting from his 1.2 million followers on Twitter. In the piece, he levels many underhanded insults at the AI safety community and offers a weak analysis of millennialism. His piece also falls into the trap of arguing "AI won't have intentions and therefore won't want to kill us, so there is no need to consider x-risk from AGI." There is so much wrong with this argument, but I would love to hear the EA community's responses to the piece. I am hoping to engage Andreessen in an interview or debate in the future, but for now I would really love to hear the EA and AI safety communities' gut checks and counterarguments to the points made in his piece.

Similar arguments will no doubt be leveled elsewhere, so sharing effective responses seems high-value for communication purposes.





Just as I believe the risks from AI are overblown, so too are the claimed benefits. In particular, the following paragraph is absurd:

I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

Inflicting bloodshed on your enemy is a large part of how wars are won. AI advisers might reduce the blood spilled on your side, but they will more than make up for it through the more accurate killing of people on the other side of the battlefield.

In general, Marc views AI as some perfect, flawless being that never makes mistakes, which is not how actual software, or actual intelligence, works. I think AI will eventually be a net positive for humanity, but the techno-utopian dream will never entirely materialise.

I wrote a piece about a flawed argument of his here, in which he implicitly groups AI with other safe technologies in order to argue that AI is safe. Hope this is helpful to you!
