On LessWrong, where there are some good comments: https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam
The fear is that Others (DeepMind, China, whoever) will develop AGI soon, so We have to develop AGI first in order to make sure it's safe, because Others won't make sure it's safe and We will. Also, We have to discuss AGI strategy in private (and avoid public discussion), so Others don't get the wrong ideas. (Generally, these claims have little empirical or rational backing; they're based on scary stories, not historically validated threat models.)
The claim that others will develop weapons and kill us with them by default implies a moral claim to resources, and a moral claim to be justified in making weapons in response.
Thank you so much for posting this. It is nice to see others in our community willing to call it like it is.
To be fair to MIRI (who I'm guessing are the organization in question), this lie is industry standard even among places that don't participate in the "strong AI" scam. Not just in how data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms, but also in how startups use human labor to pretend they have advanced AI, or in how short self-driving-car timelines are a major part of Uber's value proposition.
The emperor has no clothes. Everyone in the field likes to think they were already aware of this fact when told, but it remains helpful to point it out explicitly at every opportunity.
This seems like selective presentation of the evidence. You haven't talked about AlphaZero or generative adversarial networks, for instance.
80% by what metric? Is your claim that Facebook could find your face in a photo using logistic regression if it had enough clean data? (If so, can you show me a peer-reviewed paper supporting this claim?)
Presumably you are saying something like: "80% of the human labor w...