Hi, this is my first post and I apologize if this question is too subjective, in which case I'll take it down. Ok here goes:
I'm personally starting to feel an accelerating, slightly visceral sense of fear at the quickening pace of news about AI breakthroughs that seem mere years from causing mass unemployment among white-collar and blue-collar workers alike (everything from automated artistry to automated burger-making). My wife and I have been incredibly blessed with two adorable toddlers so far, and if they eat healthily, exercise, and benefit from the arrival of regenerative medicine such as stem cell therapies, it seems quite reasonable that they'll live for at least 110 years, if not much longer (I hope even thousands of years). Even taking 110 years as the base case, it seems a near-certainty that a transformative and super-dangerous AGI Singularity or intelligence explosion will occur while they are alive. Since I obviously deeply love our kids, I think about this a lot, and since I work in this field and am well aware of the risks, I tend to rank the Singularity alongside nuclear war as the #1 or #2 threat to my young children's lives.
I also can't help but wonder what jobs they'll be able to find that haven't already been taken over by AI by the time they graduate from college, 20 or more years from now.
I wish my fears were unfounded, but I'm well acquainted with the various dangers, both x-risks and s-risks, associated with unaligned, hacked, or corrupted AGI. I help run a startup called Preamble, which works to reduce AGI s-risk and x-risk, and as part of our civic engagement efforts I've spent some years working with folks in the US military to raise awareness of AGI x-risks, especially those posed by 'Skynet' systems (hypothetical nuclear command automation systems, which it would be deeply stupid for any nation to ever build, even the nation building them). The author of the following article, Prof. Michael Klare, is a good friend, and he sought my advice while planning the piece, so it represents a good synthesis of our views: https://www.armscontrol.org/act/2020-04/features/skynet-revisited-dangerous-allure-nuclear-command-automation He and I, along with other friends and allies, have recently been grateful to see some of our multi-year, long-shot civic engagement efforts bear fruit! Most exciting are these two US government statements:
(1) In March 2021, the National Security Commission on AI (NSCAI) included a couple of lines in its official Report to Congress which, for the first time, briefed Congress on value alignment as a field of technology, and one the US should invest in to reduce AGI risk: "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values."
(2) In October 2022, the Biden administration's 2022 Nuclear Posture Review (NPR) became the first-ever statement by the US federal government to explicitly prohibit any adoption of nuclear command automation by the US: "In all cases, the United States will maintain a human “in the loop” for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment."
I'm extremely grateful that the US has finally banned Skynet systems! Now we at Preamble, together with others in the arms control community, are trying to find allies within China who can persuade their government to enact a similar ban on Skynet systems in its jurisdiction. That would also open the door for our nations to hold a dialogue on how to avoid being tricked into war by, say, an insane terrorist group using cyberattacks and misinformation to set off what is called a catalytic nuclear war: a war that neither side wanted, caused by trickery from a third, "catalytic" party. https://mwi.usma.edu/artificial-intelligence-autonomy-and-the-risk-of-catalytic-nuclear-war/
All of us in the AGI safety community are working hard to prevent bad outcomes, but the years feel like they're slipping away frighteningly quickly on what might be the wick of the candle of human civilization, if we don't get a thousand details right so that everything goes according to plan when superintelligence is born. Not only do we have to solve AI alignment, but we also have to solve software and hardware supply chain security; otherwise we can't trust that the compiled software actually does what the source code on the screen says it does. http://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf
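For anyone who hasn't read the Thompson paper, here's a toy sketch of the core idea in Python. This is my own hypothetical simplification, not code from the paper (the real attack targeted the C compiler and the Unix login program): a compromised compiler can backdoor the programs it compiles, and can graft the trojan into any new compiler built from perfectly clean source.

```python
# Toy sketch of Ken Thompson's "Trusting Trust" attack (hypothetical
# simplification). The trojan exists only in the compiler binary, so
# auditing every line of source code reveals nothing.

CLEAN_LOGIN_SRC = "def check_login(user, pw): return pw == lookup(user)"
CLEAN_COMPILER_SRC = "def compile(src): return src  # honest passthrough"

TROJAN = "\ndef backdoor(user): return user == 'attacker'  # hidden bypass"

def trojaned_compile(src: str) -> str:
    """Stands in for the compromised compiler binary."""
    if "check_login" in src:
        return src + TROJAN  # silently backdoor the login program
    if "def compile" in src:
        # Recompiling the compiler from pristine source? Re-insert this
        # same injection logic, so the attack survives a clean rebuild.
        return src + "\n# (trojan injection logic re-inserted here)"
    return src

# Both source strings above are clean; only the compiler's *behavior*
# betrays it.
print(trojaned_compile(CLEAN_LOGIN_SRC))
```

The unsettling conclusion is that what the pixels on the screen say proves nothing about the binary you actually run, which is why supply chain security has to be solved alongside alignment.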
I'm sorry if I'm rambling, but I wanted to convey an overall impression of this emotion and see if others are feeling the same. I dread that our civilization is hurtling at 100 mph toward an impassable cliff, and it's starting to give me a visceral sense of fear. It really does seem like OpenAI, and the companies it is inspiring, are flooring the gas pedal, and I was just wondering if anyone else is feeling scared. Thank you.
You’re absolutely right. Unless tax policy catches up fast, things like the robots replacing fast-food chefs will take money out of the little guy’s wallet and put it straight into the hands of the wealthiest business moguls, who no longer have to pay human wages.
This fundamental issue is addressed very well in an excellent book you might love to check out: Taxing Robots, by Prof. Xavier Oberson, a tax law professor at the University of Geneva. Here’s the book on Amazon: https://a.co/d/eWjvuWE and here’s a summary: https://en.empowerment.foundation/amp/taxing-robots-by-xavier-oberson-professor-at-geneva-university-attorney-at-law-1
Even a page in, the book’s core premise seemed so obvious in retrospect, yet it hasn’t caught on as a possible solution: algorithms and robots don’t pay income tax! Taxing human labor while leaving machine labor untaxed effectively subsidizes robots, and that needs to be fixed. (The toy comparison after the list below makes the asymmetry concrete.)
There are two possible solutions:
Left-wing approach: tax algorithmic labor at a rate similar to, or higher than, the rate on human labor.
Right-wing approach: repeal the income tax! Cut entitlements to help fix the budget, and replace the lost revenue with so-called “Pigouvian” taxes on harmful activities like pollution.
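Here’s a minimal sketch of that subsidy effect in Python. All the figures are hypothetical except the 7.65% US employer-side payroll (FICA) rate, and the `robot_tax_rate` parameter is just my own stand-in for an Oberson-style levy on a robot’s imputed wage, not a proposal from the book.

```python
# Toy model of the "robots don't pay income tax" asymmetry.
# All figures are hypothetical except the 7.65% US employer-side FICA rate.

def human_cost(wage: float, payroll_tax_rate: float) -> float:
    """Employer's annual cost of one worker: wage plus employer-side payroll tax."""
    return wage * (1 + payroll_tax_rate)

def robot_cost(price: float, lifetime_years: float, upkeep: float,
               robot_tax_rate: float = 0.0) -> float:
    """Annualized cost of a robot doing the same job. robot_tax_rate is a
    hypothetical Oberson-style levy on the robot's imputed annual 'wage'."""
    annual = price / lifetime_years + upkeep
    return annual * (1 + robot_tax_rate)

human = human_cost(wage=35_000, payroll_tax_rate=0.0765)
robot_untaxed = robot_cost(price=130_000, lifetime_years=5, upkeep=10_000)
robot_taxed = robot_cost(price=130_000, lifetime_years=5, upkeep=10_000,
                         robot_tax_rate=0.0765)

print(f"Human worker:   ${human:>9,.2f}/yr")          # $37,677.50 -- taxed
print(f"Robot, untaxed: ${robot_untaxed:>9,.2f}/yr")  # $36,000.00 -- robot wins
print(f"Robot, taxed:   ${robot_taxed:>9,.2f}/yr")    # $38,754.00 -- human wins
```

With these made-up numbers, the tax wedge alone is what makes the robot cheaper; taxing its imputed wage at the same rate as the worker flips the decision back.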
Though my politics lean a bit more left, I think this is an area where Republicans have the ideological advantage: getting rid of the income tax and standing up a new carbon tax is doable, whereas the Democrats’ solution requires somehow defining “labor-saving automation” in the tax code, which seems really hard to do fairly given the influence of special interests.
Though I voted for Obama and Biden, I would happily vote for DeSantis if he ran on repealing the income tax and fixing the budget gap in other ways that don’t penalize human workers!