I would add Unknown Killer Robots from Netflix.
I found it a compelling and eye-opening exploration of the potential dangers posed by autonomous weapons systems. The film investigates the rapid development of AI-powered drones, missiles, and other unmanned military technologies, and raises serious concerns about their lack of human oversight, potential for accidents and misuse, and the broader implications for global stability and the future of warfare.
What I found most compelling was the film's in-depth look at the technical capabilities of these systems and the concerning trends towards increased autonomy and lethality. The experts interviewed make a convincing case that without strong international regulation, we could be headed towards an arms race in AI-powered weapons that could spiral out of control.
One such article is "Strategies for Learning from Failure" by Amy C. Edmondson, published in the Harvard Business Review. This article discusses how organizations can learn from failure through activities such as detection, analysis, and experimentation. It also talks about the importance of creating a culture of psychological safety where failure is not always associated with blame.
Another article that might be of interest to you is "Organizations Can’t Change If Leaders Can’t Change with Them" by Ron Carucci, also published in the Harvard Business Review. This article discusses the importance of personal transformation for leaders in order for organizational change to be successful.
Based on your criteria, one book that might interest you is "Exploring Universal Basic Income: A Guide to Navigating Concepts, Evidence, and Practices" published by the World Bank in 2020. This book provides a comprehensive review of the evidence on universal basic income and unconditional cash transfers. It discusses the potential benefits and challenges of implementing a UBI program and provides an overview of current UBI pilots and experiences.
One book that might interest you is "Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness" by Peter Godfrey-Smith. This book explores the evolution of large brains and complex behavior in octopuses and their relatives (cuttlefish and squid) and compares human beings with our most remarkable animal relatives.
Thank you for this post. I only recently heard about cluster headaches and OPIS's work for the first time. It looks really promising to me. Has OPIS also looked into the research on capsaicin? For example: https://youtu.be/bSeYYXlH1MQ
I would estimate my disagreement at roughly 90% to 95%.
Default human values are largely indifferent or actively hostile to the suffering of non-human animals.
Humanity currently oversees massive amounts of animal suffering through factory farming & habitat destruction.
If an AGI were perfectly aligned to make things "go well" for humans, it would likely prioritize human flourishing, economic growth & resource acquisition. Unless human preferences drastically shift toward minimizing animal suffering, such an AGI will have no inherent reason to protect animals, & might simply optimize the systems that currently exploit them.
A scenario where AGI goes exceptionally well for humans often includes escaping Earth, avoiding extinction & engaging in massive space colonization. From my perspective, this is a prime driver of astronomical suffering (s-risks).
Humans often romanticize nature. If humanity uses AGI to terraform other planets or seed life across the galaxy, they might intentionally or accidentally spread wild animal suffering on an astronomical scale.
A highly advanced, human-aligned AGI might run countless simulations of Earth's evolutionary history for scientific or entertainment purposes. Tomasik has written extensively on the catastrophic moral implications if these simulated animals possess sentience & experience pain. That doesn't seem so improbable to me, given enough time.
Suffering-focused ethics prioritizes the prevention and reduction of extreme suffering over the promotion of happiness or human survival at all costs.
A future that "goes well" for humanity typically implies human survival, joy, and unfettered expansion.
For me, a future only goes objectively "well" if the total amount of extreme suffering is minimized. Therefore, a human utopia built alongside, or simply ignoring, the continuous suffering of biological or digital animals would be a profound moral failure.
The small percentage of agreement stems from the idea that if humans are wiped out by a misaligned AGI, animals might also be destroyed in the process (e.g., if the AGI harvests all biological matter on Earth). If AGI goes well for humans, animals at least avoid that specific instrumental-convergence scenario. Furthermore, human prosperity could eventually lead to moral circle expansion, where humans use AGI to actively intervene in nature to reduce wild animal suffering, but I view this as highly contingent.