Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Thiel, in 2024, describing a conversation with Elon Musk and Demis Hassabis, where Elon says "I'm working on going to Mars, it's the most important project in the world" and Demis counters "actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to Mars". (This is in the context of Thiel's long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that "there's nowhere else to go" to escape mainstream culture/civilization, that you can't escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future; hence all these musings about Carl Schmitt etc. that make me wary he is going to be egging on J.D. Vance to try to auto-coup the government.)
FTR: while Thiel has claimed this version before, the more common version (e.g. here, here, here from Hassabis' mouth, and more obliquely here in his lawsuit against Altman) is that Hassabis was warning Musk about existential risk from unaligned AGI, not threatening him with his own personally aligned AGI. However, Thiel's interpretation resonates interestingly with Elon Musk's creation of OpenAI being motivated by fear of Hassabis becoming an AGI dictator (a fear his co-founders apparently shared). It is certainly an interesting hypothesis that Thiel and Musk spent a decade together engineering both the AGI race and global democratic backsliding, wholly motivated by the same single possible one-sentence slight by Hassabis in 2012.
The overwhelming majority of Manhattan Project scientists, as well as the Undersecretary of the Navy, believed there should be a warning shot. It makes total sense from a game-theory perspective to fire a warning shot when you believe your military advantage has increased enough to significantly change the adversary's calculus.
Somewhere I remember Thiel explicitly explaining this (i.e., saying "we need to repair the intergenerational compact so all these young people stop turning socialist"), but unfortunately I don't remember where he said it, so I don't have a link.
https://www.techemails.com/p/mark-zuckerberg-peter-thiel-millennials
Current LLMs already have some level of biological capabilities and near-zero contribution to cumulative GDP growth. The assertion that "there's a huge gulf between capabilities that can get you ~10% cumulative GDP growth and capabilities that can kill billions of people" seems to imply that biological capabilities will scale orders of magnitude more slowly than the capabilities in every other field required to contribute to GDP, and I see absolutely no evidence for believing that.
there's a huge gulf between capabilities that can get you ~10% cumulative GDP growth and capabilities that can kill billions of people
This is not clear to me, and my impression is that most AI safety people would disagree with this statement as well, given the high generality of AI capabilities.
Just a month ago, Anthropic and the rest of the industry were celebrating what looked like a landmark victory. Alsup had ruled that using copyrighted books to train an AI model — so long as the books were lawfully acquired — was protected as “fair use.” This was the legal shield the AI industry had been banking on, and it would have let Anthropic, OpenAI, and others off the hook for the core act of model training.
But Alsup split a very fine hair. In the same ruling, he found that Anthropic’s wholesale downloading and storage of millions of pirated books — via infamous “pirate libraries” like LibGen and PiLiMi — was not covered by fair use at all. In other words: training on lawfully acquired books is one thing, but stockpiling a central library of stolen copies is classic copyright infringement.
Maybe organizations could avoid problem 3 by setting up a system to get public input on their projects so they can avoid doing projects that locals don’t want? But expand this out, and at that point you’re basically running (part of) a government - after all, aggregating people’s preferences into decisions is essentially what governments do. (After all, “locals” aren’t a homogeneous group with uniform preferences.) And then you definitely run into all the usual problems with preference aggregation, and you certainly are trying to replace (part of) the local government’s role.
"Preference aggregation" is also what civil society (e.g. associations, free newspapers, labor unions, environmental groups) does. Unless Acemoglu has abandoned social liberalism while I haven't looked, I am fairly confident he wouldn't consider all civil society to be "trying to replace (part of) the local government's role". So funding civil society is potentially another broad class of interventions that would fit all those desiderata (like @huw's, it falls under the broader category of "building local capacity").
Yeah, I think the problem is that surveying experts for their p(doom) isn't something that has been done with climate experts, AFAICT. (I'll let you decide whether this should be done, or whether Mitchell is right and this methodology is bad to begin with.) But he did state that the IPCC is planning to discuss degrowth more extensively in future reports.
I've known EAs who have been all-consumed by abstract guilt. It has never led them to produce the greatest good for the greatest number. At best it led them to being chronically depressed and unable to do any stable work. At worst it has led to highly net-negative actions like joining a cult.