"So, for x-risk to be high, many people (e.g. lab employees, politicians, advisors) have to catastrophically fail at pursuing their own self-interest."
I don't think this obviously follows.
Firstly, because the effect of refraining from unsafe AI work yourself is seldom that no one else does it; it's more of a tragedy-of-the-commons situation, right? Especially if there is one leading lab that is irrationally optimistic about safety, which doesn't seem to require an especially low view of human rationality in general.
Secondly, someone like Musk might have a value system where they care a lot about personally capturing the upside of being first to reach a superintelligence aligned to them personally, and then they might do dangerous things for the same reason that a risk-neutral person would take a 90% chance of instant death and a 10% chance of living to be 10 million years old over the status quo.
"On the other hand, it's difficult to take seriously the idea that secular intellectuals who find the Singularity and some of its loudest advocates a bit silly and some of the related ideas pushed a bit sus are covertly defending a particular side of a centuries old debate in Christian theology"
I think the implicit claim is more: "they have absorbed a lot of contestable ideas from a particular intellectual tradition that began with certain parts of Christian theology [and remember, theology was politics in early modern Europe to quite a high degree]; these ideas now just seem like common sense to them, but are in fact highly contestable, and now seem to them to define what being 'left-wing' is, but have in fact in many cases been associated with the right for hundreds of years." But I agree it probably functions better as a bit of sophisticated trolling, interesting as the historical claims are.
"it really seemed to me like there was a relationship between the rise of Calvinism (specifically Dutch Calvinism) and various proto-TESCREAL concepts like capitalism"
This is one of the most famous (though contested) claims in the entire history of sociology/intellectual history/economic history: https://en.wikipedia.org/wiki/The_Protestant_Ethic_and_the_Spirit_of_Capitalism
"But UNRWA doesn't seem like a high integrity organization, and I seriously doubt donating to them is the best way to help the people of Gaza. "
Almost none of the things you cite are relevant to whether allowing UNRWA access is particularly likely to reduce the hunger currently in Gaza, relative to access for other aid agencies, which seems a very big part of what determines whether it is "the best way". I actually don't think donations to UNRWA will help, because there is no chance in hell of Israel letting them in, and it would be better to try to get Israel to let in MSF or some other aid agency instead, but that is a separate point.
I guess you could hold that UNRWA are genuinely a major factor in keeping the conflict going, and that this means marginal further funding for them does non-negligible harm, but I think that is extremely implausible: Hamas would exist with or without UNRWA, and presumably whoever the major providers of schooling in Gaza are, they will teach in a way roughly compatible with Hamas's demands and current Palestinian public opinion. I expect the marginal impact on the conflict of a donation to UNRWA, or of UNRWA getting access to Gaza to feed people for a few days, to be zero via any mechanism other than one that runs directly through the effects of more Gazans being fed by literally any organization.
Out of interest, do you think Israel should do more to let in other aid organizations, like, say, MSF, than they are currently doing?
I think it's a bit misleading to say EA philosophy "lacks rigor", because it could be taken to imply that it falls below some known disciplinary standard of reasoning/evidence that at least some other philosophy reaches. I don't think that is even close to being true. EA philosophy to me means mostly "Bostrom, Ord and MacAskill's academic papers, and stuff that came out of the Global Priorities Institute". And that stuff has been published in very good journals over and over again. Even MIRI's unorthodox ideas about decision theory have been written up and published in a very good philosophy journal! EA philosophy is about as academically mainstream as philosophy gets. It's true that a large majority of academic philosophers disagree with at least some of it, but that is also true of any comparable rival body of philosophical work.
I think you could have strengthened your argument here further by noting that even in Dario's op-ed opposing the ban on state-level regulation of AI, he specifically says that regulation should be "narrowly focused on transparency and not overly prescriptive or burdensome". That seems to indicate opposition to virtually any regulation that would directly require doing anything at all to make models themselves safer. It's demanding that regulations be more minimal than even the watered-down version of SB 1047 that Anthropic publicly claimed to support.