Wow, you've read a lot! My intro text to effective altruism (sort of) was Peter Singer's The Life You Can Save, published in 2009, but it's probably redundant with a lot of the stuff you've already read and know.
If you're interested in reading more about longtermism, the Oxford University Press anthology Essays on Longtermism: Present Action for the Distant Future, published in August, is free to read online, both on the publisher's website and as a PDF. Some of the essays changed my mind, others struck me as having major flaws, and overall I now take a harsh view of longtermism, because scholarship like Essays on Longtermism has failed to turn up much that's interesting or important.
An epistemology/philosophy of science book I love that isn't directly about EA at all, but somehow keeps coming up in discussions in and around EA, is The Beginning of Infinity by the physicist David Deutsch. Deutsch's TED Talk is a quick introduction to the core idea of the book, and his hour-long appearance on the TED Interview podcast gives a fuller preview of the book and of Deutsch's ideas and worldview.
This book is absolutely not part of the EA "canon", nor is it a book that a large percentage of people in EA have read, but I think it's a book that a large percentage of people in EA should read. Deutsch's ideas about inductivism and AGI are the ones that are most clearly, directly relevant to EA.
I won't say that I know Deutsch's ideas are correct — I don't — but I really appreciate his pushback against inductivism and against deep learning as a path to AGI, and I appreciate the creativity and originality of his ideas.
The big asterisk or question mark I would put over Julia Galef's work is that she co-founded the Center for Applied Rationality (CFAR). Galef left CFAR in 2016, so she may not be responsible for the bad stuff that happened there. But the stories about what happened at CFAR, at least around 2017-2019, are really bad. One of the CFAR co-founders described how CFAR employees would deliberately, consciously behave in deceptive, manipulative ways toward their workshop participants in order to advance CFAR's ideas about existential risk from AI. The most stomach-churning thing of all is that CFAR organized a summer camp for kids where, according to one person who was involved, things were even worse than at CFAR itself. I don't know the specifics of what happened at the summer camp, but I hate the idea that kids may have been harmed in some way by CFAR's work.
Galef may not be responsible at all for any of this, but I think it's interesting how much of a failure this whole idea of "rationality training" turned out to be, and how unethically and irrationally the people in key roles in this project behaved.
I think the source you mention is talking about people deceiving themselves.
Idk man, I think this summary is a few shades more alarming than the post you are taking as evidence.