“All that was great in the past was ridiculed, condemned, combated, suppressed — only to emerge all the more powerfully, all the more triumphantly from the struggle.”
―Nikola Tesla
“They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”
―Carl Sagan
There is a modest tension between showing appropriate respect for the expert consensus on a topic (or the mainstream view, or majority opinion, or culturally evolved social norms, or whatever) and wanting to have creative and innovative thoughts about that topic. It's possible to be too deferential to expert consensus (etc.), but it's also possible to be too iconoclastic. Where people in effective altruism err, they tend to err in being too iconoclastic.
The stock market is a good analogy. The market is wrong all the time, in the sense that, for example, companies rocket to large valuations and then come crashing down, not because the fundamentals changed, particularly, but because the market comes to think that the company was a bad bet in the first place. (The work of equity research analysts and investigative reporters with non-consensus/counter-consensus hunches is valuable here.) So, the market is wrong all the time, but beating the market is famously hard. The question is not whether the market often makes mistakes that could be capitalized on (it certainly does), the question is whether you, specifically, can spot the mistakes before the market does, and not make mistakes of your own that outweigh everything else. I think a similar thing is true with ideas in general.
The consensus view is wrong all the time, society is wrong all the time, about all sorts of things. Like the stock market, expert communities and society at large have ways of integrating criticism and new thinking. There is some sort of error correcting process. The question for anyone pursuing an iconoclastic hunch is not whether society or a particular community of experts has shown it is fallible (it has, of course), the question is whether you can correct an error over and above what the error correcting process is already doing, without introducing even more error yourself.
The danger doesn't lie simply in having non-consensus/counter-consensus views — everyone should probably have at least a few — the danger is in having too many about too many things, in being far too confident in them, and in rejecting mainstream institutions and error correcting processes. With the stock market, it takes a lot of research to support one non-consensus/counter-consensus bet. At a certain point, if someone tries to take too many bets, they're going to be stretched too thin and will not be able to do adequate research on each one. I think something similar is true with ideas in general. You need to work to engage with the consensus view properly and make a strong criticism of it. People who reject too many consensuses too flippantly are bound to be wrong about a lot. I think people may mistake the process of research, which takes time and hard work and has perhaps led them to be right once or twice about something most people disagreed with, for having some sort of superpower that allows them to see mistakes everywhere without putting in due diligence.
There's also an important difference between working to improve or contribute to institutions by participating in the error correction processes versus rejecting them and being an outsider or even calling for them to be torn down. I think there's something quietly heroic but typically unglamorous about participating in the error correcting process. The people who have the endurance for it usually seem to be motivated by something deeper than getting to be right or settling scores. It comes from something more selfless and spiritual than that. By contrast, when I see people eagerly take the role of outsider or someone who wants to smash institutions, it often seems like it's based in some kind of fantasy of will to power.
One potential route for people in effective altruism who want to reform the world's ideas is publishing papers in academic journals and participating in academic conferences. I believe it was in reference to ideas about artificial general intelligence (AGI) and AI safety and alignment that the economist Tyler Cowen once gave the advice, "Publish, publish, publish." The philosopher David Thorstad, who writes the blog Reflective Altruism about effective altruism, has also talked about the merits of academic publishing in terms of applying rigour and scrutiny to ideas, in contrast to overreliance on blogs and forums. I do wish we lived in a world where academia was less expensive and more accessible, but that's a complaint about how widely the academic process can be applied, not about how important it is.
Anything that can get a person outside the filter bubble/echo chamber/ideological bunker of effective altruism will surely help. Don't forget the basics of good epistemic practice: talking to people who disagree with you, who think differently than you, and putting yourself in an emotional state where you can really consider what they're saying and potentially change your mind. Unfortunately, the Internet has almost no venues for this sort of thing, because the norm is aggressive clashes and brutal humiliation contests. Real life offers much better opportunities, but I'm worried that too much time spent on the Internet (and really Twitter deserves an outsized share of the blame) is making people more dogmatic, combative, and dismissive of others' opinions in real life, too. I don't find this to be an easy problem myself and I don't have clean, simple advice or solutions. (If you do, let me know!)
A related topic is systemic or structural critiques of institutions, including fields in academia. This is not exactly the same idea as everything I've discussed so far, although it is relevant. I think you can make good points about systemic or structural critiques of many institutions, including government (e.g. is it properly representational, do all votes have equal power?), journalism (e.g. does widespread negativity bias have harmful effects such as making people feel more hopeless or more anxious?), and certain academic fields. I have an undergraduate degree in philosophy, so philosophy is the field I'm most familiar with. I think the practice of philosophy could be improved in some ways, such as leaning toward plain English, showing less deference to or reverence for big names in the history of philosophy (such as Kant, Hegel, and Heidegger), and being more mindful of what questions in philosophy are important and why, rather than getting drawn into "How many angels can dance on the head of a pin?"-type debates.
You can make legitimate critiques of institutions or academic fields, like philosophy, and in some way, the force of those critiques makes those institutions or fields less credible. For example, I take academic philosophy somewhat less seriously than I would if it didn't have the problems I experienced with unclear language, undue deference to or reverence for big names, or distraction by unimportant questions. But if I decide academic philosophy is a temple that needs to be torn down and I try to create my own version of philosophy outside of academia from the ground up, on balance, what I come up with is going to be far worse than the thing I'm trying to replace. Trying to reinvent everything from scratch is a forlorn project.
So, what applies to ideas also applies to institutions that handle ideas. Systemic or structural reform of institutions is good and probably needed in many cases, but standing outside of institutions and trying to create something better is going to fail 99.9%+ of the time. I detect in effective altruism too much of an impulse too often from too many people to want to reinvent everything from scratch. Why not apply effective altruism's deeply seeing Eye of Sauron to corporations, to governments, to nations, to world history, and, in roughly the words said to me by someone at an effective altruist organization many years ago, solve all the world's problems in priority sequence?[1] There is a nobility in getting one to three things right that are non-consensus/counter-consensus and giving that contribution to the world. There is also an ignobility in becoming overconfident from a few wins and, rather than being realistic about the limitations of human beings, spreading yourself far too thin and making too many contrarian bets on too many things, and being surely wrong about most of them.
Getting something right, which most people get wrong, that the world needs to know about takes love, care, attention, and hard work. Doing even one thing like that is an achievement and something to be proud of. That's disciplined iconoclasm and it's beautiful. It moves the world forward. The other thing, the undisciplined thing, is in an important sense the opposite of that and its mortal enemy, although the two are easily confused and conflated. The undisciplined thing is to think because you were contrarian and right once or a few times, or just because, for whatever reason, you feel incredibly confident in your opinions, that everyone else must be wrong about everything all the time. That is actually a tragic perspective from which to inhabit life. It is often a refuge of the wounded. It's an impoverished perspective, mean and meagre.
The heroism of the disciplined iconoclast contrasted with the tragedy of the omni-contrarian evokes for me what the Franciscan mystic Richard Rohr calls the distinction between the soul and the ego, or the true self and the false self. Ironically enough, great deeds are better accomplished by people not concerned with their greatness, and the highest forms of well-being in life, such as love, come from setting aside, at least partially, one's self-interest. Will to power is not the way. The way is love for the thing itself, regardless of external reward or recognition. That kind of selfless love is often unglamorous, sometimes boring, and definitely hard to do, but it's undoubtedly the best thing on Earth.
I think many people like myself once detected something nearly divine in effective altruism's emphasis on sacrificing personal consumption to help the world's poorest people, not for any kind of recognition or external reward, but just to do it. That is an act of basically selfless love. Given that point of comparison, you can understand why many people feel unsatisfied and uneasy with a transition toward discussing what is essentially science fiction at expensive conference venues. Where is the love or the selflessness in that?
I think excessive contrarianism is best understood as ego or as stemming from an ego problem. It's about not accepting one's fundamental human limitations, which everyone has, and it's about being overly attached to winning, being right, gaining status, gaining recognition, and so on, rather than letting those things go as much as humanly possible in favour of focusing on love for the thing itself. I think this is always a tragic story because people are missing the point of what life is about. It's also a tragic story because every investigation of why someone is unable to let go of ego concerns seems to ultimately trace back to someone, typically a child, who deserved love and didn't get it, and found the best way they could to cope with that unbearable suffering.
When I ask myself why some people seem to need effective altruism to be something more than it could possibly realistically be, or why they seem to want to tear down all human intellectual achievement and rebuild it from scratch, I have to wonder if they would feel the same way if they felt they could be loved exactly as they are. Could it be, at bottom, that it's been about love all along?
[1] In the original version of this post, I said it was an employee of the Centre for Effective Altruism. On reflection, it was more likely an employee or volunteer from the Local Effective Altruism Network (LEAN).
(I changed this sentence and added this footnote on November 18, 2025 at 3:10 PM Eastern.)

I do think EA is a bit too critical of academia and peer review. But despite this, most of the 10 most highly published authors in peer-reviewed journals in the global catastrophic risk field have at least some connection with EA.
I think where academic publishing would be most beneficial for increasing the rigour of EA’s thinking would be AGI. That’s the area where Tyler Cowen said people should “publish, publish, publish”, if I’m correctly remembering whatever interview or podcast he said that on.
I think academic publishing has been great for the quality of EA’s thinking about existential risk in general. If I imagine a counterfactual scenario where that scholarship never happened and everything was just published on forums and blogs, it seems like it would be much worse by comparison.
Part of what is important about academic publishing is exposure to diverse viewpoints in a setting where the standards for rigour are high. If some effective altruists started a Journal of Effective Altruism and only accepted papers from people with some prior affiliation with the community, then that would probably just be an echo chamber, which would be kind of pointless.
I liked the Essays on Longtermism anthology because it included critics of longtermism as well as proponents. I think that’s an example of academic publishing successfully increasing the quality of discourse on a topic.
When it comes to AGI, I think it would be helpful to see some response to the ideas about AGI you tend to see in EA from AI researchers, cognitive scientists, and philosophers who are not already affiliated with EA or sympathetic to its views on AGI. There is widespread disagreement with EA’s views on AGI from AI researchers, for example. It could be useful to read detailed explanations of why they disagree.
Part of why academic publishing could be helpful here is that it’s a commitment to serious engagement with experts who disagree in a long-form format where you’re held to a high standard, rather than ignoring these disagreements or dismissing them with a meme or with handwavy reasoning or an appeal to the EA community’s opinion — which is what tends to happen on forums and blogs.
EA really exists in a strange bubble on this topic, and its epistemic practices are unacceptably bad, scandalously bad — if it's a letter grade, it's an F in bright red ink. People in EA could really improve their reasoning in this area by engaging with experts who disagree, not with the intent to dismiss or humiliate them, but to actually try to understand why they think what they do and to seriously consider whether they're right. (Examples of scandalously bad epistemic practices include many people in EA apparently never once hearing that an opposing point of view on LLMs scaling to AGI even exists, despite it being the majority view among AI experts, let alone understanding the reasons behind that view; some people in EA openly mocking people who disagree with them, including world-class AI experts; and, in at least one instance, someone with a prominent role responding to an essay on AI safety/alignment that expressed an opposing opinion without reading it, just based on guessing what it might have said. These are the sort of easily avoidable mistakes that predictably lead to poorly informed and poorly thought-out opinions, which, of course, are more likely to be wrong as a result. Obviously these are worrying signs for the state of the discourse, so what's going on here?)
Only weird masochists who dubiously prioritize their time will come onto forums and blogs to argue with people in EA about AGI. The only real place where different ideas clash online — Twitter — is completely useless for serious discourse, and, in fact, much worse than useless, since it always seems to end up causing polarization, people digging in on opinions, crude oversimplification, and in-group/out-group thinking. Humiliation contests and personal insults are the norm on Twitter, which means people are forming their opinions not based on considering the reasons for holding those opinions, but based on needing to "win". Obviously that's not how good thinking gets done.
Academic publishing — or, failing that, something that tries to approximate it in terms of the long-form format, the formality, the high standards for quality and rigour, the qualifications required to participate, and the norms of civility and respect — seems the best path forward to get that F up to a passing grade.
AGI is a subset of global catastrophic risks, so EA-associated people have published extensively on AGI in academic venues. I personally have about 10 publications related to AI.
I agree that those links are examples of poor epistemics. But as for the example of not being aware that the current paradigm may not scale to AGI: this is commonly discussed in EA, such as here and by Carl Shulman (I think here or here). I would be interested in your overall letter grades for epistemics. My quick take would be:
Ideal: A+
LessWrong: A
EA Forum: A- (not rigorously referenced, but overall better calibrated to reality and what is most important than academia, more open to updating)
Academia: A- (rigorously referenced, but a bias towards being precisely wrong rather than approximately correct, which actually is related to the rigorously referenced part. Also a big bias towards conventional topics.)
In-person dialog outside these spaces: C
Online dialog outside these spaces: D
I'm making a somewhat different point here. Publications on existential risk from AI generally just make some assumptions about AGI with some probability, maybe deferring to some survey or forecast. What I meant was academic publishing about the object-level, technical questions around AGI. For example, what the potential obstacles are to LLMs scaling to AGI. Things like that.
That's interesting. I really don't get the impression that this concept is commonly discussed in EA or something people are widely aware of — at least not beyond a surface level. I searched for "paradigm" in the Daniel Kokotajlo interview and was able to find it. This is actually one of the only discussions of this question I've seen in EA beyond a surface gloss. So, thank you for that. I do think Daniel Kokotajlo's arguments are incredibly hand-wavy though. To give my opinionated, biased summary:
I'd appreciate a pointer on what to look for in the Carl Shulman interviews, if you can remember a search term that might work. I searched for "paradigm" and "deep learning" and didn't turn up anything.
This is a fun game!
Ideal: A+
LessWrong: F, expelled from school, hopefully the parents find a good therapist (counselling for the whole family is recommended)[1]
EA Forum: maybe a B- overall, C+ if I'm feeling testy, encompassing a wide spectrum from F to A+, overall story is quite mixed, hard to give an average (there are many serious flaws, including quite frequently circling the wagons around bad ideas or to shut down entirely legitimate and correct criticism, disagreement, or the pointing out of factual errors)
Academia: extremely variable from field to field, journal to journal, and institution to institution, so hard to give a single letter grade that encompasses the whole diversity and complexity of academia worldwide, but, per titotal's point, given that academia encompasses essentially all human scientific achievement, from the Standard Model of particle physics to the modern synthesis in evolutionary biology to the development of cognitive behavioural therapy in psychology, it's hard to say it could be anything other than an A+
In-person dialogue outside these spaces: extremely variable, depends who you're talking to, so I don't know how to give a letter grade since, in theory, this includes literally everyone in the entire world; I strive to meet and know people who I can have great conversations with, but a random person off the street, who knows (my favourite people I've ever talked to, A+, my least favourite people I've ever talked to, F)
Online dialogue outside these spaces: quite terrible in general, if you're thinking of platforms like Twitter, Reddit, Bluesky, Instagram, TikTok, and so on, so, yeah, probably a D for those places, but YouTube stands out as a shining star — not necessarily on average or in aggregate, since YouTube is so vast that it feels inestimable — but the best of YouTube is incredibly good, including the ex-philosopher ContraPoints, the wonderful science communicator Hank Green, the author and former graduate student in film studies Lindsay Ellis, and at least a few high-quality video podcasts (which often feature academic guests) and academic channels that upload lectures and panels, who I'm proud to give an A+ and a certificate for good attendance
LessWrong is not only an abyss of irrationality and delusion, it's also quite morally evil. The world would be much better off — and most of all, its impressionable victims — if it stopped existing and everyone involved found something better to do, like LARPing or writing sci-fi.
Daniel said "I would say that there’s like maybe a 30% or 40% chance that something like this is true, and that the current paradigm basically peters out over the next few years."
It might have been Carl on the Dwarkesh podcast, but I couldn't easily find a transcript. But I've heard from several others (maybe Paul Christiano?) that they put a 10-40% chance on AGI taking much longer (or even being impossible), either because the current paradigm doesn't get us there, or because we can't keep scaling compute exponentially as fast as we have in the last decade once it becomes a significant fraction of GDP.
Yes, Daniel Kokotajlo did say that, but then he also said if that happens, all the problems will be solved fairly quickly anyway (within 5-10 years), so AGI will only be delayed from maybe 2030 to 2035, or something like that.
Overall, I find his approach to this question to be quite dismissive of possibilities or scenarios other than near-term AGI and overzealous in his belief that either scaling or sheer financial investment (or utterly implausible scenarios about AI automating AI research) will assuredly solve all roadblocks on the way to AGI in very short order. This is not really a scientific approach, but just hand-waving conceptual arguments and overconfident gut intuition.
So, because he doesn't really think the consequences of even fundamental problems with the current AI paradigm could end up being particularly significant, I give Kokotajlo credit for thinking about this idea in the first place (which is like saying I give a proponent of the COVID lab leak hypothesis credit for thinking about the idea that the virus could have originated naturally), but I don't give him credit for a particularly good or wise consideration of this issue.
I'd be very interested in seeing the discussions of these topics from Carl Shulman and/or Paul Christiano you are remembering. I am curious to know how deeply they reckon with this uncertainty. Do they mostly dismiss it and hand-wave it away like Kokotajlo? Or do they take it seriously?
In the latter case, it could be helpful for me because I'd have someone else to cite when I'm making the argument that these fundamental, paradigm-level considerations around AI need to be taken seriously when trying to forecast AGI.
Here are some probability distributions from a couple of them.
Thanks. Do they actually give probability distributions for deep learning being the wrong paradigm for AGI, or anything similar to that?
It looks like Ege Erdil said 50% for that question, or something close to that question.
Ajeya Cotra said much less than 50%, but she didn't say how much less.
I didn't see Daniel Kokotajlo give a number in that post, but then we have the 30-40% number he gave above, on the 80,000 Hours Podcast.
The probability distributions shown in the graphs at the top of the post are only an indirect proxy for that question. For example, despite Kokotajlo's percentage being 30-40%, he still thinks that will most likely only slow down AGI by 5-10 years.
I'm just looking at the post very briefly and not reading the whole thing, so I might have missed the key parts you're referring to.
I think you're confusing "hard work" with the attainment of wisdom.
Take a look at the history of philosophy and you'll find plenty of hard work in medieval scholasticism, or in Marxist dialectical materialism. Heidegger was one of the greatest philosophers... and he was a Nazi and organized book burnings. Sartre supported Stalinism. They were true scholars who worked very hard, with extraordinary intellectual capacity and commendable academic careers.
Wisdom is something else entirely. It stems from an unbiased perspective and risks breaking with paradigms. "Effective Altruism" might be close to this. For the first time, there's a movement for social change centered on a behavioral trait, detached from old traditions and political constraints.
This is a very strange critique. The claim that research takes hard work does not logically imply a claim that hard work is all you need for research. In other words, to say hard work is necessary for research (or for good research) does not imply it is sufficient. I certainly would never say that it is sufficient, although it is necessary.
Indeed, I explicitly discuss other considerations in this post, such as the "rigour and scrutiny" of the academic process and what I see as "the basics of good epistemic practice", e.g. open-minded discussion with people who disagree with you. I talk about specific problems I see in academic philosophy research that have nothing to do with whether people are working hard enough or not. I also discuss how, from my point of view, ego concerns can get in the way, and love for research itself — and maybe I should have added curiosity — seems to be behind most great research. But, in any case, this post is not intended to give an exhaustive, rigorous account of what constitutes good research.
If picking examples of academic philosophers who did bad research or came to bad conclusions is intended to discredit the whole academic enterprise, I discussed that form of argument at length in the post and gave my response to it. (Incidentally, some members of the Bay Area rationalist community might see Heidegger's participation in the Nazi Party and his involvement in book burnings as evidence that he was a good decoupler, although I would disagree with that as strongly as I could ever disagree about anything.)
I think accounting for bias is an important part of thinking and research, but I see no evidence that effective altruism is any better at being unbiased than anyone else. Indeed, I see many troubling signs of bias in effective altruist discourse, such as disproportionately valuing the opinion of other effective altruists and not doing much to engage seriously and substantively with the opinions of experts who are not affiliated with effective altruism.
I think effective altruism is as much attached to intellectual tradition and as much constrained by political considerations as pretty much anything else. No one can transcend the world with an act of will. We are all a part of history and culture.
Though I don't claim that EAs are without bias, I think there's lots of evidence that they have less bias than non-EAs. For instance, most EAs genuinely ask for feedback, many times specifically asking for critical feedback, and typically update their opinions when they think it is justified. Compare the EA Forum with typical chat spaces. EAs also more often admit mistakes, such as on many EA-aligned organization websites. When making a case for funding something, EAs often will include reasons for not funding something. EAs often include credences, which I think helps clarify things and promotes more productive feedback. EAs tend to have a higher standard of transparency. EAs take counterfactuals seriously, rather than just trying to claim the highest number for impact.
These are all good points and genuine examples of virtuous behaviours often exemplified by people in EA, but we've got to ask what we're comparing to. Typical chat spaces, like Reddit? Good grief, of course the EA Forum is better than that! But the specific point of comparison I was responding to was academia.
You said:
So that's why I compared to non-EAs. But OK, let's compare to academia. As you pointed out, there are many different parts of academia. I have been a graduate student or professor at five institutions, but only in two countries and only one field (engineering, though I have published some outside of engineering). As I said in the other comment, academia is much more rigorously referenced than the EA Forum, but the disadvantage of this is that academia pushes you to be precisely wrong rather than approximately correct. P-hacking is a big deal in social sciences academia, but not really in engineering, and I think EAs are more aware of the issues than the average academic. Of course there's a lot of diversity within EA and on the EA Forum.

Perhaps one comparison that could be made is the accuracy of domain experts versus superforecasters (many of whom are EAs, and it's similar to the common EA condition of a better-calibrated generalist). I've heard people argue both sides, but I would say they are of similar accuracy in most domains. I think that EA is quicker to update, one example being taking COVID seriously in January and February 2020 (at least in my experience, much more seriously than academia was taking it). As for the XPT, I'm not sure how to characterize it, because I would guess that a higher percentage of the GCR experts were EAs than of the superforecasters. At least in AI, the experts had a better track record of predicting faster AI progress, which was generally the position of EAs.

As for getting to the truth in new areas, academia has some reputation for discussing strange new ideas, but I think EA is significantly more open to them. It has certainly been my experience publishing in AI and other GCR areas (and other people's experience publishing in AI) that it's very hard to find a journal fit for strange new ideas (versus publishing in energy, for example, on which I've published dozens of papers as well). I think this is an extremely important part of epistemics. I think the best combination is subject matter expertise combined with techniques to reduce bias, and I think you get that combination with subject matter experts in EA. And it's true that many other EAs then defer to those EAs who have expertise in the particular field (maybe you disagree with what counts as subject matter expertise and association with EA/using techniques to reduce bias, but I would count people like Yudkowsky, Christiano, Shulman, Bostrom, Cotra, Kokotajlo, and Hanson (though he has long timelines)). So I guess I would expect that on the question of AI timelines, the average EA would be more accurate than the average academic in AI.
Yes, I said "anyone else", but that was in the context of discussing academic research. But if we were to think more broadly and adjust for demographic variables like level of education (or years of education) and so on, as well as maybe a few additional variables like how interested someone is in science or economics, I don't really believe that people in effective altruism would do particularly better in terms of reducing their own bias.
I don't think people in EA are, in general or across the board, particularly good at reducing their bias. If you do something like bring up a clear methodological flaw in a survey question, there is a tendency of some people to circle the wagons and try to deflect or downplay criticism rather than simply acknowledge the mistake and try to correct it.
I think some people (not all and not necessarily most) in EA sometimes (not all the time and not necessarily most of the time) criticize others for perceived psychological bias or poor epistemic practices and act intellectually superior, but then make these sorts of mistakes (or worse ones) themselves, and there's often a lack of self-reflection or a resistance to criticism, disagreement, and scrutiny.
I worry that perceiving oneself as intellectually superior can lead to self-licensing, that is, people think of themselves as more brilliant and unbiased than everyone else, so they are overconfident in their views and overly dismissive of legitimate criticism and disagreement. They are also less likely to examine themselves for psychological bias and poor epistemic practices.
But what I just said about self-licensing is just a hunch. I worry that it's true. I don't know whether it's true or not.
I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don't know why you would think that.
I'm not sure exactly what you were referencing Eliezer Yudkowsky as an example of — someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices, such as:
Maybe aware is not the right word now. But I do think that EAs were quicker to update on the replication crisis being a big problem. I think this is somewhat understandable, as the academics have strong incentives to get a statistically significant result to publish papers, and they also have more faith in the peer review process. Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh. However, I do think he is much more aware of biases than the average academic, and has, at least historically, updated his opinions a lot, such as early on realizing that AI might not be all positive (and recognizing he was wrong).
Thank you for clarifying your views and getting into the weeds.
I don't know how you would go about proving that to someone who (like me) is skeptical.
The sort of problems with Yudkowsky's epistemic practices that I'm referring to have existed for much longer than the last few years. Here's an example from 2017. Another significant example from around 2015-2017 is that he quietly changed his view from being skeptical of deep learning as a path to AGI, still leaning toward symbolic AI or GOFAI, to being all-in on deep learning, but never publicly explained why.[1] This couldn't be more central to his life's work, so that's very odd.
This blog post from 2015 criticizes some of the irrationalities in Yudkowsky's Sequences, which were written in 2006-2009.
If you go back to Yudkowsky's even earlier writings from the late 1990s and early 2000s, some of the very same problems are there.
So, really, these are problems that go back at least 7 years or so and arguably much longer than that, even as long as about 25 years.
Around 2015-2017, I talked to Yudkowsky about this in a Facebook group about AI x-risk, which is part of why I remember it so vividly.
EA-associated news sources are another example of how EA is less biased. Improve the News is explicitly about separating fact from opinion, and Future Perfect and Sentinel news focus on more important issues, e.g. malaria and nuclear war, rather than plane crashes.
I am not familiar with the other two things you mentioned, but I'm very familiar with Future Perfect and overall I like it a lot. I think it was a good idea for Vox to start that.
But Future Perfect is a small subset of what Vox does overall, and what Vox does — mainly explainer journalism, which is important, and which Vox does well — is just one part of news overall.
Future Perfect is great, but it's also kind of just publishing articles about effective altruism on a news site — not that there's anything wrong with that — rather than an improvement on the news overall.
If you put the Centre for Effective Altruism or the attendees of the Meta Coordination Forum or the 30 people with the most karma on the EA Forum in charge of running the New York Times, it would be an utter disaster. Absolute devastation, a wreck, an institution in ruins.
Of course, I agree that EA contains extravagant, Byzantine, and biased approaches, influenced by all sorts of traditions. But there is one approach that is original, unique, and that opens a window for social change. In the world of conventional academic knowledge, there is nothing but highly intelligent people trying to build successful careers.
The critique of "undisciplined iconoclasm" is welcome. There is never enough improvement when there is so much to gain.
And "love" is a real phenomenon, a part of human behavior, that deserves analysis and understanding. It is not ornamental, nor a vague idealistic reference, nor a "reductio ad absurdum".
This is a pretty weird thing to say. You understand that "academic knowledge" encompasses basically all of science, right? I know plenty of academics, and I can't think of anyone I know IRL who is not committed to truthseeking, often with significantly more rigor than is found in effective altruism.
Obviously, I was not referring to the empirical sciences, but, as is clear from the context, to the social sciences, which have a certain capacity to influence moral culture.
You have the impression that the work of academic professionals is rigorously focused on the truth. I think that there are some self-evident truths about social progress that are not currently being addressed in academia.
I don't think that EA is a complete ideology today, but its foundation is based on a great novelty: conceiving social change from a trait of human behavior (altruism).
I'm not sure I'm able to follow anything you're trying to say. I find your comments quite confusing.
I don't agree with your opinion that academia is nothing but careerism and, presumably, that effective altruism is something more than that. I would say effective altruism and academia are roughly equally careerist and roughly equally idealistic. I also don't agree that effective altruism is more epistemically virtuous than academia, or more capable of promoting social change, or anything like that.