I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
The size of a typical computer virus is on the order of a few megabytes or less. This makes viruses very easy to share around and download without anybody noticing.
In contrast, the full version of deepseek-R1 takes up 400 gigabytes, which could take several hours to download on a typical household internet connection and would not fit on the storage of a typical laptop. Deepseek is nowhere near the state of the art as far as AI goes, and we can expect future AI models to be orders of magnitude bigger than this.
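To make the "several hours" figure concrete, here is a rough back-of-the-envelope sketch; the 100 Mbit/s household connection speed is my own assumption rather than a measured figure:

```python
# Back-of-the-envelope download times (assumed sizes and bandwidth, not measurements).
VIRUS_SIZE_GB = 0.005      # "a few megabytes"
MODEL_SIZE_GB = 400        # full deepseek-R1 weights, as cited above
BANDWIDTH_MBPS = 100       # assumed household connection, megabits per second

def download_hours(size_gb: float, mbps: float) -> float:
    """Convert a file size in gigabytes and a link speed in megabits/s to hours."""
    gigabits = size_gb * 8
    seconds = gigabits * 1000 / mbps
    return seconds / 3600

print(f"virus: {download_hours(VIRUS_SIZE_GB, BANDWIDTH_MBPS) * 3600:.1f} seconds")
print(f"model: {download_hours(MODEL_SIZE_GB, BANDWIDTH_MBPS):.1f} hours")
```

At those assumed numbers the virus transfers in under a second, while the model takes roughly nine hours and leaves an enormous footprint on disk and on the network.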
Therefore, it is unlikely that future AI systems will be able to hide themselves in any way comparable to computer viruses.
Generally, when people turn away from traditional media, they instead get their information from social media, podcasters, YouTubers, and influencers, all of which have even lower standards for scientific accuracy than traditional media does. This is how anti-vax conspiracy theories spread. I don't think it'll be particularly hard for the meat industry to turn a large segment of these alternative information ecosystems against cultured meat.
There are ample openings for attacks on either side of politics: to the right wing they can claim that cultured meat is an attack on traditional values and culture, while to the left wing they can claim that cultured meat is a monopolistic big-corporation enterprise.
It could be a fun experiment to see how different wordings affect the correlation with karma, for example. Would you get the same result if you asked it to evaluate "logical and empirical rigor"? What if you asked about simpler things, like how "well structured" or "articulate" the articles are? You could maybe get a sense for which aspects of writing are valued on the forum.
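A minimal sketch of what I mean; the prompt wordings are just illustrative, and `rate_post` is a placeholder for however you are actually querying the chatbot:

```python
# Sketch: correlate chatbot ratings under different prompt wordings with post karma.
from scipy.stats import spearmanr

PROMPTS = {
    "epistemic": "Rate the holistic epistemic quality of this post from 1 to 10.",
    "rigor":     "Rate the logical and empirical rigor of this post from 1 to 10.",
    "structure": "Rate how well structured this post is from 1 to 10.",
    "style":     "Rate how articulate this post is from 1 to 10.",
}

def rate_post(prompt: str, text: str) -> float:
    """Placeholder: send `prompt` plus `text` to the chatbot and parse a numeric score."""
    raise NotImplementedError

def prompt_karma_correlations(posts: list[dict]) -> dict[str, float]:
    """posts: [{'text': ..., 'karma': ...}, ...] -> Spearman correlation per prompt wording."""
    karma = [p["karma"] for p in posts]
    results = {}
    for name, prompt in PROMPTS.items():
        scores = [rate_post(prompt, p["text"]) for p in posts]
        results[name] = spearmanr(scores, karma).correlation
    return results
```

If the blander prompts correlate with karma about as strongly as the "epistemic quality" one, that would tell you something about what the rating is really tracking.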
Interesting work! It's fascinating that the "Egregore" analysis essentially likens EA to a religion; it reads like it was written by an EA critic. Maybe it was influenced by the introduction of a mystical term like "egregore", or perhaps external criticism the chatbots have read has seeped in.
I am skeptical of the analysis of "epistemic quality". I don't think chatbots are very good at epistemology, and frankly most humans aren't either. I worry that you're actually measuring other things, like the tone or complexity of the language used. These signifiers would also correlate with forum karma.
I wonder if the question about "holistic epistemic quality" is influencing this: it does not appear to be a term in common use outside this community. Would a more plain-language question give the same results?
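One cheap sanity check, sketched below with feature choices that are purely illustrative, would be to see whether crude surface features of a post already track karma; if they do, the "epistemic quality" score may largely be picking up tone and complexity rather than epistemics:

```python
# Sketch: do crude surface features of a post already correlate with karma?
import re
from scipy.stats import spearmanr

def surface_features(text: str) -> dict[str, float]:
    """A few rough proxies for length and linguistic complexity."""
    words = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

def feature_karma_correlations(posts: list[dict]) -> dict[str, float]:
    """posts: [{'text': ..., 'karma': ...}, ...] -> Spearman correlation of each feature with karma."""
    karma = [p["karma"] for p in posts]
    feats = [surface_features(p["text"]) for p in posts]
    return {name: spearmanr([f[name] for f in feats], karma).correlation
            for name in feats[0]}
```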
Applying extra scrutiny to AI-generated text is entirely rational, and I encourage people to continue doing so. It used to be that if a text was long and structured, you could be assured that the writer had some familiarity with the topic they were writing on, and that they had put some degree of intellectual effort and rigor into the article.
With content written in the AI tone, that is no longer the case: we can't tell whether you put a lot of thought and rigor into the article, or just threw a 10-word prompt into ChatGPT and copy-pasted what it said.
The internet is currently being flooded with AI spam that has zero substance or value, but is superficially well written and structured. It is your responsibility to distinguish yourself from the slop.
I feel that similar reasoning could have been applied to historically successful protest movements in their early stages. The civil rights movement didn't start with the march on Washington, it started small and got bigger, and the participants risked their health in their protests. More recently I think the climate activist movement has achieved an immense amount of good through their tactics.
I don't actually believe that AI x-risk is a serious problem at the moment, so I don't support this particular protest. However, I want to defend the principle that protesting is legitimate: I want people who think there is a serious danger to be willing to actively protest that danger, not to wait around passively for the media to give them permission to do so.
I would say the main people "shaping AGI" are the people actually building models at frontier AI companies. It doesn't matter how aligned "AI safety" people are if they don't have a significant say on how AI gets built.
I would not say that "almost all" of the people at top AI companies exemplify EA-style values. The most influential person in AI is Sam Altman, who publicly split with EA after EA-aligned board members tried to fire him for being a serial liar.