This is a special post for quick takes by Midtermist12. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

My recent post about optimizing personal impact got significant engagement—but the most upvoted comments focused on detecting AI involvement rather than engaging with the framework itself.

I used AI as a thinking partner: developing ideas, structuring arguments, drafting, revising. At every step, I was reading, evaluating, choosing what to keep or cut. The AI helped me articulate insights I wouldn't have had time to polish otherwise. With a full-time job, the alternative was no post at all.

What frustrates me isn't just the response—it's the pattern. Suspecting AI seems to give people permission to demand comprehensive counterarguments and critique tone in ways they wouldn't otherwise. My post was meant to provoke thought about an underdiscussed topic. It got scrutinized like an academic paper—but only because it "felt AI-ish." This isolated demand for rigor doesn't happen to similarly provocative posts without that AI flavor.

More concerning: How many people are watching this and thinking "I have insights to share, but I only have an hour, not a weekend. Better not post at all"?

The person with deep domain expertise who needs help structuring their argument. The non-native English speaker with brilliant insights. The parent with 30 minutes to contribute something valuable. They're watching this and learning: unless you can craft perfect prose yourself, you'll face heightened scrutiny for using tools to help.

AI is a powerful tool for developing and expressing ideas—like having an editor who helps you think through arguments and find clearer ways to express them. We should engage with ideas based on their merit, not their production method. Otherwise we're gatekeeping who gets to contribute.

(This quick take was drafted with AI assistance—I used it to help structure my thoughts, refine arguments, and improve clarity. At every step, I was evaluating and directing the output. The ideas, frustrations, and positions are entirely mine. I'm unapologetic about using tools that help me contribute to important discussions within my time constraints.)

I have a lot of sympathy towards being frustrated at knee-jerk bias against AI usage. I was recently banned from r/philosophy on a first offense because I linked a post that contained an AI-generated image and a (clearly labelled) AI summary of someone else's argument[1]. (I saw that the subreddit had rules against AI usage, but I foolishly assumed they applied only to posts in the subreddit itself.) I think their choice to ban me was wrong, and it deprived them of valuable philosophical arguments that I was able to make[2] in other subreddits like r/PhilosophyOfScience. So I totally get where you're coming from with the frustration.

And I agree that AI, like typewriters, computers, calculators, and other tools, can be epistemically beneficial by allowing people who otherwise wouldn't have the time to develop and make their arguments.

Nonetheless I think you're wrong in some important ways.

Firstly, I think you're wrong to believe that perceived AI involvement should only make us skeptical about whether to engage with some writing at all, and that it is "pure prejudice" to apply a higher bar to the writing after reading it, conditional on whether it's AI. This is an extremely non-obvious claim, and I currently think it's mistaken.

To illustrate this point, consider two other reasons I might apply greater scrutiny to some content I see:

  1. An entire essay is written in Comic Sans
  2. I learned that a paper's written by Francesca Gino

If an essay is written in Comic Sans (a font often adopted by unserious people), we might initially suspect that the essay's not very serious, but after reading it, we should withdraw any adverse inferences we make about the essay simply due to font. This is because we believe (by stipulation) that an essay's font can tell us whether an essay is worth reading, but cannot provide additional data after reading the essay. In Pearlian terms, reading the essay "screens off" any information we gain from an essay's font.

I think this is not true for learning that a paper is written by Francesca Gino. Since Francesca Gino is a known data fraudster, even after reading a paper of hers carefully, or at least with the same level of care I usually apply to reading psychology papers, I should continue to be more skeptical of her findings than I would be after reading the same paper written by a different academic. I think this is purely rational, rather than an ad hominem argument, or "pure prejudice" as you so eloquently put it.
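To make the "screens off" distinction concrete, here is a rough sketch in conditional-probability notation (my shorthand: E is the essay's content, Q is the hypothesis that it's sound):

```latex
% Comic Sans case: once the content E has been read, the font F adds no information.
P(Q \mid E, F = \text{Comic Sans}) = P(Q \mid E)

% Gino case: authorship A is not screened off by the content,
% since fabricated data can read exactly like real data.
P(Q \mid E, A = \text{Gino}) < P(Q \mid E)
```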

Now, is learning whether an essay is written (or cowritten) by AI a signal more akin to learning that an essay is written in Comic Sans, or closer to learning that it's written by Francesca Gino? Reasonable people can disagree here, but at the very least the answer's extremely non-obvious, and you haven't actually substantiated why you believe it's the former, when there are indeed good reasons to believe it's the latter.

In brief: 

  1. AI hallucination -- while AIs may intentionally lie less often than Harvard business professors, they still hallucinate at a higher rate than I'm comfortable seeing on the EA Forum.
  2. AI persuasiveness -- for the same facts and levels of evidence, AIs might be more persuasive than most human writers. To the extent this additional persuasiveness is not correlated with truth, we should update negatively accordingly upon seeing arguments presented by AIs.
  3. Authority and cognition -- If I see an intelligent and well-meaning person present an argument with some probably-fillable holes, which they allude to but do not directly address in the writing, I might be inclined to give them the benefit of the doubt and assume they've considered the issue and decided it wasn't worth going into in a short speech or essay. However, this inference is much more likely to go wrong if an essay is written with AI assistance. I alluded to this point in my comment on your other top-level post, but I'll mention it again here.
    1. I think it's very plausible, for example, that if you took the time to write out/type out your comment here yourself, you'd have been able to recognize my critique for yourself, and it wouldn't have been necessary for me to dive into it.
  1. ^ I still defend this practice. I think the alternative of summarizing other people's arguments in your own words has various tradeoffs, but a big one is that you are injecting your own biases into the summary before you even start critiquing it.

  2. ^ Richard Chappell was also banned temporarily, and has a more eloquent defense. Unlike me, he's an academic philosopher (TM).

In the case of the author with the history of fraud, you are applying prejudice, albeit perhaps appropriately so. 

 

You raise stronger points than I've yet heard on this subject, though I still think that if you read some content and find it compelling on its merits, there's a strong case for applying at least similar scrutiny regardless of whether there are signs of AI use. Although I still think there is too much knee-jerk sentiment on the matter, you've given me something to think about.

Applying extra scrutiny to AI-generated text is entirely rational, and I encourage people to continue doing so. It used to be that if a text was long and structured, you could be assured that the writer had some familiarity with the topic they were writing on, and that they had put some degree of intellectual effort and rigor into the article.

With content written in the AI tone, that is no longer the case: we can't tell if you put lots of thought and rigour into the article, or if you just threw a 10-word prompt into ChatGPT and copy-pasted what it said.

The internet is currently being flooded with AI spam that has zero substance or value, but is superficially well written and structured. It is your responsibility to distinguish yourself from the slop. 

I agree that using AI flavor as a means to determine whether content is worth further consideration makes sense. If AI flavor is a strong proxy for garbage, then it makes sense to consider stopping reading after you detect AI flavor.

What does not make sense is, after you have decided to read and evaluate the content, to designate it inferior simply because of the process that generated it. This is essentially pure prejudice.

An example would be: I evaluated X content and determined Y deficiencies. Because I believe the content was entirely human-generated, I assess these deficiencies as relatively minor. Alternatively, I evaluated A content and determined B deficiencies which are substantially similar to Y, but because I suspect that the content was AI generated, I place greater weight on said deficiencies.

I empathise but strongly disagree. AI has lowered the costs of making superficially plausible but bad content. The internet is full of things that are not worth reading and people need to prioritise.

Human-written text has various cues, which people are practiced at identifying, that indicate bad writing, and these can often be detected quickly, e.g. seeming locally incoherent, bad spelling, bad flow, etc. These are obviously not perfect heuristics, but they convey real signal. AI has made it much easier to avoid all these basic heuristics, without making it much easier to have good content and ideas. Therefore, if AI wrote a text, it will be more costly to identify bad quality than if a human wrote it: AI text often looks good at first glance but is BS when you look into it deeply.

People are rationally responding to the information environment they find themselves in. If cheap tests are less effective conditional on the text being AI-written, then you should be more willing to judge it, or ditch it entirely, if you conclude it was AI-written. Having higher standards of rigour is just rational.

See my response to titotal. 

Identifying signs of AI and using this as a reason not to spend further time assessing is rational for the reasons you and titotal state. But such identification should not affect one's evaluation of content (allocating karma, upvoting, or more extremely, taking moderation actions) except insofar as it otherwise actually lowers the quality of the content.

If AI as a source affects your evaluation process (in assessing the content itself, not in deciding whether to spend time on it), this is essentially pure prejudice. It's similar to the difference between cops incorporating crime statistics in choosing whether to investigate a young black male for homicide and a judge deciding to lower the standard of proof on that basis. Prejudice in the ultimate evaluation process is simply unjust and erodes the epistemic commons.

Innocent until proven guilty is a fine principle for the legal system, but I do not think it is obviously reasonable to apply it to evaluating content made by strangers on the internet. It is not robust to people quickly and cheaply generating new identities, and new questionably true content. Further, the whole point of the principle is that it's really bad to unjustly convict people, along with other factors like wanting to be robust to governments persecuting civilians. Incorrectly dismissing a decent post is really not that bad.

Feel free to call discriminating against AI content prejudice if you want, but I think this is a rational and reasonable form of prejudice, and I disagree with the moral analogy you're trying to draw by using that word and example.

I've made a pretty clear distinction here that you seem to be eliding: 

  1. Identifying AI content and deciding on that basis it's not worth your time
  2. Identifying AI content and judging that content differently simply because it is AI generated (where that judgment has consequences) 

The first is a reasonable way to protect your time based on a reliable proxy for quality. The second is unfair and poisoning of the epistemic commons. 

I'm drafting new rules and norms around AI usage at the moment. It's especially difficult because of this critique: AI can genuinely help people express ideas when they otherwise wouldn't have the time.

However, there is an effect where clearly AI-generated text causes me (and other readers) to stop reading, because the majority of AI-generated text on the internet is low quality, overlong, contains too few ideas, etc.

You can get around this by removing clear signs of AI writing (for example, condense this page and put it in your system prompt), or rewriting the AI's writing in your own words (when I write the EA Newsletter, I often write a bad draft, get AI to rewrite it, and then rewrite it again myself, using some good elements from the AI version). 
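As a rough sketch of what the system-prompt route can look like in practice (the style guidelines below are hypothetical placeholders, and the OpenAI Python SDK is just one example of a client you might use):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical, condensed style guidelines standing in for a fuller
# "signs of AI writing" page pasted into the system prompt.
STYLE_GUIDELINES = """
Write plainly and concisely. Avoid filler phrases, generic hedging,
and bullet-point padding. Prefer short sentences and concrete examples.
"""

draft = "...your rough bullet points or bad first draft go here..."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": STYLE_GUIDELINES},
        {"role": "user", "content": f"Rewrite this draft, keeping my voice:\n\n{draft}"},
    ],
)

# You would then rewrite the output again in your own words.
print(response.choices[0].message.content)
```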

The bottom line for me is that if a post is good quality and contains valuable ideas, it doesn't matter who (or what) wrote it. But many AI-written posts (especially ones written without custom stylistic prompts) would (currently) be better off as a series of bullet-points written by the prompter, not the AI. 

 

I think the answer is not to have specific rules around AI use designed to get around the understandable prejudice that you and other readers have toward such content, but rather to evaluate the content on its own merits. It is understandable for people to use proxies that have an imperfect causal relationship to quality when deciding where to spend their time, but codifying this prejudice seems quite pernicious.

I am curious whether your assessment (that the post would have been better as a series of bullet points) would apply to my recent post or to this quick take (this response to you is unaided by any AI). I don't know that a custom prompt is needed, particularly when there is significant back-and-forth between the AI (or multiple LLMs, as I did in that post) and the human.

https://forum.effectivealtruism.org/posts/u9WzAcyZkBhgWAew5/your-sacrifice-portfolio-is-probably-terrible

I think this particular quick take would have benefitted from being shorter. For example, just the first two paragraphs get across your main point, plus maybe another sentence to represent the corollary point about chilling effects for other AI users. I don't mean all posts should be bullet points, just that I often see AI-written content which was clearly generated from a few bullet points' worth of info, and would have been better off remaining as such (not sure your post was in that category; it was well received as it was).

I'd always recommend custom prompting your AI: it does a lot to make the tone more sensible, and works especially well for forcing it to be concise.

BTW, the current rough plan is not to ban AI content, and indeed to evaluate it based on its merits. I'm mostly wondering what to do about the middle-ground content which is valuable, but a bit too taxing for the reader purely because it is written by AI.

I appreciate that you have a different judgment call regarding conciseness. When I was reviewing it, I thought there were a number of distinct points that warranted discussion: the initial observation about celebrated comments criticizing AI, the discussion of my process and the counterfactual, the isolated demand for rigor, the chilling effect of such criticism on contributions, an illustration of that chilling effect, and the point that we should evaluate based on quality, not provenance or process.

I am glad that the plan is not to categorically ban AI content, but codifying extra scrutiny into moderation policy (de jure, rather than de facto, disparate treatment) does not make much sense to me.

On second thought, AI significantly reduces the costs for writers, and in the purely human context the costs for the writer are something of a safeguard against the overproduction of bad content (i.e., if the writer wastes the readers' time, he or she is wasting their own). I would still think a light touch would be prudent, given how effective AI can be at helping good ideas and insights proliferate.
