
When is it acceptable to cite an LLM?

I couldn't find any discussion or consensus about this, so I'll ask. 

I am surprised to find seemingly well-thought-out articles openly citing LLMs as research. By contrast, no one (who knows how to use Wikipedia) would cite Wikipedia: you are supposed to go to its sources. That's why [citation needed] is a thing: you don't want to build on unsubstantiated BS. So I find it hard to believe that people would cite infamously hallucination/BS-prone LLMs instead, expecting reasoning built on that to be taken seriously.

If the author did check for actual sources, they would surely cite those directly instead of citing the LLM, right? E.g., if Claude says X and cites source Y, then either Y is a valid source, in which case you should just cite Y and Claude's X adds nothing; or Y is not valid, in which case X is unsupported anyway. Either way, citing Claude for X is useless.

And if the author couldn't be bothered to check this, why should anyone care for their reasoning? 

I find it particularly surprising in a community worried about AI. Are authors really swallowing whatever an LLM says whole, and thus acting as its conduit, without checking whether it's manipulating them?

EDIT: Citaception: it turns out that LLMs also don't know how to use Wikipedia, even in "research" mode: they are happy to quote and build on any claim X they find there, even when the citations in that very Wikipedia article refute X.

I think you should get the LLM to give you the citation and then cite that (ideally after checking it yourself).

I don't cite LLMs for objective facts.

In casual situations I think it's basically okay to cite an LLM if you have a good sense of what sorts of facts LLMs are unlikely to hallucinate, namely, well-established facts that are easy to find online (because they appear a lot in the training data). But for those sorts of facts, you can turn on LLM web search, let it find a reliable source, and then cite that source instead.
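For example, here's a minimal sketch of that "cite the source, not the model" workflow, assuming the Anthropic Python SDK and its server-side web search tool; the tool type string, model name, and citation fields are assumptions from memory of the docs, so verify them before relying on this:

```python
# Minimal sketch: ask the model with web search enabled, then extract the
# URLs it cited so a human can verify them and cite the sources directly.
# Assumptions: the "web_search_20250305" tool type, the model name, and
# the citation fields are taken from memory of the Anthropic docs, not
# from the post above; check the current documentation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed web-search-capable model
    max_tokens=1024,
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    messages=[{
        "role": "user",
        "content": "When was the CN Tower completed? Cite your sources.",
    }],
)

# Text blocks carry citations pointing at the pages the search found.
# Those URLs, once a human has actually opened and checked them, are
# what belongs in a citation, not the model's answer itself.
for block in response.content:
    if block.type == "text" and getattr(block, "citations", None):
        for citation in block.citations:
            print(citation.url, "-", citation.title)
```

The point being: the model's answer is only a lead; the printed URLs are the citable artifacts, and only after you've checked them yourself.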

I think it's okay to cite LLMs for things along the lines of "I asked Claude for a list of fun things to do in Toronto and here's what it came up with".

If an Anthropic data scientist in a high-profile legal case can be hoodwinked by bad citations, I don't think it's realistic to expect that anyone can have a "good sense of what sorts of facts LLMs are unlikely to hallucinate".

And I thought we had all heard about lists of fun things to do that were full of non-existent restaurants on the way to non-existent towns?
