I haven't read more than the abstract of the referenced article, as access is restricted, but the abstract suggests that people deny the existence of animal minds because of dissonance between not wanting to cause harm to things that have minds and enjoying eating meat. Imposing offsets on oneself would seem to reconcile this dissonance: you enjoy an activity that causes harm, and because you have this aversion to harm, you couple it with an activity that achieves commensurate harm reduction.
Although it is possible that meat consumption with offsets could have a corrosive effect on one's perception of animals as moral patients, I would not infer this from the linked article, based on the psychological mechanisms described in the abstract.
In the case of the author with the history of fraud, you are applying prejudice, albeit perhaps appropriately so.
You raise stronger points than I've yet heard on this subject, though I still think that if you read a piece of content and find it compelling on its merits, there is a strong case for applying similar scrutiny regardless of whether there are signs of AI use. Although I still think there is too much knee-jerk sentiment on the matter, you've given me something to think about.
I've made a pretty clear distinction here that you seem to be eliding:
The first is a reasonable way to protect your time based on a reliable proxy for quality. The second is unfair and poisons the epistemic commons.
See my response to titotal.
Identifying signs of AI and using this as a reason not to spend further time assessing is rational for the reasons you and titotal state. But such identification should not affect one's evaluation of content (allocating karma, upvoting, or, more extremely, taking moderation actions) except insofar as it actually lowers the quality of the content.
If AI as source affects your evaluation process (in assessing the content itself, not in deciding whether to spend time on it), this is essentially pure prejudice. It's similar to the difference between cops incorporating crime statistics in choosing whether to investigate a young black male for homicide and a judge deciding to lower the standard of proof on that basis. Prejudice in the ultimate evaluation process is simply unjust and erodes the epistemic commons.
Thanks for sharing these other posts.
We have fairly different beliefs regarding the replaceability of staff by orgs with funding (depending, of course, on the scarcity of the labor supply), but you can certainly apply the framework this post endorses with a wide range of discounts to job impact due to replaceability.
I agree that using AI flavor as a means to determine whether content is worth further consideration makes sense. If AI flavor is a strong proxy for garbage, then it makes sense to stop reading once you detect it.
What does not make sense is, after you have decided to read and evaluate the content, to designate it inferior simply because of the process that generated it. This is essentially pure prejudice.
An example would be: I evaluated X content and determined Y deficiencies. Because I believe the content was entirely human-generated, I assess these deficiencies as relatively minor. Alternatively, I evaluated A content and determined B deficiencies which are substantially similar to Y, but because I suspect that the content was AI generated, I place greater weight on said deficiencies.
I appreciate that you have a different judgment call regarding conciseness. When I was reviewing it, I thought there were a number of distinct points that warranted discussion: initial observation re celebrated comments criticizing AI, discussion of the process and counterfactual, isolated demand for rigor, effect of criticism in chilling contributions, illustration of this chilling, and the point that we should evaluate based on quality, not provenance or process.
I am glad that the plan is not to categorically ban AI content, but imposing extra scrutiny as a matter of moderation (de jure, rather than de facto, disparate treatment) does not make much sense to me.
On second thought, AI significantly reduces the costs for writers, and in the purely human context, those costs are something of a safeguard against the overproduction of bad content (i.e., writers who waste readers' time are also wasting their own). I would still think a light touch would be prudent, given how effective AI can be at helping good ideas and insights proliferate.
I don't know that the contrast between climate and animal offsets is so strong. The harm caused by consuming animal products is also, in most cases, indirect in relation to the consumer: the animal consumed is already dead and has already suffered whatever harms the factory farming system inflicted on it, so your action harms it no further. The harm you are actually doing is increasing demand for the product.
Meat eaters who choose to offset and those who choose not to probably have very different psychological environments regarding the animals consumed, I would think.