I’d be wary of doing that, because it’s what many educators have tried ever since ChatGPT proved… problematic for academia. I don’t think it’s all that reliable: LLMs aren’t particularly deterministic, and they’re more likely to produce false positives than disagree with your assumptions. There’s also the issue that some people just write a bit like LLMs, for multiple reasons. It doesn’t help that LLMs are constantly tweaked to encourage or discourage certain habits as and when people complain about them online.
The “It’s not X… it’s Y” emphasis is an LLM smell because it’s repetitive, just like the overuse of the em dash – and that’s roughly how you can sniff out an LLM by vibes alone. If a rhetorical or grammatical device is overused, especially in contexts where it doesn’t fit the structure of the writing, it becomes obvious that the article was generated by a machine. It also helps to watch for out-of-place facts and combinations of ideas that don’t hold up on closer inspection (a bit like reading a clock in your dream: it never shows the same time twice).
In the end, a well-edited article might be written by LLMs and polished until it looks “legit”, but the real test is whether the author has demonstrated the ability to engage with the topic at hand. If they provide citations, double-check facts, and proofread, they’re more likely to get away with LLM-written content – and even if not, they might still produce something that’s useful to others.