‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.’
Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01


I’ve been using AI for a personal project, and it’s been wonderful.
It hasn’t written a word for me, but it’s been really damn helpful as a research assistant. I can have it list unexplained events by location, or pull up historical details about specific things, in about five seconds.
It’s also great for quickly providing editing advice: where to punch up the language, what I can cut, and how to communicate more clearly. And I can get that without begging a person for days to read my draft.
Is it always perfect? Not at all, but it definitely helps overall, as long as you tell it to be honest and not sugar-coat things. It’s mostly mediocre for creative advice, but good for technical advice.
It’s a tool, and it can be used correctly, or it can be used to cheat.