‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.’
Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01
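For what it's worth, the headline figure checks out: 33 papers flagged by the hidden prompt plus 14 later admissions is 47 out of 122 submissions, which is roughly 38.5%, i.e. "nearly 39%". A minimal sanity check of that arithmetic, using only the numbers from the article (variable names are my own):

# Sanity check on the figures quoted above.
flagged = 33        # papers caught by the hidden "Trojan horse" prompt
admitted = 14       # students who admitted AI use when given the chance
submissions = 122   # total paper submissions

ai_assisted = flagged + admitted             # 47 papers
share = ai_assisted / submissions * 100      # ~38.5%
print(f"{ai_assisted}/{submissions} = {share:.1f}%")  # 47/122 = 38.5%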


I had a couple of classes in college where the grade was 8% homework, 42% midterm, and 42% final exam. That feels a bit more balanced.
I think we should also be adjusting the criteria we use for grading. Information accuracy should be weighted far more heavily, and spelling/grammar de-prioritized. AI can correct bad spelling and grammar, but it's terrible at information accuracy.
It's also bad at synthesizing new ideas… however, it is likely that future models will be better at those things.
The whole situation sucks and I'm glad I'm out of uni.