“But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.”

Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01

  • korazail@lemmy.myserv.one

    From later in the article:

    Students are afraid to fail, and AI presents itself as a saviour. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.

    I think this is the big issue with ‘AI cheating’. Sure, the LLM can create a convincing appearance of understanding some topic, but if you’re doing anything of importance, like making pizza, and don’t have the critical thinking you learn in school, then you might think that glue is actually a good way to keep the cheese from sliding off.

    A cheap meme example for sure, but think about how that would translate to a Senator trying to deal with more complex topics… actually, on second thought, it might not be any worse. 🤷

    Edit: Adding that while critical thinking is a huge part of it, it’s more the “you don’t know what you don’t know” problem that tripped these students up. That’s the danger of using an LLM in any situation where you can’t validate its output yourself; it’s only a safe shortcut for things like boilerplate prose or code.