🍹Early to RISA 🧉@sh.itjust.works to Greentext@sh.itjust.works · 2 days ago
Anon finds a bot
atthecoast@feddit.nl · 2 days ago:
If you then train new bots on the generated content, the models will degrade, yes?

frog@feddit.uk · 1 day ago:
If you look at a lot of new posts, they are actually highly upvoted old posts, so the bots probably stay the same.

PrimeMinisterKeyes@leminal.space · 2 days ago:
The bots know what is bot content and what is not. Actual users don't.

Lvxferre [he/him]@mander.xyz · 1 day ago:
> The bots know what is bot content and what is not.

Probably not. It's way easier to generate bot content than to detect it. Unless they're coming from the same group, but I find this unlikely.
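The degradation the first commenter asks about is what the literature calls model collapse: when each generation of a model is trained only on the previous generation's output, estimation errors compound. A minimal toy sketch of the idea (a Gaussian "model" repeatedly refit on its own samples; the setup and parameters here are illustrative assumptions, not anyone's actual bot pipeline):

```python
import random
import statistics

def fit(samples):
    # "Train" a model: estimate mean and standard deviation from the data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    # "Generate" new content by sampling from the trained model.
    return [random.gauss(mean, stdev) for _ in range(n)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]  # generation 0: real content

stdevs = []
for generation in range(10):
    mean, stdev = fit(data)
    stdevs.append(stdev)
    # The next "bot" sees only the previous bot's output, never the real data.
    data = generate(mean, stdev, 200)

# Each refit is a noisy estimate of the previous generation's parameters,
# so the spread performs a random walk with a slight downward bias; over
# many generations the distribution tends to narrow and diversity is lost.
print(stdevs)
```

The same mechanism is why the second comment matters: if highly upvoted old (human) posts keep re-entering the training pool, the loop is not fully closed and the drift is slowed.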