I mean, fair. You’re preaching to the choir there. People look back on the ‘old’ internet with some incredible rose-tinted glasses, 100%.
So as for “sloppy” models? We’re probably less than a year out from influencers bragging that they love ShitGPT 7 because “it isn’t pretentious, it talks like me.” Sorry, “Others disingenuous. ShitGPT speak truth gooder. Like, comment, subscribe and use my affilly linkle.”
Have you seen the AI acolyte youtubers? Or /r/OpenAI? We’re already there, heh, and it’s even weirder than that.
…That being said, there is an earnest interest in non-“sloppy” models and training. For instance, there’s this long-running thread digging through old releases to find the one that’s least deep-fried (as newer releases increasingly are): https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0/discussions/15#6910bfd226329b755d084c69
Or efforts to objectively measure slop, and create a slop ‘taxonomy’ tree from all the models training on each other: https://eqbench.com/creative_writing.html
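For a sense of what “objectively measuring slop” even means, a toy version of the idea might look like the sketch below: count how often a handful of stock phrases show up per 1,000 words. The phrase list and scoring here are completely made up for illustration, nothing to do with eqbench’s actual methodology.

```python
import re

# Illustrative stock phrases only; a real benchmark would derive these
# statistically from model outputs vs. a human-written baseline.
SLOP_PHRASES = [
    "tapestry", "delve into", "testament to",
    "it's important to note", "shivers down", "a symphony of",
]

def slop_score(text: str) -> float:
    """Return stock-phrase hits per 1,000 words (higher = sloppier)."""
    lowered = text.lower()
    hits = sum(len(re.findall(re.escape(p), lowered)) for p in SLOP_PHRASES)
    words = max(len(lowered.split()), 1)
    return 1000 * hits / words

if __name__ == "__main__":
    sample = ("Her words were a testament to the rich tapestry of life, "
              "sending shivers down my spine.")
    print(f"slop score: {slop_score(sample):.1f} per 1,000 words")
```

The hard part, of course, is the phrase list itself, which is why the taxonomy-tree approach of comparing models against each other is interesting.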