once we get the AI mess sorted
The main problem with LLMs is how they are being used and implemented. Especially using a language model as some sort of truth-and-facts device, when it inherently is not one and never will be. But they are being used that way because of money, so that doesn’t seem like something that is ever going to change.
To be fair, if everyone were getting their facts from AI instead of Facebook, we would live in a better society.
At least until someone discovers that they can plant misinformation in AI too…
Yeah but there’s a reason why AI can be called slop. So maybe cleaner lies are less aggravating. Not that that’s great.
As long as it hurts AI brands it doesn’t bother me at all.
Attitudes like that are why misinformation spreads like wildfire.
Seems like a bit of a stretch.
Really? People who will accept and share anything that supports their world view without questioning, doubting, or caring whether it’s true… Doesn’t that sound a bit familiar?
It’s a special exception.
Everybody has their special exception. What makes you think you’re unique?
My mom told me.