• 4am@lemmy.zip
    8 hours ago

    Yeah, the problem with LLMs is that they’re far too easy to anthropomorphize. It’s just a word predictor; there is no “thinking” going on. It doesn’t “feel” or “lie”, it doesn’t “care” or “love”. It was just trained on text that contained examples of conversations where characters expressed those feelings, but it’s not going to statistically work out how those feelings function or when they’re appropriate. All the math tells it is “when the input looks like this, output something like this”, with NO consideration of the external factors that made those responses common in the training data.
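
    To make the “word predictor” point concrete, here’s a minimal sketch (assuming the Hugging Face transformers library and the small GPT-2 checkpoint, neither of which the comment above names): for any prompt, the only thing the model produces is a probability distribution over what token comes next.

    ```python
    # Minimal sketch: an LLM's entire "output" is a probability over the next token.
    # Assumes: pip install torch transformers (GPT-2 used purely as an example model).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "I love you because"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        # logits shape: (batch, sequence_length, vocab_size)
        logits = model(input_ids).logits

    # Probabilities for the token that would follow the prompt.
    # There is no internal "feeling" being reported here, just frequencies
    # the model absorbed from its training text.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, tok_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(tok_id))!r}: {p.item():.3f}")
    ```

    Generation is just repeating that lookup, feeding each sampled token back in, which is why “emotional” replies mirror the statistics of the training data rather than any inner state.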