• Caribou@slrpnk.net · 7 days ago

    I knew Grok was awful and would recommend the worst climate deniers as the world’s leading sources on climate change, but the subsection of the article titled ‘Grok promoted the use of outrage to increase virality of content about climate’ is actually insane. It feeds users misinformation and then drafts a post for them to share, but not before asking if they want help adding violent imagery or emotional outrage to the post first. Disgusting.

• perestroika@slrpnk.net · 7 days ago

    Chatbots have a built-in tendency toward sycophancy - affirming the user and sounding supportive at the cost of remaining truthful.

    ChatGPT went through its sycophancy scandal recently, and I would have hoped they’d since put more weight on finding credible, factual sources, but apparently they haven’t.

    To be honest, I’m rather surprised that Meta AI didn’t exhibit much sycophancy. Perhaps they’re simply somewhat behind the others on the customization curve - a language model can’t be a sycophant if it can’t figure out its user’s biases, or remember them until the relevant prompt.

    Grok, being a creation of a company owned by Elon Musk, has quite predictably been “softened up” the most - to cater to the remaining user base of Twitter. I would expect Grok’s ability to present an unbiased and factual opinion to degrade further in the future.

    Overall, my rather limited personal experience with LLMs suggests that most language models will happily lie to you unless you ask very carefully. They’re only language models, not reality models, after all.