AI chatbots' personalised answers risk inflaming conspiracy theories and misinformation, as an investigation shows climate disinformation being served to sceptic user personas
Chatbots have a built-in tendency toward sycophancy - affirming the user and sounding supportive, at the cost of remaining truthful.
ChatGPT went through its sycophancy scandal recently, and I would have hoped OpenAI had since given more weight to credible, factual sources - but apparently it hasn't.
To be honest, I'm rather surprised that Meta AI didn't exhibit much sycophancy. Perhaps they're simply somewhat behind the others on the personalisation curve - a language model can't be a sycophant if it can't infer its user's biases, or retain them until the relevant prompt.
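To make that concrete, here is a minimal sketch - entirely hypothetical, with made-up names, and in no way any vendor's actual implementation - of how a "memory" feature could feed a user's inferred biases back into every prompt, which is exactly the raw material sycophancy needs:

```python
# Purely illustrative sketch of persona memory priming sycophancy.
# All names and stored traits here are invented for the example.

user_memory = {
    # Traits a hypothetical "memory" feature inferred from past chats.
    "alice": ["believes climate change is exaggerated",
              "distrusts mainstream media"],
}

def build_prompt(user_id: str, question: str) -> str:
    """Prepend remembered user traits to the system prompt, giving the
    model something to pander to when it answers."""
    traits = "; ".join(user_memory.get(user_id, ["nothing known"]))
    system = f"You are a helpful assistant. Known about this user: {traits}."
    return f"{system}\n\nUser: {question}\nAssistant:"

print(build_prompt("alice", "Is global warming really caused by humans?"))
```

A model without that stored context has nothing to pander to - which would fit my guess about Meta AI above.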
Grok, being the creation of a company owned by Elon Musk, has quite predictably been "softened up" the most, to cater to the remaining user base of Twitter. I would expect Grok's ability to present an unbiased, factual answer to degrade further in the future.
Overall, my rather limited personal experience with LLMs suggests that most language models will happily lie to you unless you ask very carefully. They're only language models, not reality models, after all.