The BBC and the European Broadcasting Union have produced a large study of how well AI chatbots handle summarising the news. In short: badly.
The researchers asked ChatGPT, Copilot, Gemini, and Perplexity about current events. 45% of the chatbot answers had at least one significant issue. 31% had serious sourcing problems and 20% contained major accuracy errors, from hallucinated details or outdated sources. This held across multiple languages and multiple countries.
The AI distortions are “significant and systemic in nature.”

You know it’s an unfixable problem because the AI boosters are trying so hard to gaslight everyone into thinking that this is a feature, not a bug.
“You don’t actually want an AI that doesn’t hallucinate — that would take away its cReAtIvItY!”
There’s a grain of truth there, because the hallucinating and the “creativity” are the same mechanism. But that just raises the question of why you’d want a model built for creative writing summarising your news in the first place.
It’s basically an overstuffed kitchen gadget: it claims to do everything, so it does nothing reliably.