The BBC and the European Broadcasting Union have produced a large study of how well AI chatbots handle summarising the news. In short: badly.
The researchers asked ChatGPT, Copilot, Gemini, and Perplexity about current events. 45% of the chatbot answers had at least one significant issue: 31% had serious sourcing problems, and 20% had major accuracy issues, from hallucinations to outdated information. This held across multiple languages and multiple countries.
The AI distortions are “significant and systemic in nature.”

You might want to read the actual report, then.
You’ll find that the second study was conducted in May/June 2025, and that it lists the model versions tested, which were the free options available at the time (page 20).
Also, the sourcing errors were not about which sources were selected (i.e. a bias in source selection, as you seem to imply); the report is explicit about this.
GPT-4o and Gemini Flash were not “heavily outdated” at the time the study was conducted: they were the models provided in the free versions, which is what the researchers used (pages 20 and 62).
The goal of the study was not to find the best-performing model or to compare models against each other, but to use the publicly available AI offerings as a normal consumer would. You might get better results from a paid pro model or some kind of specialized model, but that’s not the point here.