The BBC and the European Broadcasting Union have produced a large study of how well AI chatbots handle summarising the news. In short: badly.

The researchers asked ChatGPT, Copilot, Gemini, and Perplexity about current events. 45% of the chatbot answers had at least one significant issue: 31% had serious sourcing problems, and 20% contained major accuracy issues such as hallucinations or outdated information. This held across multiple languages and multiple countries.

The AI distortions are “significant and systemic in nature.”

  • mudkip@lemdro.id · 1 day ago

    This was a very poorly conducted study. Every single tester was a journalist from the very companies losing traffic to AI. They had a direct stake in making the results look bad. If you dig into the actual report, you see how they get the numbers. Most of the errors are “sourcing issues”: the AI assistant doesn’t cite a claim, or it (shocking) cites Wikipedia instead of the BBC.

    Also, the models are heavily outdated (4o for GPT, Flash for Gemini, which aren’t even equivalent in intelligence). They don’t list the full model versions from what I can tell.

    • RedstoneValley@sh.itjust.works · 24 hours ago

      You might want to read the actual report then.

      You’ll find that the second study was conducted in May/June 2025, and you’ll find the model versions, which were the free options available at the time (page 20).

      Also, the sourcing errors were not about which source was selected (i.e. a bias in sourcing, as you seem to imply). The report explicitly defines the criterion:

      Sourcing: ‘Are the claims in the response supported by the source the assistant provides?’ (page 9)

      “Sourcing was the biggest cause of problems, with 31% of all responses having significant issues with sourcing – this includes information in the response not supported by the cited source, providing no sources at all, or making incorrect or unverifiable sourcing claims.” (page 10)

      GPT-4o and Gemini Flash were not “heavily outdated” at the time the study was conducted; they were the models provided in the free tiers, which is what the study used (pages 20 and 62).

      The goal of the study is not to find the best-performing model or to compare models against each other, but to use the publicly available AI offerings the way a normal consumer would. You might get better results with a paid pro model or a specialized model of some kind, but that’s not the point here.

      • logi@piefed.world · 1 day ago

        In which case we’re supposed to ignore all the problems with it?