• HedyL@awful.systems · 10 points · 4 days ago

    If I’m not mistaken, even in pre-LLM days, Google had some kind of automated summaries which were sometimes wrong. Those bothered me less. The AI hallucinations appear to be on a whole new level of wrong (or is this just my personal belief - are there any statistics about this?).

    • zogwarg@awful.systems · 9 points · 4 days ago

      Subjectively speaking:

      1. Pre-LLM summaries were for the most part actually short.
      2. They were more directly lifted from human-written sources. I vaguely remember lawsuits, or the threat of lawsuits, by newspapers over Google infoboxes and copyright infringement in pre-2019 days, but I couldn’t find anything very conclusive with a quick search.
      3. They didn’t have the sycophantic, “hey, look at me, I’m a genius”, overly detailed (and wrong) tone that the current batch has.