• paequ2@lemmy.today
    4 days ago

    never bother to verify the chatbots’ output at all

    I feel like this is happening.

    When you’re an expert in the subject matter, it’s easier to notice when the AI is wrong. But if you’re not an expert, it’s more likely that everything will just sound legit. Or you won’t be able to verify it yourself.

    • HedyL@awful.systems
      4 days ago

      But if you’re not an expert, it’s more likely that everything will just sound legit.

      Oh, absolutely! In my field, the answers made up by an LLM might sound even more legit than the accurate and well-researched ones written by humans. In legal matters, clumsy language is often the result of the facts being complex and of the writer not wanting to make any mistakes. It is much easier to come up with elegant-sounding answers when they don’t have to be true, and that is what LLMs are generally good at.