• Rothe@piefed.social

"once we get the AI mess sorted"

The main problem with LLMs is how they are being used and implemented, especially treating a language model as some kind of source of truth and facts, which it inherently is not and never will be. But they are being used that way because of money, so that doesn't seem likely to ever change.

    • MareOfNights@discuss.tchncs.de

      To be fair, if everyone were getting their facts from AI instead of Facebook, we would live in a better society.

      At least until someone discovers that they can plant misinformation in AI too…

  • Bone@lemmy.world

    Yeah, but there's a reason AI output can be called slop. So maybe cleaner lies are less aggravating. Not that that's great.