• Railcar8095@lemmy.world · 12 days ago

    I’m going for a hot take here: the issue isn’t using AI (barring privacy/confidentiality concerns), it’s that the output wasn’t thoroughly validated.

    I treat AI like a new trainee: everything it produces has to be validated by an experienced person.

    • sunzu2@thebrainbin.org · 12 days ago

      A first-year makes predictable mistakes that are easier to review.

      An LLM will fucking fake shit, and it’s way harder to spot unless you review the entire fucking thing yourself, which at that point loses all value from a cost perspective.

      AI is useful as a support tool for mid-level and senior pros, since they can catch the lies as they happen.

    • Auth@lemmy.world · 11 days ago

      The issue is still using AI at all: it’s sensitive data and shouldn’t be given to third parties.

    • ramble81@lemmy.zip · 11 days ago

      Yup. That’s the biggest problem I see with AI: people don’t realize it’s fallible. They’re so busy chasing shortcuts and reduced work time that they ignore accuracy.

      That’s part of why you’re starting to see reports of AI not saving any time or money: you’re basically shifting the work from the creation phase to the validation phase.