• Scratch@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      38
      ·
      1 day ago

      Waow, look at all the money we saved with layoffs. Buy our stock!

      Waow, look at all the growth we’re experiencing, we have to hire more developers. Buy our stock!

  • floquant@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    46
    ·
    1 day ago

    Or, you know, you could just build a good search engine and let users scroll 15 seconds in the first result to find what they’re looking for.

  • Devolution@lemmy.world
    link
    fedilink
    English
    arrow-up
    53
    ·
    1 day ago

    We fire humans to prop up AI. But AI is so not there yet that we need humans to double check.

    🙃

  • 12212012@z.org
    link
    fedilink
    arrow-up
    31
    ·
    1 day ago

    AI doesn’t hallucinate. It’s a fancy marketing term for when AI confidently does something in error.

    The tech billionaires would have a harder time getting the mass amounts of people that don’t understand interested if they didn’t use words like hallucinate.

    It’s a data center, not a psychiatric patient

    • Deestan@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 day ago

      Agree, the term is misleading.

      Talking about hallucinations lets us talk about undesired output as a completely different thing than desires output, which implies it can be handled somehow.

      The problem it the LLM can only ever output bullshit. Often the bullshit is decent and we call it output, and sometimes the bullshit is wrong and we call it hallucination.

      But it’s the exact same thing from the LLM. You can’t make it detect it or promise not to make it.

      • underisk@lemmy.ml
        link
        fedilink
        English
        arrow-up
        7
        ·
        1 day ago

        You can’t make it detect it or promise not to make it.

        This is how you know these things are fucking worthless because the people in charge of them think they can combat this by using anti hallucination clauses in the prompt as if the AI would know how to tell it was hallucinating. It already classified it as plausible output by creating it!

        • Deestan@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          They try to do security the same way, by adding “pwease dont use dangerous shell commands” to the system prompt.

          Security researchers have dubbed it “Prompt Begging”

          • underisk@lemmy.ml
            link
            fedilink
            English
            arrow-up
            3
            ·
            edit-2
            1 day ago

            “On two occasions I have been asked, – “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question”

            Its been over a hundred years since this quote and people still think computers are magic.

    • GamingChairModel@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 day ago

      It’s a fancy marketing term for when AI confidently does something in error.

      How can the AI be confident?

      We anthropomorphize the behaviors of these technologies to analogize their outputs to other phenomena observed in humans. In many cases, the analogy helps people decide how to respond to the technology itself, and that class of error.

      Describing things in terms of “hallucinations” tell users that the output shouldn’t always be trusted, regardless of how “confident” the technology seems.

      • hobovision@mander.xyz
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 day ago

        Humans will anthropomorphize damn near anything. We’ll say shit like “hydrogen atoms want to be with oxygen so bad they get super excited and move around a lot when they get to bond”. I don’t think characterizing the language output of an LLM using terms that describe how people speak is a bad thing.

        “Hallucination” on the other hand is not even close to describing the “incorrect” bullshit that comes out of LLMs as opposed to the “correct” bullshit. The source of using “hallucination” to describe the output of deep neural networks kind of started with these early image generators. Everything it output was a hallucination, but eventually these networks got so believable that sometimes they could output realistic, and even sometimes factually accurate, content. So the people who wanted these neural nets to be AI would start to only call the bad and unbelievable and false outputs as hallucinations. It’s not just anthropomorphizing it, but implying that it actually does something like thinking and has a state of mind.

  • Sculptus Poe@lemmy.world
    link
    fedilink
    English
    arrow-up
    21
    arrow-down
    1
    ·
    edit-2
    1 day ago

    I’m not anti-AI at all, but their LLM definitely isn’t ready for the top of a google search as if it is real information. Of course, posting promoted search results at the top of the searches as if it was a real result already devalued them. They at least need the LLM result to be an opt-in option with caveats. I would probably opt-in but I would like off to be the default.

    • Hello Hotel@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 day ago

      Ive had many times where the LLM “spoils” the answer, my field of work requires me to search for exact pieces of text written by humans, it will pull those pieces of text and put them front and center, surrounded by text it wrote that never gets read.

  • A_norny_mousse@feddit.org
    link
    fedilink
    English
    arrow-up
    7
    ·
    1 day ago

    Everybody must jump onto the AI train no matter how often it derails!

    So who profits?

    The unholy alliance of tech giants and government.

    Who loses?

    Everybody else. This is US tax money being thrown into a money burning machine.

  • bitjunkie@lemmy.world
    link
    fedilink
    English
    arrow-up
    8
    ·
    1 day ago

    It’s trained on their SERPs that have been steadily getting more useless for 20 years. Of course its answers suck.

    • shalafi@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      3
      ·
      1 day ago

      I’ve seen 100 shitty job postings for rating AI results. It’s rather complicated and pays pennies.