• Log in | Sign up@lemmy.world · 2 days ago

    I think it’s lemmy users. I see a lot more LLM skepticism here than in the news feeds.

    In my experience, LLMs are like the laziest, shittiest know-nothing bozo forced to complete a task with zero attention to detail and zero care about whether it’s crap, just doing enough to sound convincing.

    • someacnt@sh.itjust.works · 1 day ago

      Wdym, I have seen researchers using it to aid their research significantly. You just need to verify some stuff it says.

      • Log in | Sign up@lemmy.world · 1 day ago

        Verify every single bloody line of output. Top three to five are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.

        People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.

        • someacnt@sh.itjust.works · 1 day ago

          It’s not that bad; the output isn’t random. From time to time it can produce novel stuff, like new equations for engineering. Also, verification doesn’t take that much effort. At least according to my colleagues, it’s great. It works well for coding well-known stuff, too!

          • Log in | Sign up@lemmy.world · 1 day ago

            It’s not completely random, but I’m telling you it fucked up, and fucked up badly, time after time, and I had to check every single thing manually. Its runs of correct output never lasted beyond a handful of rows. If you build something using some equation it invented, you’re insane and should quit engineering before you hurt someone.

    • Melvin_Ferd@lemmy.world · edited 2 days ago

      😆 I can’t believe how absolutely silly a lot of you sound with this.

      An LLM is a tool. Its output is dependent on the input. If that’s the quality of answer you’re getting, then it’s user error. I guarantee you that LLM answers for many problems are definitely adequate.

      It’s like if a carpenter said the cabinets turned out shit because his hammer only produces crap.

      Also, another person commented that seeing the same pattern you see means we’re psychotic.

      All I’m trying to suggest is that Lemmy is being seriously manipulated by the media’s attitude towards LLMs, and I feel these comments really highlight that.

      • Log in | Sign up@lemmy.world · edited 2 days ago

        If that’s the quality of answer you’re getting, then it’s a user error

        No, I know the data I gave it and I know how hard I tried to get it to use it truthfully.

        You have an irrational and wildly inaccurate belief in the infallibility of LLMs.

        You’re also denying the evidence of my own experience. What on earth made you think I would believe you over what I saw with my own eyes?

        • Melvin_Ferd@lemmy.world · edited 1 day ago

          Why are you giving it data? It’s a chat and language tool; it’s not data-based. You need something trained for that specific use. I think Wolfram Alpha has better tools for that.

          I wouldn’t trust it to calculate how many patio stones I need for a project. But I trust it to tell me where a good source is on a topic, or whether a quote was really said by whoever, or to help me remember something when I only have vague pieces — like an old-timey historical witch-burning factoid about villagers who pulled people through a hole in the church wall, or the name of the skeptical princess who sent her scientists to villages to try to calm superstitious panic.

          Other uses include digging around my computer to see what processes do what, or how concepts work in the thing I’m currently learning. So many excellent uses. But I fucking wouldn’t trust it to do any kind of calculation.

          • Log in | Sign up@lemmy.world · 1 day ago

            Why are you giving it data

            Because there’s a button for that.

            Its output is dependent on the input

            This thing that you said… It’s false.

            • Melvin_Ferd@lemmy.world · edited 21 hours ago

              There’s a sleep button on my laptop. Doesn’t mean I would use it.

              I’m just trying to say you’re using a feature that everyone kind of knows doesn’t work. ChatGPT is not trained to do calculations well.

              I just like technology, and I think and fully believe the left’s hatred of it is not logical. I believe it stems from a lot of media bias and headlines. Why there’s this push from the media is a question I would like to know more about. But overall, I see a lot of the same makers of bullshit yellow journalism for this stuff on the left as I do for similar bullshit in right-wing spaces towards other things.

              • Log in | Sign up@lemmy.world · 15 hours ago

                Again with dismissing the evidence of my own eyes!

                I wasn’t asking it to do calculations, I was asking it to put the data into a super formulaic sentence. It was good at the first couple of rows then it would get stuck in a rut and start lying. It was crap. A seven year old would have done it far better, and if I’d told a seven year old that they had made a couple of mistakes and to check it carefully, they would have done.

                Again, I didn’t read it in a fucking article, I read it on my fucking computer screen, so if you’d stop fucking telling me I’m stupid for using it the way it fucking told me I could use it, or that I’m stupid for believing what the media tell me about LLMs, when all I’m doing is telling you my own experience, you’d sound a lot less like a desperate troll or someone who is completely unable to assimilate new information that differs from your dogma.

                • Melvin_Ferd@lemmy.world · edited 14 hours ago

                  What does “I give it data to put in a formulaic sentence” mean here?

                  Why not just share the details? I often find a lot of people saying it’s doing crazy things, and they never like to share the details. It’s very similar to discussing things with Trump supporters, who do the same shit when pressed for details about stuff they say occurred. The same “you’re a troll for asking for evidence of my claim” that Trumpers do. It’s wild how similar it is.

                  And yes, asking it to do things like iterate over rows isn’t how it works. It’s getting better, but that’s not what it’s primarily used for. It could be, but isn’t. It only holds so many tokens, and it has some persistence now, but that’s nowhere near its strength.

                  • Log in | Sign up@lemmy.world · 13 hours ago

                    I would be in breach of contract to tell you the details. How about you just stop trying to blame me for the clear and obvious lies that the LLM churned out, and start believing that LLMs ARE strikingly fallible, because, buddy, you have your head so far in the sand on this issue it’s weird.

                    The solution to the problem was to realise that an LLM cannot be trusted for accuracy even if the first few results are completely accurate; the bullshit will creep in. Don’t trust the LLM. Check every fucking thing.

                    In the end I wrote a quick script that broke the input up on tab characters and wrote the sentence. That’s how formulaic it was. I regretted deeply trying to get an LLM to use data.
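                    A minimal sketch of what a script like that looks like — the real template, field names, and data are under contract, so everything here is a made-up stand-in, but the shape is the same: split each row on tabs, pour the fields into one fixed sentence.

```python
# Hypothetical reconstruction: the real template and column names are
# confidential, so these are invented stand-ins.
TEMPLATE = "In {year}, {name} recorded a value of {value}."

def rows_to_sentences(tsv_text: str) -> list[str]:
    """Turn tab-separated rows into one formulaic sentence per row."""
    sentences = []
    for line in tsv_text.strip().splitlines():
        # Break the input up on tab characters, exactly as described.
        name, year, value = line.split("\t")
        sentences.append(TEMPLATE.format(name=name, year=year, value=value))
    return sentences

if __name__ == "__main__":
    sample = "Alice\t2021\t42\nBob\t2022\t7"
    for sentence in rows_to_sentences(sample):
        print(sentence)
```

                    Unlike the LLM, this is deterministic: row 500 gets exactly the same treatment as row 1, with no drift after the first few rows.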

                    The frustrating thing is that it is clearly capable of doing the task some of the time, but drifting off into FANTASY is its strong suit, and it doesn’t matter how firmly or how often you ask it to be accurate or use the input carefully. It’s going to lie to you before long. It’s an LLM. Bullshitting is what it does. Get it to do ONE THING only, then check the fuck out of its answer. Don’t trust it to tell you the truth any more than you would trust Donald J Trump to.