• V0ldek@awful.systems · 7 hours ago

    > In my workflow there is no difference between LLMs and fucking grep for me.

    Well, grep doesn’t hallucinate things that are not actually in the logs I’m grepping, so I think I’ll stick to grep.

    (Or ripgrep rather)

      • froztbyte@awful.systems · 5 hours ago

        (I don’t mean to take aim at you with this despite how irked it’ll sound)

        I really fucking hate how many computer types go “ugh I can’t” at regex. the full spectrum of it, sure, gets hairy. but so many people could be well served by decently learning grouping/backrefs/greedy match/char-classes (which is a lot of what most people seem to reach for[0])
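
        a quick demo of all four, straight from GNU grep/sed (hedging: GNU extensions like `\b` and ERE backrefs assumed, and `draft.txt` is a made-up file):

        ```sh
        # char classes + grouping + a backreference: flag doubled words ("the the")
        grep -En '\b([[:alpha:]]+) \1\b' draft.txt

        # greedy-match pitfall: '.*' eats across tags; a negated char class stops at the first '>'
        echo '<b>bold</b> text' | sed -E 's/<[^>]*>//g'   # -> 'bold text'
        ```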

        that said, pomsky is an interesting thing that might in fact help a lot of people go from “I want $x” as a human expression of intent, to “I have $y” as a regex expression
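
        e.g. (if I’m remembering the pomsky syntax right; `range` is its flagship feature):

        ```pomsky
        # "I want: an IPv4 address" as intent...
        let octet = range '0'-'255';
        :ip(octet ('.' octet){3})
        # ...compiles down to the usual unreadable (25[0-5]|2[0-4][0-9]|...) alternation
        ```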

        [0] - yeah okay sometimes you also actually need a parser. that’s a whole other conversation. I’m talking about “quickly hacking shit up in a text editor buffer in 30s” type cases here

        • swlabr@awful.systems · edited · 5 hours ago

          Hey. I can do regex. It’s specifically grep I have beef with. I never know off the top of my head how to invoke it. Is it -e? -r? -i? man grep? More like, man, get grep the hell outta here!
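
          (The ones I can never keep straight, for the record; assuming GNU grep, with `logs/` and the patterns as made-up stand-ins:)

          ```sh
          grep -r 'timeout' logs/       # -r: recurse into a directory
          grep -i 'error' app.log       # -i: case-insensitive match
          grep -e '-weird' app.log      # -e: next arg is the pattern (handy when it starts with '-')
          grep -E '(foo|bar)+' app.log  # -E: extended regex, no backslash soup
          ```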

            • swlabr@awful.systems · 4 hours ago

              If I start using this and add grep functionality to my day-to-day life, I can’t complain about not knowing how to invoke grep in good conscience, dawg. I can’t hold my shitposting back like that, dawg!

              jk that looks useful. Thanks!

              • lagoon8622@sh.itjust.works · 4 hours ago

                The cheatsheet and tealdeer projects are awesome. They’re among my (many) favorite things about the CLI user experience, honestly. Really grateful for those projects.
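
                (A minimal taste, assuming tealdeer is installed; it ships the `tldr` binary:)

                ```sh
                tldr grep    # prints a handful of copy-pasteable example invocations
                ```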

          • froztbyte@awful.systems · 5 hours ago

            now listen, you might think gnu tools are offensively inconsistent, and to that I can only say

            find(1)
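
            (exhibit A; single-dash long options, order-sensitive predicates, and a made-up ‘*.rs’ glob:)

            ```sh
            find . -type f -name '*.rs' -exec grep -l 'TODO' {} +
            ```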

            • swlabr@awful.systems · 5 hours ago

              find(1)? You better find(1) some other place to be, buster. In this house, we use the file explorer search bar

    • vivendi@programming.dev · 6 hours ago

      Hallucinations become almost a non-issue when working with newer models, custom inference, multishot prompting, and RAG.

      But the models themselves fundamentally can’t write good, new code, even if they’re perfectly factual.
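
      (For reference, a minimal sketch of what “multishot” prompting means in practice; the OpenAI-style chat format and the log lines are stand-ins:)

      ```python
      # few-shot / "multishot": worked examples precede the real query,
      # steering the output format without retraining the model
      messages = [
          {"role": "system", "content": "Extract the error code from each log line."},
          {"role": "user", "content": "ERR-4012: disk full"},
          {"role": "assistant", "content": "4012"},
          {"role": "user", "content": "ERR-7734: upstream timeout"},
          {"role": "assistant", "content": "7734"},
          {"role": "user", "content": "ERR-1009: bad auth"},  # the actual query
      ]
      ```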

      • scruiser@awful.systems · 5 hours ago

        The promptfarmers can push the hallucination rates incrementally lower by spending 10x compute on training (and training on 10x the data and spending 10x on runtime cost), but they’re already consuming a plurality of all VC funding, so they can’t 10x many more times without going bust entirely. And they aren’t going to get hallucinations down to 0%; they’re intrinsic to how LLMs operate, and no patch with run-time inference or multiple tries or RAG will eliminate that.

        And as for newer models… o3 actually had a higher hallucination rate because trying to squeeze rational logic out of the models with fine-tuning just breaks them in a different direction.

        I will acknowledge that in domains with analytically verifiable answers you can check the LLM’s output that way, but in that case it’s no longer primarily an LLM: you’ve got an entire expert system or proof assistant or whatever that can operate independently of the LLM, and the LLM is just providing creative input.

        • swlabr@awful.systems · 5 hours ago

          We should maximise hallucinations, actually. That is, we should hack the environmental controls of the data centers to be conducive to fungus growth, and flood them with magic mushroom spores. We can probably get the rats on board by selling it as a different version of nuking the data centers.

        • vivendi@programming.dev · edited · 4 hours ago

          O3 is trash, same with ClosedAI in general.

          I’ve had the most success with Dolphin3-Mistral 24B (an open model finetuned on open data) and the Qwen series.

          Also, lower the model temperature if you’re getting hallucinations.

          For some reason everyone is still living in 2023 whenever AI is so much as mentioned. There is a LOT you can criticize LLMs for; some bullshit you regurgitate without actually understanding isn’t one of them.

          You also don’t need 10x the resources. Where tf did you even hallucinate that from?

          • scruiser@awful.systems · edited · 3 hours ago

            GPT-1 is 117 million parameters, GPT-2 is 1.5 billion, GPT-3 is 175 billion, and GPT-4 is undisclosed but estimated at 1.7 trillion. Tokens needed for training scale linearly with model size, and (edit: I was wrong here, and looking at the Wikipedia page it’s even worse for your case than I was saying) training compute scales quadratically with model size, going up 2 OOM for every 10x of parameters. They are improving, but only getting a linear improvement in training loss for a geometric increase in model size and training time.

            A hypothetical GPT-5 would have 10 trillion parameters and would genuinely need to be AGI to have the remotest hope of paying off its training. And it would need more quality tokens than they have left; they’ve already scraped the internet (including many copyrighted sources and sources that requested not to be scraped). So that’s exactly why OpenAI has been screwing around with fine-tuning setups with illegible naming schemes instead of just releasing a GPT-5. But fine-tuning can only shift what you’re getting within distribution, so it trades off into more hallucinations or overly obsequious output or whatever the latest problem they’re having is.
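
            (The back-of-envelope behind that, using the standard C ≈ 6ND approximation from the scaling-law papers:)

            ```latex
            C \approx 6\,N\,D
              % C: training FLOPs, N: parameters, D: training tokens
            D \propto N \;\Rightarrow\; C \propto N^{2}
              % compute-optimal data grows with N, hence ~2 OOM of compute per 10x parameters
            ```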

            Lowering the model temperature just makes it pick its best guess for the next token instead of randomizing among probable guesses; it doesn’t improve what the best guess is, and you can still get hallucinations even picking the “best” next token.
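
            (A minimal sketch of what the temperature knob actually does; the function and names are mine, not any particular library’s:)

            ```python
            import numpy as np

            def sample_token(logits, temperature=1.0, rng=None):
                """Temperature only reshapes the distribution over next tokens;
                it never changes which token the model thinks is most likely."""
                if rng is None:
                    rng = np.random.default_rng()
                if temperature == 0:
                    return int(np.argmax(logits))      # greedy: always the "best" guess
                scaled = np.asarray(logits) / temperature
                probs = np.exp(scaled - scaled.max())  # numerically stable softmax
                probs /= probs.sum()
                return int(rng.choice(len(probs), p=probs))
            ```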

            And lol at you trying to reverse the accusation against LLMs by accusing me of regurgitating/hallucinating.

            • vivendi@programming.dev · 3 hours ago

              Small-scale models, like Mistral Small or the Qwen series, are achieving SOTA performance with under 50 billion parameters. QwQ-32B could already rival shitGPT with 32 billion parameters, and the new Qwen3 and Gemma (from Google) are almost black magic.

              Gemma 4B is more comprehensible than GPT-4o; the performance race is fucking insane.

              ClosedAI is 90% hype. Their models are benchmark princesses, but they need huuuuuuge active parameter sizes to effectively reach their numbers.

              Everything said in this post is independently verifiable by taking 5 minutes to search shit up, and yet you couldn’t even bother to do that.

            • vivendi@programming.dev · 3 hours ago

              My most honest goal is to educate people, which on Lemmy is always met with hate. People love to hate, parroting the same old nonsense that someone else taught them.

              If you insist on ignorance, then be ignorant in peace; don’t make such misguided attempts at a sneer.

              There are things at which LLMs suck. And there are things that you wrongly believe as part of this bullshit Twitter civil war.

              • froztbyte@awful.systems · 3 hours ago

                > My most honest goal is to educate people

                oh and I suppose you can back that up with verifiable facts, yes?

                and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit? you are the saviour that can help enlighten us poor unenlightened mortals?

                sounds very hard. managing your calendar must be quite a skill

                • vivendi@programming.dev · edited · 3 hours ago

                  > and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit?

                  Hallucination rates have been dropping steadily as model quality goes up, and multishot prompting and RAG reduce them further. These are proven scientific facts. What the fuck are you on about? Open huggingface RIGHT NOW, go to the papers section, and FUCKING READ.

                  I’ve spent 6+ years of my life in compsci academia, only to come here and be lectured by McDonald in his fucking basement. What has my life become?

                  • froztbyte@awful.systems · edited · 3 hours ago

                    ah yes, my ability to read a pdf immediately confers upon me all the resources required to engage in materially equivalent experimentation of the thing that I just read! no matter whether the publisher spent cents or billions in the execution and development of said publication, oh no! it is so completely a cost paid just once, and thereafter it’s ~totally~ free!

                    oh, wait, hang on. no. no, it’s the other thing. that one where all the criticisms continue to hold! my bad, sorry for mistaking those. guess I was roleplaying an LLM for a moment there!

                  • froztbyte@awful.systems · 3 hours ago

                    also

                    > I’ve spent 6+ years of my life in compsci academia

                    eh. look.

                    I realize you’ll probably receive/perceive this post negatively, ranging anywhere from “criticism”/“extremely harsh” through … “condemnation”?

                    but, nonetheless, I have a request for you

                    please, for the love of ${deity}, go out and meet people. get out of your niche, explore a bit. you are so damned close to stepping in the trap, and you could do not-that.

                    (just think! you’ve spent a whole 6+ years on compsci? now imagine what your next 80+ years could be!)

      • Architeuthis@awful.systems · 6 hours ago

        If LLM hallucinations ever become a non-issue, I doubt I’ll be needing to read a deeply nested, buzzword-laden lemmy post to first hear about it.

        • vivendi@programming.dev · 4 hours ago

          You need to run the model yourself and heavily tune the inference, which is why you haven’t heard about it: most people think using shitGPT is all there is to LLMs. How many people even have the hardware to do so, anyway?

          I run my own local models with my own inference, which really helps. There are online communities you can join (won’t link bcz Reddit) where you can learn how to do it too; no need to take my word for it.
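
          (Roughly, with llama.cpp as one common route; the model file and prompt below are placeholders:)

          ```sh
          # -m: local GGUF weights, --temp: sampling temperature, -n: max tokens to generate
          llama-cli -m ./Dolphin3-Mistral-24B.Q4_K_M.gguf --temp 0.2 -n 256 \
            -p 'Summarize the attached log excerpt.'
          ```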

          • self@awful.systems · 2 hours ago

            ah yes, the problem with ~~crypto~~ LLMs is all the ~~shitcoins~~ GPTs

            did it sting when the crypto bubble popped? is that what made you like this?

        • swlabr@awful.systems · 5 hours ago

          God, this cannot be overstated. An LLM’s sole function is to hallucinate. Anything stated beyond that is overselling.