• vivendi@programming.dev · ↑1 ↓7 · edited 6 hours ago

    O3 is trash, same with closedAI

    I’ve had the most success with Dolphin3-Mistral 24B (open model finetuned on open data) and Qwen series

    Also lower model temperature if you’re getting hallucinations

    For some reason, everyone is still living in 2023 whenever AI is so much as mentioned. There is a LOT you can criticize LLMs for, but some bullshit you regurgitate without actually understanding isn’t one of them

    You also don’t need 10x the resources; where tf did you even hallucinate that from?

  • scruiser@awful.systems · ↑6 · edited 5 hours ago

      GPT-1 is 117 million parameters, GPT-2 is 1.5 billion, GPT-3 is 175 billion, and GPT-4 is undisclosed but estimated at 1.7 trillion. Tokens needed for training scale linearly with model size, and training compute (edit: I originally said compute scales linearly too; looking at the Wikipedia page, it’s actually even worse for your case than I was saying) scales quadratically with model size, going up 2 OOM for every 10x of parameters.

      They are improving… but only getting a linear improvement in training loss for a geometric increase in model size and training time. A hypothetical GPT-5 would have 10 trillion parameters and would genuinely need to be AGI to have the remotest hope of paying off its training. And it would need more quality tokens than they have left; they’ve already scraped the internet (including many copyrighted sources and sources that requested not to be scraped).

      So that’s exactly why OpenAI has been screwing around with fine-tuning setups under illegible naming schemes instead of just releasing a GPT-5. But fine-tuning can only shift what you get within the existing distribution, so it trades off into more hallucinations, or overly obsequious output, or whatever the latest problem they are having.
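The scaling arithmetic above can be sketched in a few lines. This uses the standard C ≈ 6·N·D approximation for training FLOPs and a Chinchilla-style D ≈ 20·N rule of thumb for token counts; the parameter counts are the ones quoted above, but the token counts are illustrative assumptions, not the labs’ actual (undisclosed) training mixes.

```python
# Rough training-compute estimate: C ≈ 6 FLOPs per parameter per token.
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

# Parameter counts from the comment above (GPT-4 is an estimate).
models = {
    "GPT-1": 117e6,
    "GPT-2": 1.5e9,
    "GPT-3": 175e9,
    "GPT-4 (est.)": 1.7e12,
}

for name, n in models.items():
    d = 20 * n  # compute-optimal token count, Chinchilla rule of thumb
    print(f"{name:>12}: {train_flops(n, d):.2e} FLOPs")

# Because tokens D scale linearly with parameters N, compute
# C = 6 * N * (20 * N) = 120 * N**2 grows quadratically:
# a 10x bigger model needs ~100x (2 OOM) more training compute.
assert train_flops(10e9, 20 * 10e9) / train_flops(1e9, 20 * 1e9) == 100.0
```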

      Lowering model temperature makes it pick its best guess for the next token instead of sampling among the probable guesses; it doesn’t improve what that best guess is, and you can still get hallucinations even when picking the “best” next token.
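That mechanism is easy to see in a minimal sketch of temperature-scaled softmax sampling. The logits below are made-up illustrative numbers, not from any real model:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Sample a next token from temperature-scaled softmax probabilities.

    As temperature -> 0 this collapses onto the argmax (the model's single
    "best guess"); higher temperatures spread mass over other tokens.
    """
    if temperature <= 0:  # treat T=0 as pure greedy decoding
        return max(logits, key=logits.get)
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Illustrative logits: if the model's top-ranked token is itself wrong,
# greedy decoding (T=0) still emits it every time -- which is why lowering
# temperature cannot fix hallucination, only reduce randomness.
logits = {"Paris": 5.0, "Lyon": 3.0, "Berlin": 1.0}
print(sample_next_token(logits, temperature=0.0))  # always "Paris"
```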

      And lol at you trying to reverse the accusation against LLMs by accusing me of regurgitating/hallucinating.

    • vivendi@programming.dev · ↑1 ↓5 · 5 hours ago

        Small-scale models, like Mistral Small or the Qwen series, are achieving SOTA performance with fewer than 50 billion parameters. QwQ-32B could already rival shitGPT with 32 billion parameters, and the new Qwen3 and Gemma (from Google) are almost black magic.

        Gemma 4B is more comprehensible than GPT-4o; the performance race is fucking insane.

        ClosedAI is 90% hype. Their models are benchmark princesses, but they need huuuuuuge active parameter sizes to effectively reach their numbers.

        Everything said in this post is independently verifiable by taking 5 minutes to search shit up, and yet you couldn’t even bother to do that.

    • vivendi@programming.dev · ↑1 ↓5 · 6 hours ago

        My most honest goal is to educate people, which on Lemmy is always met with hate. People love to hate, parroting the same old nonsense that someone else taught them.

        If you insist on ignorance, then be ignorant in peace; don’t make such misguided attempts at a sneer.

        There are things at which LLMs suck. And there are things that you wrongly believe as part of this bullshit Twitter civil war.

      • froztbyte@awful.systems · ↑6 · 5 hours ago

          My most honest goal is to educate people

          oh and I suppose you can back that up with verifiable facts, yes?

          and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit? you are the saviour that can help enlighten us poor unenlightened mortals?

          sounds very hard. managing your calendar must be quite a skill

        • vivendi@programming.dev · ↑1 ↓5 · edited 5 hours ago

            and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit?

            Hallucination rates have been going down and model quality has been going up steadily, and multishot prompting and RAG reduce hallucination rates further. These are proven scientific facts, what the fuck are you on about? Open huggingface RIGHT NOW, go to the papers section, FUCKING READ.
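For what RAG actually means mechanically, here is a toy sketch: retrieve relevant text first, then force the prompt to ground the model in it. The corpus, word-overlap scoring, and prompt template below are all made up for illustration; real systems use embedding search and a real model behind the prompt.

```python
# Toy retrieval-augmented generation (RAG) sketch: ground the prompt in
# retrieved snippets so the model quotes sources instead of free-associating.
corpus = [
    "Qwen3 is a family of open-weight models released by Alibaba.",
    "Mistral Small is an open model with roughly 24B parameters.",
    "Gemma is Google's family of lightweight open models.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for
    a proper embedding-based retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context and instruct the model to stick to it."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

print(build_prompt("Who released the Gemma models?"))
```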

            I’ve spent 6+ years of my life in compsci academia, only to come here and be lectured by McDonald in his fucking basement. What has my life become?

          • froztbyte@awful.systems · ↑5 · edited 5 hours ago

              ah yes, my ability to read a pdf immediately confers upon me all the resources required to engage in materially equivalent experimentation of the thing that I just read! no matter whether the publisher spent cents or billions in the execution and development of said publication, oh no! it is so completely a cost paid just once, and thereafter it’s ~totally~ free!

              oh, wait, hang on. no. no it’s the other thing. that one where all the criticisms continue to hold! my bad, sorry for mistaking those. guess I was roleplaying an LLM for a moment there!

            • vivendi@programming.dev · ↑1 ↓4 · edited 5 hours ago

                You can experiment on your own GPU by running the tests with a variety of models from different generations (Llama 2-class 7B, Llama 3-class 7B, Gemma, Granite, Qwen, etc.)

                Even the lowest-end desktop hardware can run at least a 4B model. The only real difficulty is scripting the test system, but the papers are usually helpful in describing their test methodology.
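The “test system” part can be as small as a loop like this. Everything here is a placeholder sketch: the QA pairs and the stub model are made up, and you would swap `stub_generate` for a real call into whatever backend you run locally (llama.cpp, vLLM, transformers) and plug in the benchmark’s own scoring rule.

```python
# Skeleton of a paper-style eval harness for a local model.
from typing import Callable

# Placeholder eval set; a real run would load a benchmark's dataset.
eval_set = [
    {"prompt": "What is 2 + 2?", "answer": "4"},
    {"prompt": "Capital of France?", "answer": "Paris"},
]

def exact_match(prediction: str, answer: str) -> bool:
    """Simplest scoring rule; papers often use normalized EM or F1 instead."""
    return answer.lower() in prediction.lower()

def run_eval(generate: Callable[[str], str]) -> float:
    """Score a generation function over the eval set; returns accuracy."""
    hits = sum(exact_match(generate(ex["prompt"]), ex["answer"])
               for ex in eval_set)
    return hits / len(eval_set)

# Stub "model" so the harness runs without a GPU; replace with a real
# generation call to your local 4B/7B model.
def stub_generate(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "I think it's Paris."

print(f"accuracy: {run_eval(stub_generate):.0%}")  # prints "accuracy: 100%"
```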

              • swlabr@awful.systems · ↑4 · 5 hours ago

                  👨🏿‍🦲: how many billions of models are you on

                  🗿: like, maybe 3, or 4 right now my dude

                  👨🏿‍🦲: you are like a little baby

                  👨🏿‍🦲: watch this

                  glue pizza

                • vivendi@programming.dev · ↑1 ↓3 · edited 5 hours ago

                    The most recent Qwen model supposedly works really well for cases like that, but this one I haven’t tested for myself; I’m going off what some dude on Reddit reported.

              • froztbyte@awful.systems · ↑4 · 5 hours ago

                  You can experiment on your own GPU

                  you have lost the game

                  you have been voted off the island

                  you are the weakest link

                  etc etc etc

                • vivendi@programming.dev · ↑1 ↓3 · 5 hours ago

                    This is the most “insufferable redditor” stereotype shit possible, and to think we’re not even on Reddit

          • froztbyte@awful.systems · ↑4 · 5 hours ago

              also

              I’ve spent 6+ years of my life in compsci academia

              eh. look.

              I realize you’ll probably receive/perceive this post negatively, ranging anywhere from “criticism”/“extremely harsh” through … “condemnation”?

              but, nonetheless, I have a request for you

              please, for the love of ${deity}, go out and meet people. get out of your niche, explore a bit. you are so damned close to stepping in the trap, and you could do not-that.

              (just think! you’ve spent a whole 6+ years on compsci? now imagine what your next 80+ years could be!)