You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
Well grep doesn’t hallucinate things that are not actually in the logs I’m grepping so I think I’ll stick to grep.
(Or ripgrep rather)
With grep it’s me who hallucinates that I can write good regex :,)
(I don’t mean to take aim at you with this despite how irked it’ll sound)
I really fucking hate how many computer types go “ugh I can’t” at regex. the full spectrum of it, sure, gets hairy. but so many people could be well served by decently learning grouping/backrefs/greedy match/char-classes (which is a lot of what most people seem to reach for[0])
that said, pomsky is an interesting thing that might in fact help a lot of people go from “I want $x” as a human expression of intent, to “I have $y” as a regex expression
[0] - yeah okay sometimes you also actually need a parser. that’s a whole other conversation. I’m talking about “quickly hacking shit up in a text editor buffer in 30s” type cases here
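For anyone who wants the quick version of those four features, a minimal sketch using grep/sed in a shell (the patterns and sample text are invented for illustration):

```shell
# character class: match lines containing any digit
printf 'abc\nx42\n' | grep '[0-9]'                       # -> x42

# grouping + backreference (BRE): catch doubled words like "the the"
printf 'the the cat\nok\n' | grep '\([a-z][a-z]*\) \1'   # -> the the cat

# greedy matching: .* grabs as much as it can, spanning both tags
echo 'a<b>c<d>e' | sed 's/<.*>/X/'                       # -> aXe
```

Note the greedy case: if you wanted to replace each tag separately you’d need a non-greedy construct like `[^>]*` instead of `.*`.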
The funny thing is, I’m just going with the joke, I’m actually pretty good with regex lol
woo! but still also check out pomsky, it’s legit handy!
(also I did my disclaimer at the start there, so, y’know (but also igwym))
Hey. I can do regex. It’s specifically grep I have beef with. I never know off the top of my head how to invoke it. Is it -e? -r? -i? man grep? More like, man, get grep the hell outta here!
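For the record, those three flags do different jobs; a quick sketch (the directory path is illustrative):

```shell
# -i : case-insensitive match
printf 'Foo\nbar\n' | grep -i foo        # -> Foo

# -e : explicitly marks the pattern, handy when it starts with a dash
printf -- '-v\nok\n' | grep -e -v        # -> -v

# -r : recurse into a directory tree, e.g.  grep -r 'TODO' src/
```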
curl cht.sh/grep
If I start using this and add grep functionality to my day-to-day life, I can’t complain about not knowing how to invoke grep in good conscience, dawg. I can’t hold my shitposting back like that, dawg!
jk that looks useful. Thanks!
The cheatsheet and tealdeer projects are awesome. It’s one of my (many) favorite things about the user experience honestly. Really grateful for those projects
now listen, you might think gnu tools are offensively inconsistent, and to that I can only say find(1)
find(1)? You better find(1) some other place to be, buster. In this house, we use the file explorer search bar
Hallucinations become almost a non issue when working with newer models, custom inference, multishot prompting and RAG
But the models themselves fundamentally can’t write good, new code, even if they’re perfectly factual
The promptfarmers can push the hallucination rates incrementally lower by spending 10x compute on training (and training on 10x the data and spending 10x on runtime cost) but they’re already consuming a plurality of all VC funding so they can’t 10x many more times without going bust entirely. And they aren’t going to get them down to 0%, hallucinations are intrinsic to how LLMs operate, no patch with run-time inference or multiple tries or RAG will eliminate that.
And as for newer models… o3 actually had a higher hallucination rate because trying to squeeze rational logic out of the models with fine-tuning just breaks them in a different direction.
I will acknowledge in domains with analytically verifiable answers you can check the LLMs that way, but in that case, it’s no longer primarily an LLM: you’ve got an entire expert system or proof assistant or whatever that can operate independently of the LLM, and the LLM is just providing creative input.
We should maximise hallucinations, actually. That is, we should hack the environmental controls of the data centers to be conducive for fungi growth, and flood them with magic mushrooms spores. We can probably get the rats on board by selling it as a different version of nuking the data centers.
What if [tokes joint] hallucinations are actually, like, proof the models are almost at human level man!
Sadly I have seen people make that exact point
stopping this bit here because I don’t want to continue writing a JRE episode
@swlabr @scruiser Java Runtime Environment?
no the worse one
Doesn’t really narrow it down, sorry
(jk I don’t have beef with the JavaRE)
O3 is trash, same with closedAI
I’ve had the most success with Dolphin3-Mistral 24B (open model finetuned on open data) and Qwen series
Also lower model temperature if you’re getting hallucinations
For some reason everyone is still living in 2023 when AI is remotely mentioned. There is a LOT you can criticize LLMs for, some bullshit you regurgitate without actually understanding isn’t one
You also don’t need 10x the resources where tf did you even hallucinate that from
this user has been escorted off the premises via the fourth floor window
GPT-1 is 117 million parameters, GPT-2 is 1.5 billion parameters, GPT-3 is 175 billion, GPT-4 is undisclosed but estimated at 1.7 trillion. Tokens needed for training and training compute scale linearly (edit: actually I’m wrong; looking at the wikipedia page, it’s even worse for your case than I was saying: training compute scales quadratically with model size, going up 2 OOM for every 10x of parameters) with model size. They are improving… but only getting a linear improvement in training loss for a geometric increase in model size and training time. A hypothetical GPT-5 would have 10 trillion training parameters and would genuinely need to be AGI to have the remotest hope of paying off its training. And it would need more quality tokens than they have left; they’ve already scraped the internet (including many copyrighted sources and sources that requested not to be scraped). So that’s exactly why OpenAI has been screwing around with fine-tuning setups with illegible naming schemes instead of just releasing a GPT-5. But fine-tuning can only shift what you’re getting within distribution, so it trades off in getting more hallucinations or overly obsequious output or whatever the latest problem they are having.
Lower model temperature makes it pick its best guess for the next token as opposed to randomizing among probable guesses; it doesn’t improve on what the best guess is, and you can still get hallucinations even picking the “best” next token.
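The temperature point can be illustrated with a toy next-token distribution (the tokens and probabilities below are invented for the example): temperature 0 amounts to always taking the most probable token, whatever it is.

```shell
# toy next-token candidates as token:probability pairs; temperature 0
# just picks the argmax, it never fixes a wrong "best" guess
printf 'paris:0.6\nlyon:0.3\nnarnia:0.1\n' |
  sort -t: -k2 -rn | head -n1 | cut -d: -f1    # -> paris
```

If the model had ranked “narnia” highest, greedy decoding would output it every single time.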
And lol at you trying to reverse the accusation against LLMs by accusing me of regurgitating/hallucinating.
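The quadratic-scaling claim above is easy to sanity-check with back-of-the-envelope shell arithmetic (the parameter counts are just illustrative round numbers):

```shell
# if training compute scales ~quadratically with parameter count N,
# then 10x the parameters means ~100x the compute (2 orders of magnitude)
N1=175    # billions of parameters, GPT-3-ish
N2=1750   # a hypothetical 10x successor
echo $(( (N2 * N2) / (N1 * N1) ))   # -> 100
```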
Small scale models, like Mistral Small or the Qwen series, are achieving SOTA performance with under 50 billion parameters. QwQ32 could already rival shitGPT with 32 billion parameters, and the new Qwen3 and Gemma (from Google) are almost black magic.
Gemma 4B is more comprehensible than GPT4o, the performance race is fucking insane.
ClosedAI is 90% hype. Their models are benchmark princesses, but they need huuuuuuge active parameter sizes to effectively reach their numbers.
Everything said in this post is independently verifiable by taking 5 minutes to search shit up, and yet you couldn’t even bother to do that.
this isn’t the place to decide which seed generator you want for your autoplag runtime
My most honest goal is to educate people, which on lemmy is always met with hate. People love to hate, parroting the same old nonsense that someone else taught them.
If you insist on ignorance then be ignorant in peace, don’t try such misguided attempts at sneer
There are things in which LLMs suck. And there are things that you wrongly believe as part of this bullshit twitter civil war.
oh and I suppose you can back that up with verifiable facts, yes?
and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit? you are the saviour that can help enlighten us poor unenlightened mortals?
sounds very hard. managing your calendar must be quite a skill
Hallucination rates have been steadily going down and model quality has been going up, same with multishot prompts and RAG reducing hallucination rates. These are proven scientific facts, what the fuck are you on about? Open huggingface RIGHT NOW, go to the papers section, FUCKING READ.
I’ve spent 6+ years of my life in compsci academia to come here and be lectured by McDonald in his fucking basement, what has my life become
ah yes, my ability to read a pdf immediately confers upon me all the resources required to engage in materially equivalent experimentation of the thing that I just read! no matter whether the publisher spent cents or billions in the execution and development of said publication, oh no! it is so completely a cost paid just once, and thereafter it’s ~totally~ free!
oh, wait, hang on. no. no it’s the other thing. that one where all the criticisms continue to hold! my bad, sorry for mistaking those. guess I was roleplaying a LLM for a moment there!
also
> I’ve spent 6+ years of my life in compsci academia
eh. look.
I realize you’ll probably receive/perceive this post negatively, ranging as anywhere from “criticism”/“extremely harsh” through … “condemnation”?
but, nonetheless, I have a request for you
please, for the love of ${deity}, go out and meet people. get out of your niche, explore a bit. you are so damned close to stepping in the trap, and you could do not-that.
(just think! you’ve spent a whole 6+ years on compsci? now imagine what your next 80+ years could be!)
If LLM hallucinations ever become a non-issue I doubt I’ll be needing to read a deeply nested buzzword laden lemmy post to first hear about it.
You need to run the model yourself and heavily tune the inference, which is why you haven’t heard of it: most people think using shitGPT is all there is to LLMs. How many people even have the hardware to do so anyway?
I run my own local models with my own inference, which really helps. There are online communities you can join (won’t link bcz Reddit) where you can learn how to do it too, no need to take my word for it
ah yes, the problem with ~~crypto~~ LLMs is all the ~~shitcoins~~ GPTs
did it sting when the crypto bubble popped? is that what made you like this?
@vivendi @V0ldek * hallucinations are a fundamental trait of LLM tech, they’re not going anywhere
God, this cannot be overstated. An LLM’s sole function is to hallucinate. Anything stated beyond that is overselling.