You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
I’m a pretty big proponent of FOSS AI, but none of the models I’ve ever used are good enough to work without a human treating it like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.
People who think AI codes well are shit at their job
Well grep doesn’t hallucinate things that are not actually in the logs I’m grepping so I think I’ll stick to grep.
(Or ripgrep rather)
With grep it’s me who hallucinates that I can write good regex :,)
(I don’t mean to take aim at you with this despite how irked it’ll sound)
I really fucking hate how many computer types go “ugh I can’t” at regex. the full spectrum of it, sure, gets hairy. but so many people could be well served by decently learning grouping/backrefs/greedy match/char-classes (which is a lot of what most people seem to reach for[0])
that said, pomsky is an interesting thing that might in fact help a lot of people go from “I want $x” as a human expression of intent, to “I have $y” as a regex expression
[0] - yeah okay sometimes you also actually need a parser. that’s a whole other conversation. I’m talking about “quickly hacking shit up in a text editor buffer in 30s” type cases here
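For the grouping/backrefs/greedy-match/char-class bit above, here is a tiny sketch of what those actually buy you, using Python’s re purely as a stand-in for whatever regex engine your editor or grep variant speaks:

```python
import re

log = 'user="alice" action="login" user="bob" action="logout"'

# char class + grouping: capture whatever sits inside user="..."
users = re.findall(r'user="([\w-]+)"', log)
print(users)  # ['alice', 'bob']

# backreference: \1 re-matches whatever group 1 captured (doubled words here)
doubled = re.search(r'\b(\w+)\s+\1\b', "it was the the best")
print(doubled.group(1))  # 'the'

# greedy vs non-greedy: .* runs out to the last quote, .*? stops at the first
print(re.search(r'"(.*)"', log).group(1))   # alice" action="login" user="bob" action="logout
print(re.search(r'"(.*?)"', log).group(1))  # alice
```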
The funny thing is, I’m just going with the joke, I’m actually pretty good with regex lol
woo! but still also check out pomsky, it’s legit handy!
(also I did my disclaimer at the start there, so, y’know (but also igwym))
Hey. I can do regex. It’s specifically grep I have beef with. I never know off the top of my head how to invoke it. Is it -e? -r? -i? man grep? More like, man, get grep the hell outta here!
curl cht.sh/grep
If I start using this and add grep functionality to my day-to-day life, I can’t complain about not knowing how to invoke grep in good conscience, dawg. I can’t hold my shitposting back like that, dawg!
jk that looks useful. Thanks!
The cheatsheet and tealdeer projects are awesome. It’s one of my (many) favorite things about the user experience honestly. Really grateful for those projects
now listen, you might think gnu tools are offensively inconsistent, and to that I can only say find(1)
find(1)? You better find(1) some other place to be, buster. In this house, we use the file explorer search bar
Hallucinations become almost a non issue when working with newer models, custom inference, multishot prompting and RAG
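For readers who haven’t met the jargon: a minimal sketch of what “multishot prompting and RAG” refer to. The retriever and the model call here are hypothetical stand-ins, not any particular library’s API.

```python
# Sketch of multishot prompting + retrieval-augmented generation (RAG).
# search_docs and call_model are hypothetical stand-ins for whatever
# vector store and inference backend you actually run.

def search_docs(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the k most relevant doc snippets."""
    raise NotImplementedError

def call_model(prompt: str) -> str:
    """Hypothetical call into a local LLM runtime."""
    raise NotImplementedError

FEW_SHOT_EXAMPLES = [
    ("Q: Which grep flag ignores case?", "A: -i"),
    ("Q: Which grep flag searches recursively?", "A: -r"),
]

def answer(question: str) -> str:
    # "multishot": prepend worked examples so the model imitates their shape
    shots = "\n".join(f"{q}\n{a}" for q, a in FEW_SHOT_EXAMPLES)
    # "RAG": retrieve real text and ask the model to stay inside it
    context = "\n".join(search_docs(question))
    prompt = (
        f"{shots}\n\n"
        f"Context:\n{context}\n\n"
        f"Q: {question}\nA (answer only from the context above):"
    )
    return call_model(prompt)
```

Note the retrieved context constrains the model but guarantees nothing; it can still paraphrase the context wrongly, which is the objection raised in the replies below.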
But the models themselves fundamentally can’t write good, new code, even if they’re perfectly factual
The promptfarmers can push the hallucination rates incrementally lower by spending 10x compute on training (and training on 10x the data and spending 10x on runtime cost) but they’re already consuming a plurality of all VC funding so they can’t 10x many more times without going bust entirely. And they aren’t going to get them down to 0%, hallucinations are intrinsic to how LLMs operate, no patch with run-time inference or multiple tries or RAG will eliminate that.
And as for newer models… o3 actually had a higher hallucination rate because trying to squeeze rational logic out of the models with fine-tuning just breaks them in a different direction.
I will acknowledge that in domains with analytically verifiable answers you can check the LLMs that way, but in that case it’s no longer primarily an LLM; you’ve got an entire expert system or proof assistant or whatever that can operate independently of the LLM, and the LLM is just providing creative input.
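A concrete shape of that “check it analytically” setup, as a rough sketch: the model call is a hypothetical stand-in, and the verification is ordinary non-LLM machinery (here, just running tests).

```python
import subprocess
import tempfile

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM backend proposes a candidate."""
    raise NotImplementedError

def passes_checks(candidate: str, test_code: str) -> bool:
    """The verifier is plain tooling, not an LLM: write code + tests to a file and run it."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate + "\n\n" + test_code)
        path = f.name
    return subprocess.run(["python", path], capture_output=True).returncode == 0

def generate_and_verify(task: str, test_code: str, tries: int = 5) -> str | None:
    # The LLM only proposes; acceptance is decided by the independent checker.
    for _ in range(tries):
        candidate = call_model(f"Write a Python function for: {task}")
        if passes_checks(candidate, test_code):
            return candidate
    return None
```

The tests (or type checker, or proof assistant) carry the correctness here; the model is only a proposal generator, which is the point being made above.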
We should maximise hallucinations, actually. That is, we should hack the environmental controls of the data centers to be conducive for fungi growth, and flood them with magic mushrooms spores. We can probably get the rats on board by selling it as a different version of nuking the data centers.
What if [tokes joint] hallucinations are actually, like, proof the models are almost at human level man!
Sadly I have seen people make that exact point
stopping this bit here because I don’t want to continue writing a JRE episode
@swlabr @scruiser Java Runtime Environment?
no the worse one
O3 is trash, same with closedAI
I’ve had the most success with Dolphin3-Mistral 24B (open model finetuned on open data) and Qwen series
Also lower model temperature if you’re getting hallucinations
For some reason everyone is still living in 2023 when AI is remotely mentioned. There is a LOT you can criticize LLMs for; some bullshit you regurgitate without actually understanding isn’t one of them.
You also don’t need 10x the resources, where tf did you even hallucinate that from?
this user has been escorted off the premises via the fourth floor window
GPT-1 is 117 million parameters, GPT-2 is 1.5 billion parameters, GPT-3 is 175 billion, GPT-4 is undisclosed but estimated at 1.7 trillion. Tokens needed for training and training compute scale linearly (edit: actually I’m wrong, looking at the wikipedia page… so I was wrong, it is even worse for your case than I was saying, training compute scales quadratically with model size, it is going up 2 OOM for every 10x of parameters) with model size. They are improving… but only getting a linear improvement in training loss for a geometric increase in model size and training time. A hypothetical GPT-5 would have 10 trillion parameters and would genuinely need to be AGI to have the remotest hope of paying off its training. And it would need more quality tokens than they have left; they’ve already scraped the internet (including many copyrighted sources and sources that requested not to be scraped). So that’s exactly why OpenAI has been screwing around with fine-tuning setups with illegible naming schemes instead of just releasing a GPT-5. But fine-tuning can only shift what you’re getting within distribution, so it trades off in getting more hallucinations or overly obsequious output or whatever the latest problem they are having.
Lower model temperature makes it pick its best guess for the next token as opposed to randomizing among probable guesses; it doesn’t improve on what the best guess is, and you can still get hallucinations even picking the “best” next token.
And lol at you trying to reverse the accusation against LLMs by accusing me of regurgitating/hallucinating.
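To make the temperature point above concrete, here is a toy sketch with made-up logits and a standard softmax; no particular framework is assumed.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution toward the top logit;
    # as temperature -> 0 this approaches greedy "always pick the best guess".
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up next-token candidates and logits, purely for illustration.
tokens = ["grep", "sed", "awk", "ripgrep"]
logits = [2.1, 1.3, 0.9, 2.0]

for t in (1.0, 0.2):
    probs = softmax_with_temperature(logits, t)
    pick = random.choices(tokens, weights=probs, k=1)[0]
    print(f"temperature={t}: probs={[round(p, 2) for p in probs]} sampled={pick}")
```

The ranking never changes; temperature only changes how often something other than the top-ranked token gets sampled, so it cannot repair a top guess that is itself wrong.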
Small-scale models, like Mistral Small or the Qwen series, are achieving SOTA performance with fewer than 50 billion parameters. QwQ32 could already rival shitGPT with 32 billion parameters, and the new Qwen3 and Gemma (from google) are almost black magic.
Gemma 4B is more comprehensible than GPT4o, the performance race is fucking insane.
ClosedAI is 90% hype. Their models are benchmark princesses, but they need huuuuuuge active parameter sizes to effectively reach their numbers.
Everything said in this post is independently verifiable by taking 5 minutes to search shit up, and yet you couldn’t even bother to do that.
this isn’t the place to decide which seed generator you want for your autoplag runtime
My most honest goal is to educate people, which on lemmy is always met with hate. People love to hate, parroting the same old nonsense that someone else taught them.
If you insist on ignorance then be ignorant in peace, don’t try such misguided attempts at sneer
There are things in which LLMs suck. And there are things that you wrongly believe as part of this bullshit twitter civil war.
oh and I suppose you can back that up with verifiable facts, yes?
and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit? you are the saviour that can help enlighten us poor unenlightened mortals?
sounds very hard. managing your calendar must be quite a skill
Hallucination rates have been going down steadily and model quality up, same with multishot prompts and RAG reducing hallucination rates. These are proven scientific facts, what the fuck are you on about? Open huggingface RIGHT NOW, go to the papers section, FUCKING READ.
I’ve spent 6+ years of my life in compsci academia to come here and be lectured by McDonald in his fucking basement, what has my life become
If LLM hallucinations ever become a non-issue I doubt I’ll be needing to read a deeply nested buzzword laden lemmy post to first hear about it.
You need to run the model yourself and heavily tune the inference, which is why you haven’t heard of it; most people think using shitGPT is all there is to LLMs. How many people even have the hardware to do so anyway?
I run my own local models with my own inference, which really helps. There are online communities you can join (won’t link bcz Reddit) where you can learn how to do it too, no need to take my word for it
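For the curious, a minimal sketch of the “run it locally, tune the inference” workflow using Hugging Face transformers; the model id is only an example, swap in whatever open model fits your hardware.

```python
# Minimal local-inference sketch with Hugging Face transformers.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-7B-Instruct",  # example open model, not an endorsement
    device_map="auto",                 # spread across whatever GPU/CPU you have
)

out = generator(
    "Explain what `grep -r` does, in one sentence.",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.3,  # lower temperature = closer to greedy decoding
    top_p=0.9,        # nucleus sampling cutoff
)
print(out[0]["generated_text"])
```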
ah yes, the problem with ~~crypto~~ LLMs is all the ~~shitcoins~~ GPTs
did it sting when the crypto bubble popped? is that what made you like this?
@vivendi @V0ldek * hallucinations are a fundamental trait of LLM tech, they’re not going anywhere
God, this cannot be overstated. An LLM’s sole function is to hallucinate. Anything stated beyond that is overselling.
Because it’s an upscaled translation tech maybe?
These views on LLMs are simplistic. As a wise man once said, “check yoself befo yo wreck yoself”, I recommend more education thus
LLM structures are overhyped, but they’re also not that simple
Yes you’re right, it has some keyboard equivalent autocomplete as well.
From what I know from recent articles retracing LLMs in depth, they are indeed best suited for language translation, and that perfectly explains the hallucinations. And I think I’ve read somewhere that this was the originally intended purpose of the tech?
Ah, here, and here more tabloid-ish.
many of the proponents of things in this field will propose/argue $x thing to be massively valuable for $x
thing is, that doesn’t often work out
yes, there’s some value in the tech for translation outcomes. to anyone even mildly online, “so are language teaching apps/sites using this?” is probably a very nearby question. and rightly so!
and then when you go digging into how that’s going in practice, wow fuck damn doesn’t that Glorious AI Future sheen just fall right off…
There are plenty of open issues on open source repos it could open PRs for though?
I’m guessing if it would actually work for that, somebody would have done it by now.
But it probably just does its usual thing of bullshitting something that looks like code, only now you’re wasting the time of maintainers as well, who have to confirm that it is bobbins.
Yea, it’s already a problem for security bugs; LLMs just waste maintainers’ time and make them angry.
They are useless and make more work for programmers, even on Python and JS codebases that they are trained on the most and are the “easiest”.
It’s already doing that, some FOSS projects regularly get weird PRs that at first glance look good, but if you look closer are either total nonsense or riddled with bugs. Especially awful are security-related PRs; those are never made in good faith, that’s usually grifting (throwing AI at the wall trying to cash in as many bounties as possible). The project lead of curl recently announced that anyone who posts a PR that’s obviously AI, or is made with AI, will get banned.
Like, it’s really good as a learning tool as long as you don’t blindly believe everything it says, given you can ask stuff in natural language and it will resolve possible knowledge dependencies for you that you’d otherwise get stuck on in official docs, and since you can ask contextual questions you receive contextual answers (no logical abstraction). But code generation… please don’t.
Fuck you were doing so well in the first half, ahhh,
the poster: “it’s really good as a learning tool”
the poster: “but don’t blindly believe it”
the learner: “how should I know when to believe it?”
the poster: “check everything”
the learner: “so you’re saying I should just read the actual documentation and/or source?”
the poster: “how are you going to ask that anything? how can you fondle something that isn’t a prompt?!”
the learner: “thanks for your time, I think I’m going to find another class”
@froztbyte @Natanox
In that moment, the novice was enlightened
Removed by mod
holy fuck this is so many words to say so little
so congrats I’m upgrading your ban and also pruning you from the thread
on the one hand I feel for other people who’ll maybe read this thread somewhen down the line
on the other, it’s not exactly like I clipped words in my post
also, unless extraneous circumstance, please don’t clip their display of abusive nonsense
I bet they’d just haaaate it to be on the internet, and yet it’s exactly the kind of fingerprint of theirs that people should be able to find
> Nice conversation you had right there in your head
that you recognize none of this is telling. that someone else got it, more so.
> I assume
you could just ask, you know. since you seem so comfortable fondling prompts, not sure why you wouldn’t ask a person. is it because they might tell you to fuck off?
> I’ve taken a closer look…
fuck off with the unrequested advertising. never mind that no-one asked you for how you felt for some fucking piece of shit. oh, you feel happy that the logo is a certain tint of <colour>? bully for you, now fuck off and do something worthwhile
> That makes it a good tool
a tool you say? wow, sure glad you’re going to replace your *spins the wheel* Punctured Car Tyre with *spins the wheel again* Needlenose Pliers!
> think I’m some AI worshipper, fuck no. They’re amoral as fuck
so, you think there’s moral problems, but only sometimes? it’s supes okay to do your version of leveraged exploitation? cool, thanks for letting us know
> those very few truly FOSS ones
oh yeah, right, the “truly FOSS ones”! tell me again how those are trained - who’s funding that compute? are the licenses contextually included in the model definition?
wait, hold on! why are you squealing away like a deflating balloon?! those are actual questions! you’re the one who brought up morals!
> Otherwise you’ll end up in a social corner filled with bitterness
I’ve met people like you at parties. they’re often popular, but they’re never fun. and I always regret it.
> There are technologies that are utter bullshit like NFTs. However (unfortunately?) that isn’t the case for AI
citation. fucking. needed.
Holy shit, get some help. Given how nonsensically off-the-rails you just went you clearly need it.
no shithead, you don’t post this many paragraphs of mid garbage unprompted then call the other person unhinged
Bro, sneerclub and techtakes are for sneering at bad technology and those that worship it, not for engaging in apologia for it (or worse yet, tone policing the sneering). If you don’t like it, you can ask the mods for an exit pass out (if they haven’t generously given you one already).
if you can’t make a good post that’s a you problem. if people end up poking holes in your shit and you suddenly can’t keep your incoherent nonsense together, still a you problem. but:
take your abuser bullshit and fuck right off, thanks
This is a standard Internet phenomenon (I generalize) called a Sneer Club, i.e. people who enjoy getting together and picking on designated targets. Sneer Clubs (I expect) attract people with high Dark Triad characteristics, which is (I suspect) where Asshole Internet Atheists come from - if you get a club together for the purpose of sneering at religious people, it doesn’t matter that God doesn’t actually exist, the club attracts psychologically f’d-up people. Bullies, in a word, people who are powerfully reinforced by getting in what feels like good hits on Designated Targets, in the company of others doing the same and congratulating each other on it.
Removed by mod
Banned from the community for advertising.
Hey, Devin! Really impressive that the product best known for literally lying about all of its functionality in its release video still somehow exists and you can pay it money. Isn’t the free market great.
Devin? Fuck Devin. That slimy motherfucker owes me 10 bucks.
“a fool and their money are soon parted”
fuck off with the unrequested advertising kthx