Lena@gregtech.eu to Programmer Humor@programming.dev · English · 3 days ago
"Source code file" (gregtech.eu) [image]
cross-posted to: [email protected]
hperrin@lemmy.ca · 3 days ago
Hey Grok, take this one file out of the context of my 250,000 line project and give me that delicious AI slop!
hperrin@lemmy.ca · 3 days ago
Just really fuck up this shit. I want it unrecognizable!
Zetta@mander.xyz · edited 2 days ago
Perfect, Grok's context limit is 256,000 tokens, and as we all know, LLM recall only gets better as the context fills, so you'll get perfect slop that works amazingly. /s
More info on the quality drop as context grows: https://github.com/NVIDIA/RULER
hperrin@lemmy.ca · 2 days ago
250,000 lines is way more than 250,000 tokens, so even that context is too small.
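The arithmetic behind that comment can be sketched quickly. A minimal back-of-envelope estimate, assuming a hypothetical average of ~7 tokens per line of source code (the real ratio varies by language and tokenizer):

```python
# Back-of-envelope check: does a 250,000-line project fit in a
# 256,000-token context window?
# AVG_TOKENS_PER_LINE is an assumed figure, not a measurement.

LINES = 250_000
CONTEXT_WINDOW = 256_000       # advertised context limit in tokens
AVG_TOKENS_PER_LINE = 7        # assumed average for typical source code

estimated_tokens = LINES * AVG_TOKENS_PER_LINE
overflow = estimated_tokens / CONTEXT_WINDOW

print(f"~{estimated_tokens:,} tokens vs {CONTEXT_WINDOW:,}-token window")
print(f"overflow factor: {overflow:.1f}x")
```

Even at one token per line the project would barely squeak under the limit, and real code averages several tokens per line, so the window overflows by a wide margin.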