Users point out in the comments that the LLM recommends APT on Fedora, which is clearly wrong. I can’t tell if OP is responding with an LLM as well; it would be really embarrassing if so.
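For context on the error being mocked: Fedora ships DNF, not APT, so an LLM-generated `apt install …` command simply fails there. A minimal sketch of the distinction (the `os_release` string here is a hypothetical stand-in for a real `/etc/os-release` file):

```shell
# Pick the package manager from the ID= field of os-release.
# Hypothetical Fedora value for illustration:
os_release='ID=fedora'

case "$os_release" in
  *ID=fedora*)             pkg="dnf" ;;  # Fedora family uses DNF
  *ID=debian*|*ID=ubuntu*) pkg="apt" ;;  # Debian/Ubuntu use APT
  *)                       pkg="unknown" ;;
esac

echo "use: sudo $pkg install <package>"   # → use: sudo dnf install <package>
```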

PS: Debian is really cool btw :)

  • brucethemoose@lemmy.world · 16 days ago

    gpt-oss 20B

Look at all the errors in that rambling wall of slop (which they apparently posted without even checking).

    Trying to use a local LLM… could be worse. But in my experience, small ones are just too dumb for stuff beyond fully automated RAG or other really focused cases. They feel like fragile toys until you get to 32B dense or ~120B MoE.

    Doubly so behind buggy, possibly vibe coded abstractions.

The other part is that Goose is probably using a primitive CPU-only llama.cpp quantization. I see they name-check “Ryzen AI” a couple of times, but it can’t even use the NPU! There’s nothing “AI” about it, and the author probably has no idea.
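For the curious, the gap being pointed at looks roughly like this. A hedged sketch: `llama-cli` and `-ngl` are real llama.cpp options, but the model filename is a hypothetical example, and note that neither invocation touches the Ryzen AI NPU at all:

```shell
# Default run: inference stays entirely on the CPU.
llama-cli -m gpt-oss-20b-Q4_K_M.gguf -p "Hello"

# A Vulkan/ROCm build can offload layers to the iGPU with -ngl,
# but the "Ryzen AI" NPU sits idle either way.
llama-cli -m gpt-oss-20b-Q4_K_M.gguf -ngl 99 -p "Hello"
```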

I’m an unapologetic local LLM advocate in the same way I’d recommend Lemmy/Piefed over Reddit, but honestly, it’s just not ready. People want these one-click agents on their laptops, and (unless you’re an enthusiast/tinkerer) the software simply isn’t there yet, no matter how much AMD and such try to gaslight people into thinking it is.

    Maybe if they spent 1/10th of their AI marketing budget on helping open source projects, it would be…