I really can’t understand this LLM hype (note: I think models used for finding cures to diseases and for other sciences are a good thing. I’m referring to the general-populace LLM hype).

It’s not interesting. To me, computers were so cool and interesting because of what you can do yourself, with just the hardware and learning to code. It’s awesome. What I don’t find interesting in any way is typing a prompt. “But bro, prompt engineering!” — that is about the stupidest fucking thing I’ve ever heard.

How anyone thinks it’s anything beyond a parlor trick baffles me. Plus, you’re literally just playing with a toy made by billionaires to fuck the planet and the rest of us over even more.

And yes, to a point I realize “coding” is similar to “prompting” the computer’s hardware…if that was even an argument someone would try to make. I think we can agree it’s nowhere near the same thing.

I would like to see if there is a correlation between TikTok addicts and LLM believers. I’d guarantee it’s very high.

  • ansiz@lemmy.world · 17 points · 8 days ago

    Some of the LLM deployments are just stupid as well. Take the way AWS is using Claude as part of the Q CLI offering. You would assume such a model would, at a bare minimum, be able to reference AWS’ own public documentation and knowledge-base data. But no — it doesn’t even have the ability to read AWS’ public website unless you copy and paste the text into your chat session. That’s just so fucking stupid I can’t understand it.

    As a result, it’s all too common for the model to just make shit up about how an AWS service functions, and if you ask it “how do you know that?”, it will admit that it actually doesn’t know and just made it up.

    The only thing I’ve found it useful for is very limited, basic Python scripts, but even then you have to be careful, since it’s not very good at that either.