• sunzu2@thebrainbin.org · 1 day ago

    LLMs have use cases, but calling them “AI” was the grifter move.

    These parasites ruined blockchain’s reputation, and now they’re doing the same with “AI”.

    • saltesc@lemmy.world · 23 hours ago

      I keep calling it what it is: “genAI” is just Markov chains…AGAIN. And one of the earliest things Markov ever did with his chains was a language model, his 1913 analysis of letter sequences in Pushkin’s Eugene Onegin.

      Time and again in household IT history, people have been fooled into thinking tech is doing magic intelligence stuff when it’s just a classic Markov chain: something that was once done on paper, now ripped through 2025 processors.

      In no way does a single algorithm type fit the definition of artificial intelligence. It’s just simple mathematics that can now be done incredibly fast.

      All it does is mathematically calculate the likelihood of what comes next based on how things occur in the data it’s been given. Its generation is just prediction over weighted values, and the quality is entirely dependent on the historical data it’s referencing.

      What normally comes after A? According to the data, B does, 76% of the time. Choose B. What comes after B? C, 78% of the time, but S follows AB 98% of the time. Choose S. Be able to do this thousands of times a second aaaaand, bingo: perceived “intelligence”.

      That’s literally it.
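
      Rough toy version of that idea in Python (word-level chain on a made-up corpus; just an illustration of the mechanism, not how any particular product is built):

      ```python
      import random
      from collections import defaultdict, Counter

      # Count which word follows which in the training text.
      def train(text):
          counts = defaultdict(Counter)
          words = text.split()
          for cur, nxt in zip(words, words[1:]):
              counts[cur][nxt] += 1
          return counts

      # Repeatedly pick the next word, weighted by how often it followed
      # the current word in the data. That's the whole "generation" step.
      def generate(counts, start, length=10):
          out = [start]
          for _ in range(length):
              followers = counts.get(out[-1])
              if not followers:
                  break  # dead end: nothing ever followed this word
              choices, freqs = zip(*followers.items())
              out.append(random.choices(choices, weights=freqs)[0])
          return " ".join(out)

      model = train("the cat sat on the mat and the cat ate the fish")
      print(generate(model, "the"))
      ```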

      Why is genAI so bad at its job? Because you can never get 100% for everything, and the chain can steer down a wrong path based on a single mistake in one of the links. It’s why we call it probability and not fact. But there is no intelligence there to problem-solve itself, just deeper and deeper data validation checks on the linear chain to prevent low-quality routes. Checks done using Markov’s same fundamentals.
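
      Back-of-the-envelope version of that compounding problem (numbers made up for illustration):

      ```python
      # If each link in the chain is right 98% of the time,
      # a 50-link chain is only right about 36% of the time overall,
      # because the per-step probabilities multiply.
      p_per_step = 0.98
      print(p_per_step ** 50)  # ~0.364
      ```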

      • fckreddit@lemmy.ml · 20 hours ago

        This is precisely what I have been saying for so long. Just because LLMs sound smart doesn’t mean they are. They don’t form world views, or even understand ideas or concepts. They are just glorified statistical parrots that predict the next word from a probability distribution.

      • sunzu2@thebrainbin.org · 23 hours ago

        100%

        Except I would say the quality of output depends on the user, tbh.

        If you know the subject, you can use it. If you don’t know the subject, you can use an LLM to learn, but you will need proper documentation to cross-reference what you are learning.

        • saltesc@lemmy.world · 23 hours ago

          Yep. I find people who understand what’s actually going on in the back end get much more successful results. They know to introduce their own conditions in the prompt that prevent common or expected failures. The chain obviously can’t do this itself, as it is not an AI.

    • Blue_Morpho@lemmy.world · 23 hours ago

      The problem isn’t AI but laypeople’s assumptions about what AI means.

      Expert systems (a bunch of if/else rules) are AI. Chess programs are AI. Optical Character Recognition is AI. Markov chain programs are AI. LLMs are AI.
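
      For example, something as dumb as this still meets the textbook definition of an expert system, i.e. AI (rules made up purely for illustration):

      ```python
      # Toy rule-based "expert system": hard-coded if/else knowledge.
      def triage(temp_c, has_rash):
          if temp_c > 39.5:
              return "see a doctor today"
          elif temp_c > 38.0 and has_rash:
              return "call a nurse line"
          elif temp_c > 38.0:
              return "rest and monitor"
          return "no action needed"

      print(triage(38.6, has_rash=True))  # -> "call a nurse line"
      ```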

      LLM AI is useful. It doesn’t need to be a self-aware, superhuman intelligence to provide tremendous efficiency gains to business by fixing grammar in inter-office emails.

      • sunzu2@thebrainbin.org · 23 hours ago

        I can’t tell if you are trolling here

        “provide tremendous efficiency gains to business by fixing grammar in inter-office emails”

        I wear my typos as a badge of honor… The recipient knows that the shitpost email they are reading was handmade by a grade-A idiot, not an LLM.

  • jaybone@lemmy.zip · 19 hours ago

    Somewhat OT, but I don’t quite get web3. The idea of decentralization sounds good, if we could get content back out of these select few walled gardens like fb and ig and such. But then they throw in all this blockchain and crypto bullshit.

    • squaresinger@lemmy.world · 17 hours ago

      The point of web3 is to take a bunch of unrelated tech and brand it as “web3”, even though it has nothing to do with the web, just to attach it to some very popular, well-known branding that isn’t controlled by any single organization.

      It’s free and misleading marketing by slimy and untrustworthy people.