MIT’s Project NANDA has a new paper: “The GenAI Divide: State of AI in Business 2025.” This got a writeup in Fortune a few days ago about how 95% of generative AI projects in business just fail. Th…
I keep saying it for what it is: “genAI” is just Markov chains…AGAIN. And one of the first things Markov ever applied his chains to was a language model: his analysis of letter sequences in Pushkin’s Eugene Onegin, back in 1913.
Time and again in consumer IT history, people are fooled into thinking tech is doing magic intelligence stuff when it’s just a classic Markov chain: something that was once done on paper, now ripped through 2025 processors.
In no way does a single algorithm type fit the definition of artificial intelligence. It’s just simple mathematics that can now be done incredibly fast.
All it does is mathematically calculate the likelihood of what comes next based on how things occur in the data it’s been given. The prediction it generates is just weighted values, and the quality is entirely dependent on the historical data it’s referencing.
What normally comes after A? According to the data, B does, 76% of the time. Choose B. What comes after B? C, 78% of the time, but S follows AB 98% of the time. Choose S. Do this thousands of times a second aaaaand, bingo: perceived “intelligence”.
That’s literally it.
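If you want to see how little machinery that description needs, here’s a toy sketch in Python of an order-2 chain like the A/B/S example above. The tiny corpus and the names build_chain and generate are mine, purely for illustration, not how any production system is put together:

    import random
    from collections import defaultdict, Counter

    def build_chain(tokens, order=2):
        """Count which token follows each context of `order` tokens."""
        counts = defaultdict(Counter)
        for i in range(len(tokens) - order):
            context = tuple(tokens[i:i + order])
            counts[context][tokens[i + order]] += 1
        return counts

    def generate(counts, seed, length=20):
        """Repeatedly pick the next token, weighted by how often it followed the context."""
        out = list(seed)
        for _ in range(length):
            context = tuple(out[-len(seed):])
            followers = counts.get(context)
            if not followers:
                break  # context never seen in the training text
            tokens, weights = zip(*followers.items())
            out.append(random.choices(tokens, weights=weights)[0])
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ate the rat".split()
    chain = build_chain(corpus, order=2)
    print(generate(chain, seed=("the", "cat")))

Run it a few times and you get different, locally plausible strings from the same weighted counts, which is the whole trick being described.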
Why is genAI so bad at its job? Because you can never get 100% for everything, and the chain can steer down a wrong path based on a single mistake in one of the links. It’s why we call it probability and not fact. But there is no intelligence there to problem-solve itself, just deeper and deeper data validation checks on the linear chain to prevent low-quality routes. Checks done using Markov’s same fundamentals.
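To put rough numbers on “a single mistake in one of the links”: if each link is right some fixed fraction of the time, the chance of a fully error-free chain is that fraction raised to the chain’s length. The 98% figure below is just an assumed example, not a measured property of any model:

    # Assumed per-link accuracy (illustrative only): 98%
    p = 0.98
    for n in (10, 50, 100, 500):
        # Probability that every one of n links in the chain is correct
        print(f"{n} links: {p ** n:.3f}")
    # 10 links: 0.817, 50 links: 0.364, 100 links: 0.133, 500 links: 0.000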
This is precisely what I have been saying for so long. Just because LLMs sound smart doesn’t mean they are. They don’t form world views, or even understand ideas or concepts. They are just glorified statistical parrots that predict the next word through a probability distribution.
Except I would say the quality of output depends on the user, tbh.
If you know the subject, you can use it. If you don’t know the subject, you can use an LLM to learn, but you will need proper documentation to cross-reference what you are learning.
Yep. I find people who understand what’s actually going on in the back end have much more successful results. They know to introduce their own conditions in the prompt that prevent common or expected failures. The chain obviously cannot do this itself, as it is not an AI.
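A concrete picture of “introducing your own conditions in the prompt”: wrap the question in explicit constraints before it ever reaches the model. Everything here is hypothetical and just a sketch of the idea; llm_complete stands in for whatever client actually calls the model, and the constraint wording is only an example:

    # Hypothetical guardrails a knowledgeable user might prepend to every request
    GUARDRAILS = (
        "Answer only from the provided context. "
        "If the context does not contain the answer, reply 'I don't know'. "
        "Quote the sentence you relied on."
    )

    def ask(question, context, llm_complete):
        """llm_complete is a placeholder for whatever function/client sends text to the model."""
        prompt = f"{GUARDRAILS}\n\nContext:\n{context}\n\nQuestion: {question}"
        return llm_complete(prompt)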
The problem isn’t AI but the layman’s assumptions about what AI means.
Expert systems (a bunch of if/else rules) are AI. Chess programs are AI. Optical character recognition is AI. Markov chain programs are AI. LLMs are AI.
LLM AI is useful. It doesn’t need to be a self-aware superhuman intelligence to provide tremendous efficiency gains to businesses by fixing the grammar in inter-office emails.
LLMs have use cases, but calling them “AI” was the grifter move.
These parasites ruined blockchain’s reputation and now they are doing the same with “AI”.
100%
Hey, down in the front! The rest of us paid good money for the magic clown show.
Sadly people don’t know better…
I can’t tell if you are trolling here
I wear my typos as a badge of honor… The recipient knows that the shitpost email they are reading was hand made by a grade A idiot, not an LLM.
The best and most concise explanation I’ve seen. Thank you.