[I literally had this thought in the shower this morning so please don’t gatekeep me lol.]
If AI were something everyone wanted or needed, it wouldn’t be constantly shoved in your face by every product. People would just use it.
Imagine if printers were new and every piece of software was like “Hey, I can put this on paper for you” every time you typed a word. That would be insane. Printing is a need, and when you need to print, you just print.


I’ve been wondering about a similar thing recently - if AI is this big, life-changing thing, why were there so few rumblings among tech-savvy people before it became “mainstream”? Sure, machine learning was somewhat talked about, but very little of it seemed to relate to LLM-style machine learning. With basically every other innovation in technology, the nerds tended to have it years before everyone else, so why was it so different with AI?
The scale is different. Before “AI” went mainstream, people in machine learning were very excited about things like word2vec and reinforcement learning. And it was known that larger neural networks would bring improvements, but I’m not sure anyone knew for certain how well ChatGPT would work. Given the costs of training and inference for LLMs, I doubt you’d see nerds doing it on their own. Also, the big tech firms weren’t what they are now. Not the current behemoths, anyway.
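For context, a toy example of what the pre-LLM excitement looked like in practice - a minimal word2vec sketch, assuming gensim 4.x is installed. The corpus and the queried word are made-up placeholders, not anything from a real experiment.

```python
# Minimal word2vec sketch (assumes gensim >= 4.0 is installed).
# The toy corpus and the queried word below are illustrative placeholders.
from gensim.models import Word2Vec

corpus = [
    ["the", "model", "learns", "word", "vectors"],
    ["similar", "words", "get", "similar", "vectors"],
    ["word", "vectors", "capture", "meaning"],
]

model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=100)
print(model.wv.most_similar("word", topn=3))  # nearest neighbours in vector space
```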
Because AI is a solution to a problem individuals don’t have. Over the last 20 years we have collected and compiled an absurd amount of data on everyone. So much that the biggest problem is how to make that data useful by analyzing and searching it. AI is the tool that completes the other half of data collection: analysis. It was never meant for normal people, and it’s not being funded by average people either.
Sam Altman is also a fucking idiot yes-man who could talk himself into literally any position. If this were meant to help society, the AI products wouldn’t be assisting people with killing themselves so that they can collect data on suicide.
Realistically? Computational power.
The more number-crunching units and memory you throw at the problem, the easier it is and the more useful the final model is. The math and theoretical computer science behind LLMs have been known for decades; it’s just that the resource investment required to make something even mediocre was too much for any business type to be willing to sign off on. My fellow nerds and I had the technology and largely dismissed it as worthless, or a set of pipe dreams.
But then number-crunching units and memory became cheap enough that a couple of investors were willing to take the risk, and you get the first ChatGPT. It talks close enough to a human that it catches business types’ attention as a revolutionary new thing, and without the technical background to know they were getting lied to, the venture capital machine cranks out the shit show we have today.
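To make the “more compute, better model” point a bit more concrete, here’s a toy sketch of the power-law relationship the scaling-law literature reports between training compute and loss. Every constant in it is an illustrative assumption I picked for the example, not a measured value.

```python
# Toy illustration of compute scaling: loss falls roughly as a power law in
# training compute, loss ~ irreducible + (C_ref / C) ** alpha.
# All constants are made-up placeholders for illustration, not real measurements.

def approx_loss(compute: float, c_ref: float = 1.0,
                alpha: float = 0.05, irreducible: float = 1.7) -> float:
    """Rough loss estimate as a function of training compute (arbitrary units)."""
    return irreducible + (c_ref / compute) ** alpha

for c in (1, 1_000, 1_000_000):
    print(f"compute {c:>9} -> loss ~ {approx_loss(c):.3f}")
```

The point of the sketch is just that each step down in loss costs orders of magnitude more compute, which is why nobody was doing this in a garage before the hardware and the investor money showed up.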
Also, I’ve never seen an actual tech-savvy nerd who supports its implementation, especially in these draconian ways.