I do not believe that LLMs are intelligent. That being said, I have no fundamental understanding of how they work. I hear and often regurgitate things like “language prediction,” but I want a more specific grasp of what’s going on.
I’ve read great articles/posts about the environmental impact of LLMs, their dire economic situation, and their dumbing-down effects on people/companies/products. But the articles I’ve read that ask questions like “can AI think?” basically just go “well, it’s just language, and language isn’t the same as thinking, so no.” I haven’t been satisfied with this argument.
I guess I’m looking for something that dives deeper into that kind of “LLMs are just language” assertion with a critical lens. (I am not looking for a comprehensive lesson on the technical side of LLMs because I am not knowledgeable enough for that; some Goldilocks zone would be great.) If you guys have any resources you would recommend, pls lmk. Thanks!


LLMs are not even language. They’re just functions created from data and statistics. In this case the data is “writing,” but that doesn’t really matter, since it’s all stored and computed as numbers. It’s a similar process for functions that generate images, etc. There’s no evidence that, or reason why, “intelligence” would manifest from completely straightforward computations. So the grift that a function is somehow “intelligent” is completely detached from humdrum computational reality.
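
To make that concrete, here’s a toy Python sketch of what “a function created from data and statistics” can mean. It’s a made-up bigram counter over an invented twelve-word corpus, nothing like a real transformer: words get mapped to numbers, we count what follows what, and out comes a function that maps a token to a probability distribution over the next token. A real LLM swaps the counting for a huge neural network trained on vastly more text, but the input/output shape is the same kind of thing: numbers in, next-token probabilities out.

```python
# Minimal sketch (illustration only, not any real LLM): build a
# next-token predictor purely from counts over a tiny toy corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Map each word to an integer ID -- everything is stored as numbers.
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# Count how often token b follows token a (a bigram model).
follow_counts = defaultdict(Counter)
for a, b in zip(ids, ids[1:]):
    follow_counts[a][b] += 1

def next_token_probs(token_id):
    """Return {token_id: probability} for the token that comes next."""
    counts = follow_counts[token_id]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Example: what tends to follow "the" in this toy corpus?
inv_vocab = {i: w for w, i in vocab.items()}
probs = next_token_probs(vocab["the"])
print({inv_vocab[t]: round(p, 2) for t, p in probs.items()})
# -> {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```

Generating text is then just sampling from that distribution, appending the result, and repeating. Whether you want to call that, scaled up by many orders of magnitude, “intelligence” is exactly the question the original post is asking about.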