I do not believe that LLMs are intelligent. That being said, I have no fundamental understanding of how they work. I hear and often regurgitate things like “language prediction,” but I want a more specific grasp of what’s going on.

I’ve read great articles/posts about the environmental impact of LLMs, their dire economic situation, and their dumbing-down effects on people/companies/products. But the articles I’ve read that ask questions like “can AI think?” basically just go “well, it’s just language, and language isn’t the same as thinking, so no.” I haven’t been satisfied with this argument.

I guess I’m looking for something that examines that type of “LLMs are just language” assertion with a critical lens. (I am not looking for a comprehensive lesson on the technical side of LLMs because I am not knowledgeable enough for that; some Goldilocks zone would be great.) If you guys have any resources you would recommend, pls lmk, thanks.

  • humanspiral@lemmy.ca · 2 days ago

    There is no magic conclusion to be drawn from “they are just token prediction machines.” Reasoning and search agents, among other tools, can improve the odds that the finally predicted tokens form the right answer. There are also other ML techniques and hardware innovations that make them faster, letting them “think longer” before giving an answer.
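    To make “token prediction” a little more concrete, here is a minimal toy sketch in Python. Everything in it is made up for illustration: a real LLM replaces the hand-written score table with a neural network scoring every token in its vocabulary, but the autoregressive loop (predict a token, append it, predict again) has the same shape.

    ```python
    # Toy "token prediction machine". The bigram score table is a hypothetical
    # stand-in for a neural network; same loop as an LLM, in miniature.
    import random

    # For each token, scores for what tends to follow it.
    BIGRAM_SCORES = {
        "the": {"cat": 3.0, "dog": 2.0},
        "cat": {"sat": 4.0, "ran": 1.0},
        "dog": {"ran": 3.0, "sat": 1.0},
        "sat": {"<end>": 1.0},
        "ran": {"<end>": 1.0},
    }

    def predict_next(token: str) -> str:
        """Sample the next token in proportion to its score."""
        options = BIGRAM_SCORES[token]
        return random.choices(list(options), weights=list(options.values()))[0]

    def generate(start: str, max_len: int = 10) -> list[str]:
        """Autoregressive loop: each predicted token becomes the next context."""
        out = [start]
        while out[-1] != "<end>" and len(out) < max_len:
            out.append(predict_next(out[-1]))
        return out

    print(generate("the"))  # e.g. ['the', 'cat', 'sat', '<end>']
    ```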

    These tools are likely to keep improving their “correct answer rate” without ever achieving a zero “dumb error” rate.
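    And one hedged sketch of how “thinking longer” can raise the correct answer rate without ever making errors impossible: sample the same question several times and keep the majority answer (often called self-consistency). The simulated model here, right 70% of the time, is a hypothetical stand-in, not a real API.

    ```python
    # Majority-vote ("self-consistency") sketch: spend n model calls instead of
    # one, then return the most common answer.
    import random
    from collections import Counter

    def simulated_answer(question: str) -> str:
        # Hypothetical stand-in model: right 70% of the time, wrong otherwise.
        return "42" if random.random() < 0.7 else "41"

    def majority_vote(question: str, n: int = 9) -> str:
        votes = Counter(simulated_answer(question) for _ in range(n))
        return votes.most_common(1)[0][0]

    # A 70%-accurate sampler voted 9 times is right roughly 90% of the time;
    # the error rate keeps shrinking as n grows but never hits exactly zero.
    print(majority_vote("what is 6 * 7?"))
    ```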