I do not believe that LLMs are intelligent. That said, I have no fundamental understanding of how they work. I hear and often repeat phrases like "language prediction," but I want a more specific grasp of what's going on.

I've read great articles/posts about the environmental impact of LLMs, their dire economic situation, and their dumbing-down effects on people/companies/products. But the articles I've read that ask questions like "can AI think?" basically just go "well, it's just language, and language isn't the same as thinking, so no." I haven't been satisfied with this argument.

I guess I'm looking for something that dives deeper into that type of assertion that "LLMs are just language" with a critical lens. (I am not looking for a comprehensive lesson on the technical side of LLMs because I am not knowledgeable enough for that; some Goldilocks zone would be great.) If you have any resources you'd recommend, please let me know. Thanks!

  • Hello_there@fedia.io
    3 days ago

    CGP Grey has a video about algorithms that is very accessible and gives a good visual sense of how they work. It's kind of stuck in my brain and I think back to it every now and then. I think the same explanation applies to LLMs.

    To sum up: they are good at making associations in data, but nobody knows exactly which linkages are made or how/why the model arrives at its final results. That's why you get outcomes like ChatGPT making statements with racist undertones: if there is racism in the training data (e.g., racist internet comments), those associations carry through to the final result.

    https://youtu.be/R9OHn5ZF4Uo
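    To make the "associations in data" idea concrete, here's a toy next-word predictor (a bigram counter, which I'm using as a stand-in; real LLMs are vastly more complex neural networks). The point it illustrates is the one above: the model has no understanding, it just echoes whatever statistical associations were in its training text, good or bad.

    ```python
    from collections import Counter, defaultdict

    # Toy "language model": for each word, count which words follow it in
    # the training text, then predict the most frequent follower.
    # This is a simplified stand-in, NOT how real LLMs work internally,
    # but it shows the core idea: output is just a reflection of
    # associations present in the training data.

    def train(text):
        words = text.lower().split()
        followers = defaultdict(Counter)
        for a, b in zip(words, words[1:]):
            followers[a][b] += 1
        return followers

    def predict_next(model, word):
        # Return the most common word seen after `word`, or None if unseen.
        if word not in model:
            return None
        return model[word].most_common(1)[0][0]

    corpus = "the cat sat on the mat and the cat ate the fish"
    model = train(corpus)
    print(predict_next(model, "the"))  # "cat" follows "the" most often
    ```

    Swap in a biased corpus and the "predictions" become biased too, with no step anywhere that checks whether the association is true or fair. Scaling this idea up (with neural networks instead of simple counts) is, very roughly, the "language prediction" people refer to.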