• wischi@programming.dev · 3 days ago

    Current LLMs are definitely not intelligent, but predicting the future is a big part (if not the most important part) of intelligence.

    Your comment is a bit like saying that humans can’t be intelligent, because the biochemistry in our brains is just laws of physics in motion, and the laws of physics are not intelligent.

    Intelligence is an emergent property. You can definitely be intelligent even if every individual component is not.

    But with LLMs we’ve found a weird new “dimension”: something can be very knowledgeable without being intelligent. Even current LLMs have more general knowledge than any single human, yet they lack actual intelligence.

    • angband@lemmy.world · 2 days ago (edited)

      it doesn’t predict; it follows a weighted graph, or the equivalent. it doesn’t guess; /dev/urandom input just makes the path unpredictable. any case where it looks like it predicts or guesses is purely accidental, and entirely in the eye of the observer.

      further, it only possesses knowledge to the degree that an encyclopedia does. the prompt is just the equivalent of a hash key pulling a bucket out of a map.

      it is literally just a huge database of key-value pairs stored so as to minimize the description length of the values.
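
      to make that concrete, here’s a toy sketch of the claim (the table and the lookup function are invented purely for illustration; this is not how an actual llm stores anything):

      ```python
      import random

      # toy version of the claim: the prompt acts like a hash key,
      # the "model" is a key-value map, and the /dev/urandom input
      # just picks among weighted continuations in the bucket.
      TABLE = {
          "the sky is": [("blue", 0.8), ("falling", 0.2)],
          "2 + 2 =": [("4", 0.9), ("5", 0.1)],
      }

      def lookup(prompt: str) -> str:
          bucket = TABLE.get(prompt.strip().lower(), [("<unknown>", 1.0)])
          words = [w for w, _ in bucket]
          weights = [p for _, p in bucket]
          return random.choices(words, weights=weights)[0]  # unpredictable path

      print(lookup("The sky is"))  # usually "blue", sometimes "falling"
      ```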

      • wischi@programming.dev · 2 days ago

        The training process evolves models to make predictions. The actual underlying mechanism isn’t all that relevant, because the prediction function is an emergent property.
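
        A minimal sketch of what “evolved to predict” means in training terms (all the numbers here are invented for illustration): the standard objective rewards the model for assigning high probability to the token that actually came next.

        ```python
        import math

        # Cross-entropy next-token loss: low when the model assigned
        # high probability to the token that really followed.
        def cross_entropy(probs: dict[str, float], actual_next: str) -> float:
            return -math.log(probs[actual_next])

        before = {"blue": 0.2, "falling": 0.8}  # early in training
        after = {"blue": 0.9, "falling": 0.1}   # after many gradient steps

        print(cross_entropy(before, "blue"))  # ~1.61 (poor prediction)
        print(cross_entropy(after, "blue"))   # ~0.11 (better prediction)
        ```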

        Your brain is just biochemistry, and biochemistry isn’t intelligent, yet you are. Think of the number three and everything you know about it. There is not a single neuron in your brain that has any idea what the concept of three even means. It’s an emergent behavior.

        • angband@lemmy.world · 2 days ago

          there’s no emergent behavior in llms. your perception that there is, is an anthropomorphism, same as with the idea of prediction. statistically “predicting” the next word based on the frequency of the input data isn’t an emergent property; it exists as a static feature of the algorithm from the start. at a certain level of complexity, llms appear to produce comprehensible text, provided you stop them in time. that’s merely because of the rules of the algorithm. the illusion of intelligence comes merely from being able to select “merged buckets” from the map, which are put together mathematically.

          it is a one-trick pony that will never become anything else.
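
          for concreteness, a literal toy version of “statistically predicting the next word based on the frequency of input data” (the corpus here is made up; real llms are trained very differently, this only sketches the claim):

          ```python
          import random
          from collections import Counter, defaultdict

          # count a static bigram table once from a corpus, then freeze it
          corpus = "the cat sat on the mat the cat ate the rat".split()

          table: dict[str, Counter] = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              table[prev][nxt] += 1

          def next_word(word: str) -> str:
              counts = table[word]
              return random.choices(list(counts), weights=list(counts.values()))[0]

          print(next_word("the"))  # "cat", "mat", or "rat", by raw frequency
          ```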

      • mirshafie@europe.pub · 2 days ago (edited)

        Excuse me, but what do you think memory is, other than a huge database of key-value pairs?

      • oyo@lemmy.zip · 3 days ago

        The dictionary is certainly more knowledgeable about words than you.

        • queermunist she/her@lemmy.ml · 3 days ago

          Let’s abstract it further. Rip every page out of the dictionary and put it through a shredder. All the knowledge is still there: the paper hasn’t been destroyed, and the knowledge can be accessed by someone patient enough; it’s just not in a form that can be easily read.

          But is that pile of shredded paper knowledgeable?

          • wischi@programming.dev · 3 days ago

            I don’t get your analogy. Put your brain through a shredder. Is it still intelligent? All the atoms are still there.

            • queermunist she/her@lemmy.ml · 3 days ago (edited)

              Exactly? Both intelligence and knowledgeability are emergent. You can’t just put all the knowledge in one place and then call that place knowledgeable (or intelligent, for that matter). A book (or a chatbot) isn’t knowledgeable; it merely contains knowledge.

              • wischi@programming.dev · 3 days ago

                I’m not a native speaker, but that sounds like semantics to me. How would you, when chatting, tell whether the other end is “knowledgeable” or merely “contains knowledge”?

                • queermunist she/her@lemmy.ml · 2 days ago (edited)

                  The distinction is important because an intelligent being can actually be trusted to do the things they are trained to do. LLMs can’t. The “hallucination” problem comes from these things just being probability engines: they don’t actually know what they’re saying. B follows A; they don’t know why, they don’t care, and they can’t care. That’s why LLMs are not actually able to replace workers; at best they’re productivity software that can (maybe, I’m not convinced) make human workers more productive.

                  One distinction is that it requires work to actually get real, useful knowledge out of these things. You can’t just prompt it and then always expect the answer to be correct or useful; you have to double-check everything, because it might just make shit up.

                  The knowledgeability, the intelligence, still comes from the human user.

                    • wischi@programming.dev · 2 days ago (edited)

                    To be fair, all of what you’ve said applies to humans too. Look how many flat-earthers there are, and even more people who believe in homeopathy, think that vaccines cause autism, or think that aliens built the pyramids.

                    But nobody calls that “hallucinations” in humans. Are LLMs perfect? Definitely not. Are they useful? Somewhat, but definitely very far from the “PhD-level intelligence” some claim.

                    But there are things LLMs are already way better at than any single human (not humans collectively). For example: giving you a hint (it doesn’t have to be 100% accurate) about what topics to look up when you can only describe something vaguely and don’t know what you would even search for in a traditional search engine.

                    Of course you cannot trust it blindly, but you shouldn’t trust humans blindly either. That’s why we have the scientific method: humans are unreliable too.

          • oyo@lemmy.zip · 3 days ago

            Are you trying to say that the word ‘knowledgeable’ has some implication of intelligence? Because, depending on context, yes it can. Or are you trying to say that LLMs take a lot of time and/or energy to reassemble their shredded data? To answer your question, yes, the pile of shredded paper contains knowledge, and its accessibility is irrelevant to the conversation.

              • Jinarched@lemmy.ca · 2 days ago

                Your exchange makes me think of the Chinese room thought experiment.

                The person inside the room has instructions and a dictionary they use to translate Chinese symbols into English words. They never leave the room and never interact with anyone. They just translate single words.

                They don’t understand Chinese, but the output of the system (the room) gives the impression that there is thinking behind the process. If I remember correctly, it was an argument against the Turing test: the claim was that computers could become extremely good at constructing answers that seem to be backed by human consciousness/thinking.

                • queermunist she/her@lemmy.ml · 2 days ago

                  Right, so the parking lot covered with shredded dictionaries needs a human mind, or else it’s just a bunch of trash.

                  The human inside the Chinese room, or in the parking lot picking up and organizing the trash, or in a discussion with a chatbot, is still critical to the overall intelligence/knowledgeability of the system. The human is still needed for that spark, and without it, it’s just trash.

                • wischi@programming.dev · 2 days ago

                  I think you are right. IMHO the room actually does speak/understand Chinese, even if the robot/human in the room does not.

                  There are no neurons in your brain that “understand” English, yet you do. Intelligence is an emergent property. If you “zoom in” far enough, everything is just the laws of physics, and those laws don’t understand English or Chinese.

                    • queermunist she/her@lemmy.ml · 2 days ago

                      If we carry the thought experiment forward, the parking lot requires a human to put in energy to make the whole system knowledgeable. For knowledgeability or intelligence to emerge, we still need a human involved in the process, whether it’s a Chinese room, a parking lot covered with shredded dictionaries, or chatbot productivity software.

                      We have not eliminated the human from the process, and until we do, we cannot say that the system is intelligent or knowledgeable.