I think AI is neat.

    • Poik@pawb.social · 10 months ago

      … Alexa literally is AI? You mean to say that Alexa isn’t AGI. AI means taking inputs and outputting something rational. The first AIs were just large if-else constructions built on first-order logic. Later AI used approximate or brute-force state calculations, such as probabilistic trees or minimax search. AI controls how people’s lines are drawn in popular art programs such as Clip Studio when they use the helper functions. But none of these AIs could tell me anything new, only what they were designed to compute.
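
      For instance, the minimax search mentioned above fits in a dozen lines. A minimal sketch, assuming hypothetical game callbacks `moves`, `apply_move`, and `evaluate` (none of these are from any real engine):

      ```python
      # Minimal minimax sketch: explore the game tree `depth` plies deep and
      # back up the best score, assuming the opponent plays to minimize it.
      def minimax(state, depth, maximizing, moves, apply_move, evaluate):
          legal = moves(state)
          if depth == 0 or not legal:
              return evaluate(state)  # static evaluation at the search horizon
          children = (minimax(apply_move(state, m), depth - 1, not maximizing,
                              moves, apply_move, evaluate) for m in legal)
          return max(children) if maximizing else min(children)
      ```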

      The term AI is a lot more broad than you think.

      • BellyPurpledGerbil@sh.itjust.works · 10 months ago

        The term AI, as used by corporations, isn’t some protected and explicit categorization. Any software company alive today selling what they call AI isn’t being honest about it. It’s a marketing gimmick. The same shit we fall for all the time. “Grass-fed” meat products aren’t actually 100% grass-fed at all. “Healthy: Fat Free!” foods just replace the fat with sugar and/or corn syrup. Women’s dress sizes are universally inconsistent across all clothing brands in existence.

        If you trust a corporation to tell you that their product is exactly what they market it as, you’re just being gullible. It’s forgivable. But calling something AI when it’s clearly not, as if the term were so broad it could apply to any old if-else chain of logic, is proof that their marketing worked exactly as intended.

        • Poik@pawb.social · 10 months ago

          The term AI is older than the idea of machine learning. AI is a rectangle, machine learning is a square within it, and deep learning is a unit square.

          Please, don’t muddy the waters. That’s what caused the first AI winter. But do go after the liars. I’m all for that.

      • Holzkohlen@feddit.de · 10 months ago

        The term AI is a lot more broad than you think.

        That is precisely what I dislike. It’s kinda like calling those crappy scooter thingies “hoverboards”. It’s just a marketing term. I simply oppose using “AI” for the weak kinds of AI we have right now, and I’d prefer “AI” to refer only to strong AI. Though that is of course not within my power to force upon people, and most people seem not to care one bit, so eh 🤷🏼‍♂️

        • Poik@pawb.social · 10 months ago

          The term AI is older than the idea of machine learning. AI is a rectangle, machine learning is a square within it, and deep learning is a unit square.

          Please, don’t muddy the waters. That’s what caused the first AI winter. But do go after the liars. I’m all for that.

        • QuaternionsRock@lemmy.world · 10 months ago

          I still don’t follow your logic. You say that GPT has no ability to problem-solve, yet it clearly has the ability to solve problems? Of course it isn’t infallible, but neither is anything else with the ability to solve problems. Can you explain what you mean here in a little more detail?

          One of the most difficult problems that AI attempts to solve in the Alexa pipeline is, “What is the desired intent of the received command?” To give an example of the purpose of this question, as well as how Alexa may fail to answer it correctly: I have a smart bulb in a fixture, and I gave it a human name. When I say, “Alexa, make Mr. Smith white,” one of two things will happen, depending on the current context (probably including previous commands, tone, etc.):

          1. It will change the color of the smart bulb to white
          2. It will refuse to answer, assuming that I’m asking it to make a person named Mr. Smith… white.

          It’s an amusing situation, but also a necessary one: there will always be contexts in which a fixed choice of one response over the other would be incorrect.
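
          A toy sketch of that disambiguation step, purely illustrative (the device registry, the parsing, and all names below are invented; this is not Alexa’s actual pipeline):

          ```python
          # Hypothetical intent resolution: "make X white" is a color command
          # if X matches a registered device, otherwise a request to refuse.
          KNOWN_DEVICES = {"mr. smith": "smart_bulb_1"}  # invented registry

          def resolve(utterance: str) -> str:
              text = utterance.lower().rstrip(".!")
              if "make" in text and text.endswith("white"):
                  target = text.split("make", 1)[1].rsplit("white", 1)[0].strip(" ,")
                  if target in KNOWN_DEVICES:
                      return f"set_color({KNOWN_DEVICES[target]!r}, 'white')"
                  return "refuse()"  # reads as asking to change a person's color
              return "fallback()"

          print(resolve("Alexa, make Mr. Smith white"))
          # -> set_color('smart_bulb_1', 'white'), since "Mr. Smith" is a bulb
          ```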

          • ☭ SaltyIceteaMaker ☭@iusearchlinux.fyi · 10 months ago

            See, that’s hard to define. What I mean is things like reasoning and understanding. Let’s take your example as an… example. Obviously you can’t turn a person white, so they probably mean the LED. Now, you could ask if they meant the LED, but it’s not critical, so let’s just do it, and the person will complain if it’s wrong (sketched below). Thing is, yes, you can train an AI to act like this, but in the end it doesn’t understand what it’s doing, only (maybe) whether it did it right or wrong. Like, ChatGPT doesn’t understand what it’s saying. It cannot grasp concepts; it can only try to emulate understanding, although it doesn’t know how, or even what understanding is. In the end it’s just a question of the complexity of the algorithm (because we are just algorithms too), and I wouldn’t consider current “AI” to be complex enough to be called intelligent.

            (Sorry if this is a bit on the low-quality side in terms of readability and grammar; it was hastily written under a bit of time pressure.)
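
            The act-or-ask trade-off described above can be written as a simple cost check. Everything here (thresholds, names) is invented for illustration:

            ```python
            # Toy heuristic: act on the best guess when a mistake is cheap to
            # undo; ask for confirmation when it isn't. All values invented.
            def decide(confidence: float, reversible: bool) -> str:
                if confidence > 0.9:
                    return "act"   # confident enough either way
                if reversible and confidence > 0.5:
                    return "act"   # wrong guess is easy to fix, so just do it
                return "ask"       # uncertain and hard to undo: confirm first

            print(decide(0.7, reversible=True))   # act  (it's only a light bulb)
            print(decide(0.7, reversible=False))  # ask
            ```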

            • QuaternionsRock@lemmy.world · 10 months ago

              Obviously you can’t turn a person white, so they probably mean the LED.

              This is true, but it still has to distinguish between facetious remarks and genuine commands. If you say, “Alexa, go fuck yourself,” it needs to be able to discern that it should not attempt to act on the input.

              Intelligence is a spectrum, not a binary classification. It is roughly proportional to the complexity of the task and the accuracy with which the solution completes it. It is difficult to quantify these metrics with respect to the task of useful language generation, but at the very least we can say that the complexity is remarkable. It also feels prudent to point out that humans do not know why they do what they do unless they consciously decide to record their decision-making process and act according to the result. In other words, when given the prompt “solve x² − 1 = 0 for x”, I can instinctively answer “x = {+1, −1}”, but I cannot tell you why I answered this way, as I did not use the quadratic formula in my head. Any attempt to explain my decision process later would be no more than an educated guess, susceptible to the same false justifications and hallucinations that GPT experiences. I haven’t watched it yet, but I think this video may explain what I mean.
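
              Spelled out, the deliberate route that the instinctive answer skips would be:

              x² − 1 = 0  ⇒  (x − 1)(x + 1) = 0  ⇒  x = 1 or x = −1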

              Edit: this is the video I was thinking of, from CGP Grey.

              • ☭ SaltyIceteaMaker ☭@iusearchlinux.fyi · 10 months ago

                Hmm, it seems like we have different perspectives. For example, I cannot do something I don’t understand, meaning if I do a calculation in my head, I can tell you exactly how I got there, because I have to think through every step of the process. This starts with something as simple as 9 + 3, where I have to actively think about the calculation. It goes like this in my head: 9 + 3… take 1 from the 3, add it to the 9 = 10 + 2 = 12. This also applies to more complex things, which on the one hand means I am regularly slower than my peers, but I understand more than they do.
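
                That make-ten strategy is mechanical enough to write down directly; a throwaway sketch (the function name is invented):

                ```python
                # "Make ten": move just enough from b to round a up to 10.
                def add_make_ten(a: int, b: int) -> int:
                    need = 10 - a           # 9 + 3: take 1 from the 3...
                    return 10 + (b - need)  # ...leaving 10 + 2 = 12

                print(add_make_ten(9, 3))  # 12
                ```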

                So I think that because of our different ways of thinking, we each lack a critical part of understanding the other’s viewpoint.

                Anyhow, back to AI.

                Intelligence is a spectrum, not a binary classification

                Yeah, that’s the problem: where does the spectrum start? Like, I wouldn’t call a virus, a bacterium, or a single cell intelligent, yet somehow a bunch of them are arguing about what intelligence is. I think this is just a case of how you define intelligence, which varies from person to person. Also, I agree that LLMs are unfathomably complex. However, I wouldn’t classify them as intelligent yet. In any case, it was an interesting and fun conversation to have, but I will end it here and go to sleep. Thanks for having an actual formal disagreement and not just immediately going for insults. Have a great day/night.

                • Poik@pawb.social · 10 months ago

                  And I wouldn’t call a human intelligent if TV were anything to go by. Unfortunately, humans do things they don’t understand constantly and confidently. It’s commonplace, and you could call it “fake it till you make it,” but a lot of the time it’s more that people think they understand something.

                  LLMs act confident that their output will satisfy their fitness function, but they do not have the ability to see farther than that at this time. Just sounds like politics to me.

                  I’m being a touch facetious, of course, but the idea that the line has to be drawn at that term, intelligence, is a bit too narrow for me. I prefer the terms Artificial Narrow Intelligence and Artificial General Intelligence, as they are better defined. Narrow refers to being designed for one task and one task only, such as LLMs, which are designed to minimize a loss function of people accepting the output as “acceptable” language, a highly volatile target. AGI, or strong AI, is AI that can generalize outside of its targeted fitness function, and do so continuously. I don’t mean a computer-vision neural network that can classify anomalies as something the car should stop for. That’s out-of-distribution reasoning, sure, but if it can reasonably determine what is in bounds as part of its loss function, then anything that falls significantly outside can easily be flagged. That’s not true generalization, more domain recognition, but it is important in a lot of safety-critical applications.
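
                  That flagging idea reduces to a threshold test. A bare-bones sketch, with the softmax-confidence score and the cutoff value as stand-ins rather than any particular system’s method:

                  ```python
                  import numpy as np

                  # Flag inputs the classifier is not confident about: a low peak
                  # softmax probability suggests the input falls outside the
                  # training distribution. The 0.8 cutoff is invented.
                  def flag_out_of_distribution(logits: np.ndarray, cutoff: float = 0.8) -> bool:
                      probs = np.exp(logits - logits.max())
                      probs /= probs.sum()         # softmax over class scores
                      return probs.max() < cutoff  # low peak confidence => flag

                  print(flag_out_of_distribution(np.array([4.0, 0.1, 0.2])))  # False
                  print(flag_out_of_distribution(np.array([0.9, 1.0, 1.1])))  # True
                  ```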

                  This is an important conversation to have, though. The way we use language is highly personal, based on our experiences, and that makes coming to an understanding in natural languages hard. Constructed languages aren’t the answer, because any language in use undergoes change. If the term AI is to change, people will have to understand that the scientific term will not, and pop-sci magazines WILL get harder to understand. That’s why I propose splitting the ideas in a way that allows more nuanced discussion, instead of redefining terms that appear in thousands of groundbreaking research papers spanning a century, which would make reading that research a matter of historical linguistics as well as mathematical understanding. Jargon is already hard enough as it is.

    • Exocrinous@lemm.ee · 10 months ago

      Alexa is AI. She’s artificially intelligent. More so than an ant or a pigeon, and I’d call those animals pretty smart.

    • A_Very_Big_Fan@lemmy.world · 10 months ago

      Nobody is claiming there is problem solving in LLMs, and you don’t need problem-solving skills to be artificially intelligent. The same way a knife doesn’t have to be a Swiss Army knife to be called a “knife.”

      • Cringe2793@lemmy.world · 10 months ago

        I mean, people generally don’t have problem-solving skills, yet we call them “intelligent” and “sentient”, so…

        • A_Very_Big_Fan@lemmy.world · 10 months ago

          There’s a lot more to intelligence and sentience than just problem solving. One of them is recalling data and effectively communicating it.

        • A_Very_Big_Fan@lemmy.world · 10 months ago

          I just realized I interpreted your comment backwards the first time, lol. When I wrote that, I had “people don’t have issues with problem solving” in my head.