• ameancow@lemmy.world
      3 hours ago

      Just a fun reminder how we make AI.

      We take what is essentially trillions and trillions of “dials” that turn between “this is right/this is wrong” and set them up to compare yuuuuuge sets of data, from pictures to books to vast collections of human chatter and experiences. We feed the data into it along with some big sets of instructions (“this is what a cat looks like, this is not”) and then we feed the whole thing the power equivalent of a small city… FOR A YEAR STRAIGHT. We just let it cook. It grows slowly, flipping all these trillions of dials over and over until it works out all the relationships between all this data. At the end of this period, the machine can talk. We don’t fully understand why.
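      The “dials” are just numeric weights, and “letting it cook” is repeated nudging of those weights to reduce error. Here’s a minimal, hypothetical sketch of that loop in Python, shrunk from trillions of dials to three and from a year of compute to a few thousand steps (the toy data and learning rate are made up for illustration):

      ```python
      import math
      import random

      random.seed(0)

      # Each "dial" is a weight. A real model has trillions; this toy has
      # three (two feature weights plus a bias).
      weights = [random.uniform(-1, 1) for _ in range(3)]

      # Toy labeled data standing in for "this is a cat, this is not":
      # each example is a pair of features plus a 1/0 label.
      data = [
          ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.9], 1),
          ([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.1, 0.1], 0),
      ]

      def predict(x):
          # Weighted sum of inputs (with a constant 1.0 for the bias),
          # squashed to a probability between 0 and 1.
          s = sum(w * xi for w, xi in zip(weights, x + [1.0]))
          return 1 / (1 + math.exp(-s))

      # "Let it cook": repeatedly nudge every dial a little in the
      # direction that shrinks the prediction error (gradient descent).
      for _ in range(2000):
          for x, label in data:
              error = predict(x) - label
              for i, xi in enumerate(x + [1.0]):
                  weights[i] -= 0.5 * error * xi
      ```

      After training, `predict([0.9, 0.9])` lands near 1 and `predict([0.1, 0.2])` near 0 — but nobody wrote an “if cat” rule anywhere; the behavior lives entirely in the settled positions of the dials. That’s the sense in which the result is grown rather than programmed.
      
      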

      We don’t program the shit, we don’t write hard code to make it comply with Asimovian commandments. We just grow it like a tree, and after it’s grown there’s not a lot we can do to change its structure. The tree is vast. So vast are its limbs and branches that nobody can possibly map it out and engineer ways to alter it. We can wrap new things around it, we can alter its desired outcomes and output, but whatever we baked into it will always be there.

      This is why they behave so weirdly, why they will say “I promise to behave” and then drive someone to suicide. This is why whenever Elon tries to make Grok behave in a way that pleases him, it just leads to more problems and unexpected nonsense.

      This is why we need to stop AI from taking over our decision making. This is why we can’t allow police, military and governments to hand over control of life-and-death decision making to these things.

      • wewbull@feddit.uk
        2 hours ago

        The problem I have with your description is that it abdicates responsibility for what eventually gets generated with a big shrug and “we don’t fully understand why”.

        The choice of training data is key to how the final model operates. All sorts of depraved material must be in the training set, otherwise the model wouldn’t be able to generate the text it does (even if it’s being coached).

        It’s clear the “AI race” is all about who gets the power of owning, and therefore influencing, everybody’s information stream. If they couldn’t influence it, there wouldn’t be such a race.

        • ameancow@lemmy.world
          2 hours ago

          The problem I have with your description is that it abdicates responsibility for what eventually gets generated with a big shrug and “we don’t fully understand why”.

          I’m not sure how it does that. I said that the instructions during training dictate what kind of AI it will be, and that wrapping new instructions around it afterward has profound and unpredictable effects, which I tried to describe.

          Nothing I said implies there’s no human involvement in the creation of an AI. My point was just a lot broader: these things are made by people using vast resources, with unpredictable results, and people are trying to make them power everything.

          A racist chat LLM is bad. A generalized AI with access to the power grid, defense systems and drone targeting systems which is built on a model that Elon Musk has made or fucked around with is much, MUCH worse.