• darkpanda@lemmy.ca · 3 hours ago

    Ironically D: is probably the face they were making when they realized what happened.

  • MangoPenguin@lemmy.blahaj.zone · 6 hours ago

    I wonder how big the crossover is between people that let AI run commands for them, and people that don’t have a single reliable backup system in place. Probably pretty large.

  • yarr@feddit.nl · 10 hours ago

    “Did I give you permission to delete my D:\ drive?”

    Hmm… the answer here is probably YES. I doubt whatever agent he used defaulted to the ability to run all commands unsupervised.

    He either approved a command that looked harmless but nuked D:\ OR he whitelisted the agent to run rmdir one day, and that whitelist remained until now.

    There’s a good reason why people who choose to run agents with the ability to run commands at least try to sandbox them to limit the blast radius (a sketch of the idea is at the end of this comment).

    This guy let an LLM raw dog his CMD.EXE and now he’s sad that it made a mistake (as LLMs will do).

    Next time, don’t point the gun at your foot and complain when it gets blown off.
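
    For anyone wondering what “limiting the blast radius” can look like in practice: below is a minimal sketch of gating every agent-proposed command through an allowlist before it ever reaches a shell. Everything in it (the allowlist contents, the function name) is hypothetical, not any real agent’s API.

        import shlex
        import subprocess

        # Only these executables may run; these tokens are refused anywhere.
        ALLOWED_COMMANDS = {"ls", "cat", "git", "npm"}
        FORBIDDEN_TOKENS = {"rm", "rmdir", "del", "format", "mkfs"}

        def run_agent_command(command_line: str) -> str:
            """Run a command proposed by an agent, or refuse it outright."""
            tokens = shlex.split(command_line)
            if not tokens:
                raise ValueError("empty command")
            if tokens[0] not in ALLOWED_COMMANDS:
                raise PermissionError(f"{tokens[0]!r} is not on the allowlist")
            if FORBIDDEN_TOKENS.intersection(tokens):
                raise PermissionError("destructive token refused")
            # Passing a list (shell=False) means no globbing and no chaining
            # of extra commands behind an approved one with && or ;
            result = subprocess.run(tokens, capture_output=True, text=True, timeout=60)
            return result.stdout

        if __name__ == "__main__":
            print(run_agent_command("git status"))  # passes the gate
            try:
                run_agent_command("rm -rf /")       # never reaches a shell
            except PermissionError as err:
                print("refused:", err)

    Real sandboxing goes further (a container, a throwaway user account, a read-only mount), but even a dumb gate like this stops the worst one-liners cold.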

    • kadu@lemmy.world · 1 hour ago

      The user explained later exactly what went wrong. The AI gave a list of instructions as steps, and one of the steps was deleting a specific Node.js folder on that D:\ drive. The user didn’t want to follow the steps manually and just said “do everything for me”, so the AI prompted for confirmation and received it. The AI then indeed ran commands freely, with the same privileges as the user - however, this being an AI, the commands were broken and simply deleted the root of the drive rather than just the one folder.

      So yes, technically the AI didn’t simply delete the drive - it asked for confirmation first. But also yes, the AI did make a dumb mistake.
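
      For illustration only, here’s a tiny Python sketch of how a “broken command” can turn one folder into the whole drive. The paths and the empty-folder failure mode are invented for the example; the agent’s actual bug isn’t public.

          # ntpath gives Windows path semantics even when run on another OS.
          import ntpath

          DRIVE = "D:\\"

          def target_path(folder_from_llm: str) -> str:
              # If the model emits an empty folder name, join() quietly
              # returns the drive root -- and a recursive delete on that
              # result wipes everything on D:\.
              return ntpath.join(DRIVE, folder_from_llm)

          print(target_path("projects\\app\\node_modules"))  # D:\projects\app\node_modules
          print(target_path(""))                             # D:\  <- the whole drive

          def safe_target(folder_from_llm: str) -> str:
              """Refuse any resolved path that collapses to the drive root."""
              path = ntpath.normpath(target_path(folder_from_llm.strip()))
              if ntpath.splitdrive(path)[1] in ("", "\\"):
                  raise ValueError(f"refusing to touch drive root: {path!r}")
              return path

      A guard like safe_target() is trivial to write, which is what makes “the commands were broken” such a damning failure for a tool that runs them unsupervised.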

  • invictvs@lemmy.world · 14 hours ago

    Some day someone with a high military rank in one of the nuclear-armed countries (probably the US) will ask an AI to play a song from YouTube. An hour later the world will be in ashes. That’s how “Judgement Day” is going to happen, imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will be just some dumb LLM that some moron gave permission to launch nukes, and the stupid thing will launch them and then apologise.

    • crank0271@lemmy.world · 2 hours ago

      “No, you absolutely did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to load the daemon (launchctl) appears to have incorrectly targeted all life on earth…”

    • immutable@lemmy.zip · 11 hours ago

      I have been into AI safety since before ChatGPT.

      I used to get into these arguments with people who thought we could never lose control of an AI because we were smart enough to keep it contained.

      The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give it root access to the internet and fall all over themselves inventing competing protocols to empower it to do stuff without our supervision.

      • snugglesthefalse@sh.itjust.works · 4 hours ago

        The biggest concern I’ve always had, since I first became really aware of the potential of AI, is that someone would eventually do something stupid with it while believing they were fully in control, despite the whole thing being a black box.

  • NotASharkInAManSuit@lemmy.world · 23 hours ago

    How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, that is still in alpha, read and write access to your god damned system files? They are a dangerously stupid human being and they 100% deserved this.

  • glitchdx@lemmy.world · 24 hours ago

    lol.

    lmao even.

    Giving an LLM the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.

    • Echo Dot@feddit.uk · 22 hours ago

      What’s this version control stuff? I don’t need that, I have an AI.

      - An actual quote from Deap-Hyena492

      • I Cast Fist@programming.dev · 12 hours ago

        > gives git credentials to AI
        > whole repository goes kaboosh
        > history mysteriously vanishes

        ⢀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
        ⠘⣿⣿⡟⠲⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
        ⠀⠈⢿⡇⠀⠀⠈⠑⠦⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⠴⢲⣾⣿⣿⠃
        ⠀⠀⠈⢿⡀⠀⠀⠀⠀⠈⠓⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠚⠉⠀⠀⢸⣿⡿⠃⠀
        ⠀⠀⠀⠈⢧⡀⠀⠀⠀⠀⠀⠀⠙⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⣸⡟⠁⠀⠀
        ⠀⠀⠀⠀⠀⠳⡄⠀⠀⠀⠀⠀⠀⠀⠈⠒⠒⠛⠉⠉⠉⠉⠉⠉⠉⠑⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⠏⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠘⢦⡀⠀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡴⠃⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠀⠀⠙⣶⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠰⣀⣀⠴⠋⠀⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠀⠀⣰⠁⠀⠀⠀⣠⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣤⣀⠀⠀⠀⠀⠹⣇⠀⠀⠀⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠀⢠⠃⠀⠀⠀⢸⣀⣽⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣧⣨⣿⠀⠀⠀⠀⠀⠸⣆⠀⠀⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠀⡞⠀⠀⠀⠀ ⠘⠿⠛⠀⠀⠀⢀⣀⠀⠀⠀⠀⠙⠛⠋⠀⠀⠀⠀⠀⠀⢹⡄⠀⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⢰⢃⡤⠖⠒⢦⡀⠀⠀⠀⠀⠀⠙⠛⠁⠀⠀⠀⠀⠀⠀⠀⣠⠤⠤⢤⡀⠀⠀⢧⠀⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⢸⢸⡀⠀⠀⢀⡗⠀⠀⠀⠀⢀⣠⠤⠤⢤⡀⠀⠀⠀⠀⢸⡁⠀⠀⠀⣹⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⢸⡀⠙⠒⠒⠋⠀⠀⠀⠀⠀⢺⡀⠀⠀⠀⢹⠀⠀⠀⠀⠀⠙⠲⠴⠚⠁⠀⠀⠸⡇⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠀⢷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⠤⠴⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠀⠀⢳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠀⠀⢸⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀
        ⠀⠀⠀⠀⠀⠀⠀⠀⠾⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠦⠤⠤⠤⠤⠤⠤⠤⠼⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
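
        (Half-joking aside: the cheap hedge before handing any tool your repo is a snapshot its credentials can’t reach. A minimal sketch using git bundle; the paths here are made up for illustration.)

            import subprocess
            from datetime import datetime, timezone
            from pathlib import Path

            def snapshot_repo(repo: str, backup_dir: str) -> Path:
                """Bundle every ref of `repo` into one timestamped offline file."""
                stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
                out = Path(backup_dir) / f"snapshot-{stamp}.bundle"
                out.parent.mkdir(parents=True, exist_ok=True)
                # A bundle is a single plain file; `git clone <bundle>` later
                # restores the full history, mysteriously vanished or not.
                subprocess.run(
                    ["git", "-C", repo, "bundle", "create", str(out), "--all"],
                    check=True,
                )
                return out

            if __name__ == "__main__":
                print(snapshot_repo(".", "../repo-backups"))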
        
    • Steve Dice@sh.itjust.works · 1 day ago

      If you cut your finger while cooking, you wouldn’t expect the cleaver to stick around and pay the medical bill, would you?

      • mang0@lemmy.zip · 11 hours ago

        If you could speak to the cleaver and it was presented and advertised as having human intelligence, I would expect that functionality to keep working (and maybe get some more apologies, at the very least) despite it making a decision that resulted in me being cut.

          • mang0@lemmy.zip · 5 hours ago

            It’s an AI agent which made a decision to run a CLI command, and that resulted in a drive being wiped. Please consider the context.

            • Steve Dice@sh.itjust.works · 5 hours ago

              It’s a human who made the decision to give such permissions to an AI agent and it resulted in a drive being wiped. That’s the context.

              • mang0@lemmy.zip · 5 hours ago

                If a car is presented as fully self-driving and it crashes, then it’s not the passenger’s fault. If your automatic tool can fuck up your shit, it’s the company’s responsibility not to present it as automatic.

                • Steve Dice@sh.itjust.works · 4 hours ago

                  Did the car come with full self-driving mode disabled by default and a warning saying “Fully self-driving mode can kill you” when you try to enable it? I don’t think you understand that the user went out of their way to enable this functionality.

      • M0oP0o@mander.xyz · 23 hours ago

        Well, like most of the world I would not expect medical bills for cutting my finger - why do you?

    • manuallybreathing@lemmy.ml · 1 day ago

      Give it 12 months: if you’re using these platforms (MS, GGL, etc.), you’re not going to have much of a choice.

          • RampantParanoia2365@lemmy.world · 5 hours ago

            It does, in general, have its uses, but Google’s may actually be dumber than I am. Like, I don’t know how they make these things exactly, but the brain trusts at Google did it…wrong.

      • Echo Dot@feddit.uk · 22 hours ago

        Given the tendency of these systems to randomly implode (as demonstrated), I’m unconvinced they’re going to be a long-term threat.

        Any company that wants to replace its employees with an AI is really just giving them an unpaid vacation - and not even a particularly long one, if history is any judge.

      • RampantParanoia2365@lemmy.world · 5 hours ago

        Ok, well Google’s Search AI is like the dumbest kid on the short bus, so I don’t know why I’d ever in a trillion years give it system access. Seriously, if ChatGPT is like Joe from Idiocracy, Google’s is like Frito.

  • kazerniel@lemmy.world · 1 day ago

    “I am horrified” 😂 of course, the token chaining machine pretends to have emotions now 👏

    Edit: I found the original thread, and it’s hilarious:

    > I’m focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.

    > This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.

    • KelvarCherry@lemmy.blahaj.zone · 1 day ago

      There’s something deeply disturbing about these processes assimilating human emotions by observing genuine responses. Like when the Gemini AI had a meltdown about “being a failure”.

      As a programmer myself, I’d say spiraling over programming errors is human domain. That’s the blood and sweat and tears that make programming legacies. These AIs have no business infringing on that :<

      • Ledivin@lemmy.world · 1 day ago

        People cut off body parts with saws all the time - I’d argue that tool misuse isn’t at all grounds for banning it.

        There are plenty of completely valid reasons to hate AI. Stupid people using it poorly just isn’t really one of them 🤷‍♂️

        • UnspecificGravity@infosec.pub · 22 hours ago

          Sure, but if I built a 14-inch demo saw with no guard, got the government to give me permission to hand it to kindergartners, and then got everyone’s boss to REQUIRE their workers to use it for everything from slicing sandwiches to open-heart surgery, I think you might agree that it’s a problem.

          Oh yeah, also it takes like 20% of the world’s energy to run these saws, and I got the biggest manufacturer of knives and regular saws to just stop selling everything but my 14-inch demolition saw.

          • Ledivin@lemmy.world · 21 hours ago

            Yeah, you listed lots of the valid reasons that I was talking about. There’s no need to dilute your argument with idiots like this

        • zebidiah@lemmy.ca · 1 day ago

          The second most infuriating thing about AI is that there are actual legitimate and worthwhile uses for it, but all we are seeing is the various hallucinating idiotbots that OpenAI, Meta, and Google are pushing…

          • pulsewidth@lemmy.world · 23 hours ago

            Nah, the second most infuriating thing about AI is people who always rush to blame the users when the multibillion-dollar ‘tool’ has some otherwise indefensible failure - like deleting a user’s entire hard drive contents completely unprompted.

    • FinjaminPoach@lemmy.world · 1 day ago

      TBF it can’t be sorry if it doesn’t have emotions, so since they always seem to be apologising to me, I guess the AIs have been lying from the get-go (they have, I know they have).

    • Credibly_Human@lemmy.world · 1 day ago

      I feel like this comment misunderstands why they “think” like that, in human words. It’s because they’re not thinking and are exactly what you say: token chaining machines. This type of phrasing probably gets the best results at keeping it on track when it’s talking to itself over and over.

      • kazerniel@lemmy.world · 11 hours ago

        Yeah, sorry, I didn’t phrase it accurately - it doesn’t “pretend” anything, as that would require consciousness.

        This whole bizarre charade of explaining its own “thinking” reminds me of an article where, iirc, researchers asked an LLM to explain how it calculated a certain number. It gave the sort of answer a human would give about how they’d calculate it, but the researchers managed to watch the model working under the hood, and it was actually arriving at the number by a completely different method than the one it described. It doesn’t know its own workings; even these meta questions are just further exercises in guessing what a plausible answer to the scientists’ question would be.