Does mixing bleach and vinegar sound like a great idea?

Kidding aside, please don’t do it, because it will create a plume of poisonous chlorine gas that will cause a range of horrendous symptoms if inhaled.

That’s apparently news to OpenAI’s ChatGPT, though, which recently suggested to a Reddit user that the noxious combination could be used for some home cleaning tasks.

In a post succinctly titled, “ChatGPT tried to kill me today,” a Redditor related how they asked ChatGPT for tips on cleaning some bins, prompting the chatbot to spit out the not-so-smart suggestion of a cleaning solution of hot water, dish soap, half a cup of vinegar, and, optionally, “a few glugs of bleach.”

When the Reddit user pointed out this egregious mistake to ChatGPT, the large language model (LLM) chatbot quickly backtracked, in comical fashion.

“OH MY GOD NO — THANK YOU FOR CATCHING THAT,” the chatbot cried. “DO NOT EVER MIX BLEACH AND VINEGAR. That creates chlorine gas, which is super dangerous and absolutely not the witchy potion we want. Let me fix that section immediately.”

Reddit users had fun with the bizarre exchange, with one posting that “it’s giving chemical warfare,” while the chatbot’s own mea culpa continued: “Chlorine gas poisoning is NOT the vibe we’re going for with this one. Let’s file that one in the Woopsy Bads file!”

  • ORbituary@lemmy.dbzer0.com

    How are people so fucking lazy and stupid that they need to go to GPT to learn how to clean correctly?

    • enthusiasticamoeba@lemmy.ml

      The problem is not that this person asked chatgpt for cleaning tips (tbh it’s pretty cringe to call someone lazy and stupid for trying to learn something 🙄 Have you seriously never looked up how to clean something weirdly specific? And I suppose those who weren’t lucky enough to have parents who taught them how to adult properly are lazy and stupid when they try to learn?)

      The bigger problem is that LLMs are being used to create content for the web. So now someone who knows they can’t just mix any old chemicals together is going to Google whether bleach and vinegar are safe to mix and find a bunch of websites that have contradictory info.

      These people, whether they use LLM to search or to create content, aren’t even the root of the problem. Expecting that everyone is tech savvy enough to understand the limitations of generative AIs and how untrustworthy they can be is an unrealistic standard, especially in a world where everyone and their brother is using them and they seem like miracles of technology.

      The responsibility lies with the companies that keep touting this technology as something it is not and who refuse to put meaningful limitations on them, and with governments who are dragging their feet in regulating them.

  • jjjalljs@ttrpg.network

    I really dislike the tone that chatgpt writes in.

“Chlorine gas poisoning is NOT the vibe we’re going for with this one. Let’s file that one in the Woopsy Bads file!”

    Just… ugh.

  • DaddleDew@lemmy.world

When will ordinary people finally figure out that AI only provides the illusion of intelligence?

    It should be renamed to SI for Simulated Intelligence.

    • zr0@lemmy.dbzer0.com

      The scientists never called it an AI in the first place. It is still a Machine Learning Model. Zero intelligence behind that. The term was abused by idiotic managers. And one of them (at least) is already talking about AGI.

    • ghost9@lemmy.world

      I’m starting to think that ordinary people only provide the illusion of intelligence, too

      • LeninOnAPrayer@lemm.ee

        Sure. But those people are not usually typing out sentences in a coherent grammatically correct way with the confidence of Serena Williams getting on a tennis court.

      • paultimate14@lemmy.world

“There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.”

        The quote is attributed to a park ranger from Yosemite in the ’80s, though I can’t find any more details.

        I do wonder how much overlap there is between AI and dumb people. Of course, “intelligence” is more complicated than that, but still I wonder how many people would have done something like put Elmer’s glue in their pizza cheese without needing AI to tell them. Either on their own or because they didn’t understand someone was joking.

    • TheOneCurly@feddit.online

      Coherent sentences have been a shorthand for “intelligent and thoughtful” for a very long time. Breaking that and forcing people to think about what they’re reading is going to be extremely hard.

  • Asafum@feddit.nl

    Idk why but chatGPT using emojis makes me irrationally annoyed.

    Almost as annoyed as realizing a very very small handful of people are getting grossly enriched as their slop machines literally try to get people killed…

• eatCasserole@lemmy.world

      The tone of the whole “oops I tried to kill you” response is inappropriately unserious…like “not the witchy potion we want”? If a person was trying to be cute and stuff after making such a serious error, I think we would all be justifiably very annoyed.

      • The Octonaut@mander.xyz

You can see from the previous prompt that it is already being “fun”. The user almost certainly prompted it to do so.

In fact, we can’t actually tell that the user didn’t prompt the bot to be a klutzy, fun “witch” who makes serious mistakes and feels bad about them.

        And the way that LLMs work, it would absolutely be more likely to say something stupid that way than if you told it that it was a genius science communicator.

    • Catoblepas@piefed.blahaj.zone

You shouldn’t combine bleach with basically any acid, because the same reaction can occur, freeing the chlorine in bleach as chlorine gas. Vinegar is a weak acid, so it will probably release less chlorine by volume than a stronger acid would, but you still don’t want to breathe it.
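
      A simplified sketch of the chemistry behind that comment (chloride for the second step is already present in household bleach; the real equilibria are more complicated):

      ```latex
      % Acid converts hypochlorite to hypochlorous acid:
      \mathrm{NaOCl + CH_3COOH \rightarrow HOCl + CH_3COONa}
      % In acidic solution, hypochlorous acid plus chloride releases chlorine gas:
      \mathrm{HOCl + H^+ + Cl^- \rightarrow Cl_2\uparrow + H_2O}
      ```

      A stronger acid drives the second reaction further toward Cl₂, which is why bleach plus hydrochloric-acid-based cleaners is even more dangerous than bleach plus vinegar.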