• Otter@lemmy.caM · 6 days ago

    That is true. From the article:

    A century ago, somewhere around 8–10 percent of all psychiatric admissions in the US were caused by bromism. That’s because, then as now, people wanted sedatives to calm their anxieties, to blot out a cruel world, or simply to get a good night’s sleep. Bromine-containing salts—things like potassium bromide—were once drugs of choice for this sort of thing.

    Unfortunately, bromide can easily build up in the human body, where too much of it impairs nerve function. This causes a wide variety of problems, including grotesque skin rashes (warning: the link is exactly what it sounds like) and significant mental problems, which are all grouped under the name of “bromism.”

    Bromide sedatives vanished from the US market by 1989, after the Food and Drug Administration banned them, and “bromism” as a syndrome is today unfamiliar to many Americans.

    The problem right now is:

    • AI hallucinations that people may believe because they treat AI as a one-stop shop for credible information
    • People using AI to confirm their own beliefs

    Both things were possible before, if you found someone to feed you bad info, but the scale and ease of access are different now.

    It was during his stay, once doctors had his psychosis under control, that the man began telling them how it all began. He had read about the problems with too much table salt, which led him to rid his diet of sodium chloride, which led him to ChatGPT, which led him to believe that he could use sodium bromide instead.

    It’s not clear that the man was actually told by the chatbot to do what he did. Bromide salts can be substituted for table salt—just not in the human body. They are used in various cleaning products and pool treatments, however.

    In my opinion, part of the solution is to share stories like this to slowly educate people on what not to do with generative AI.

    On your last point, the article had something relevant at the end:

    The story seems like a perfect cautionary tale for the modern age, where we are drowning in information—but where we often lack the economic resources, the information-vetting skills, the domain-specific knowledge, or the trust in others that would help us make the best use of it.

    • Hegar@fedia.io · 6 days ago

      AI hallucinations that people may believe because they treat AI as a one-stop shop for credible information

      I would phrase it slightly differently: because AI is sold as a one-stop shop for credible information.

      No ads show people cross-referencing non-AI sources or stopping to check what their friends think before acting on the information.

      A certain percentage of people will always delude themselves into something harmful, but that doesn’t absolve AI companies of responsibility for making that number larger.