I think if it only takes a matter of weeks to go into full psychosis from conversation alone, they’re probably already on shaky ground, mentally. Late onset schizophrenia is definitely a thing.
People are often overly confident about their imperviousness to mental illness. In fact I think that, given the right cues, we're all more vulnerable to mental illness than we'd like to think.
Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of “self-experimentation” that exposes us to psychological risks we aren’t even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.
I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, "life coaches", fortune tellers and the like. However, these perpetrators were real people who could only handle a limited number of victims at any given time. They also had their own specific methods and strategies, which wouldn't work on everybody, not even on all of the people who might have been most susceptible. ChatGPT, on the other hand, can do this at scale. It was also likely trained on the websites and public utterances of every scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler etc. available, which allows it to generate whatever response works best to keep people "hooked". In my view, this alone is cause for concern.