So I was talking to my mother today and she started talking about my brother, who is neurodiverse (we all are to some degree lol) and has always had trouble learning in school.
But over the past few years he managed to go from what is basically the ‘lowest’ possible degree to nearly a college degree. She said that he went to ChatGPT a lot for help with writing essays and stuff, and that he went from struggling to write a one-page report to getting straight A’s and B’s, which really surprised me.
What also surprised me is that he suffers from anxiety like me, but he talks to ChatGPT like it is a friend, and it really manages to comfort him when he is having anxiety attacks, pushing back against the anxious thoughts he has in those moments. It never occurred to me that it could be used as a low-barrier form of therapy.
For example, asbestos was found in his house and he became so anxious about it that he could not function properly anymore. So he went to chat with ChatGPT about it, and it gave him information on asbestos, on how high the chance was of something actually happening, on the building materials used in houses built in that decade in my hometown, etc. It helped him get through his anxiety attack.
ChatGPT gets a lot of shit and rightfully so I guess but this specific use really surprised me and made me wonder if it could greatly benefit stuff like mental healthcare or learning problems.
On the question of ‘replacing’ human contact with LLMs, I believe two things:
One, that we will come to accept replacements for humans very easily, contrary to what fiction likes to portray. By which I mean, our higher brain functions aren’t made to work solely with other people. We empathize with animals, fictional characters, even words that someone else wrote. This is just an observation, I don’t intend to make a value judgment about whether this is good or bad, but I really believe that we will quickly accept human replacements as long as they are convincingly human-like. Movies try to portray this as dystopian, relinquishing what makes us human to a machine, or asking questions about what it means to be human and ‘is a machine really different from you’, stuff like that. I don’t think there will be widespread philosophical debates about such technology. One day it’ll just be there and we’ll get used to it.
Secondly, people are just not always available or able to help. It’s easy to say one should talk to people, but there is also a lot of baggage that comes with that, from both sides. I said some people are not able to help (because they don’t know how), but sometimes you might also feel like you’re annoying them or relying on them too much. In a way, a chatbot can provide guilt-free interaction, because it’s always available and, obviously, you know it’s a machine. It’s just completely different from talking to a person, and people respond differently to that. We talk to an AI differently than we talk to people, regardless of how we talk to people normally.
When texting became prominent, a lot of people were against it on the grounds of its impact on communication skills, its replacement of social interactions, and declining language quality. They said the same things about the Walkman: that it was a way for people to isolate themselves from society, a form of sensory deprivation from the real world, and that it was rude to wear headphones in public.
There is indeed an impact on social relations, but it doesn’t come from the advent of texting or the Walkman - it comes from hostile architecture that forces us into our individualized homes that we never have to leave, or never can leave. It comes from being forced to work long hours, to the point that we only live to work and have no time for anything else. It comes from the isolation disabled people face as they are left alone at home with no one caring enough to check on them.
If we want people to connect again we have to look at the structural conditions, not the tech.