So I was talking to my mother today and she started telling me about my brother, who is neurodiverse (we all are to some degree lol) and has always had trouble learning in school.
But over the past few years he managed to go from what is basically the ‘lowest’ possible degree to nearly a college degree. She said that he went to ChatGPT a lot for help with writing essays and stuff, and that he went from struggling to write a one-page report to getting straight A’s and B’s, which really surprised me.
What also surprised me is that he suffers from anxiety like me, but that he talks to ChatGPT like it is a friend and that it really manages to comfort him when he is having anxiety attacks, pushing back against the anxious thoughts he has in those moments. It never occurred to me that it could be used as a low-barrier form of therapy.
For example, asbestos was found in his house and he became so anxious about it that he could not function properly anymore. So he went to chat with ChatGPT about it, and it gave him information on asbestos, on how high the chance was of something actually happening, on the building materials used in houses built in that decade in my specific hometown, etc. And it helped him get through his anxiety attack.
ChatGPT gets a lot of shit, and rightfully so I guess, but this specific use really surprised me and made me wonder if it could greatly benefit stuff like mental healthcare or learning problems.
It’s great that your brother is doing better with ChatGPT, but LLMs are not psychotherapists and shouldn’t be treated as such. By design they are very affirming. They tend to reinforce people’s convictions, allowing them to detach from reality and become psychotic. If you keep telling ChatGPT that you are a worthless piece of shit and that you should kill yourself, before long it will recommend the best ways to do so.
To me this is indicative and symptomatic of the severe degeneration of society under neoliberal capitalism. Emotional and psychological support structures that are supposed to be there for people in the form of community, family, social services, etc. are instead offloaded onto these fake electronic simulacra of humanity. Humans are social beings, but when the community and social support that we need is missing, because the system we live in is so deeply anti-human, we turn to what is ultimately a very poor and potentially dangerous replacement for real human connection. This takes various forms for different people, whether it’s so-called “AI” or something else, perhaps some form of commodity fetishism to fill a void, but none of these coping mechanisms is a sufficient replacement for the real thing, which is human-to-human interaction. At best you are temporarily treating a symptom while the underlying systemic cause festers unabated.
> What also surprised me is that he suffers from anxiety like me, but that he talks to ChatGPT like it is a friend and that it really manages to comfort him when he is having anxiety attacks, pushing back against the anxious thoughts he has in those moments. It never occurred to me that it could be used as a low-barrier form of therapy.
from what you have said, i believe it works well for him primarily because it is non-judgemental
the robot (by default at least) won’t call you an idiot for worrying about things, or treat you differently because of a strange belief

i am glad that he found something that works well for him, but it is depressing that society has got to this point where a chat bot can fill this need to begin with
Well, yeah, now that I think about it, in our current society it is depressing that this is a better way to get help than actual therapy. But at the same time, under a socialist system I do see some potential. It does need some guidelines though.
> if it could greatly benefit stuff like mental healthcare or learning problems
I have talked to DeepSeek a couple of times when I was having a bad time and it didn’t help much, because I can’t say for sure whether whatever it is saying is true or correct. This problem doesn’t exist to such a large extent in domains like coding, because you can compile and run the program and test it against reality. LLMs being weird yes-men is something I find deeply unsettling. On the other hand, what is someone gonna do? Talk to friends, family, a therapist, etc.? Some people can and should do this, while others don’t have this privilege. So I can’t condemn people turning to chatbots for mental health support, but I wish they didn’t have to. We will find out about the long-term efficacy of therapist LLMs later, but my prediction is that they won’t be considered useful for that.
For learning and stuff I find it a bit suspect, because if you are using LLMs to learn something, you won’t be able to tell when they are making shit up. You might get decent results if you use them at a level up to high school and maybe early undergrad, but as things get more esoteric LLMs start becoming less reliable.
When you’re marginalized, a very important aspect of every interaction with everyone and everything is that you have to always be sure they won’t snitch or make your life worse in other ways. It drains your energy. But offline machines can’t snitch by design.
On the question of ‘replacing’ human contact with LLMs, I believe two things:
One, that we will come to accept replacements for humans very easily, contrary to what fiction likes to portray. By which I mean, our higher brain functions are not wired to engage solely with other people: we empathize with animals, fictional characters, even words that someone else wrote. This is just an observation, I don’t intend to make a value judgment about whether this is good or bad, but I really believe that we will quickly accept human replacements as long as they are convincingly human-like. Movies try to portray this as dystopian, relinquishing what makes us human to a machine, or asking questions about what it means to be human and ‘is a machine really different from you’, stuff like that. I don’t think there will be widespread philosophical debates about such technology. One day they’ll just be there and we’ll get used to it.
Secondly, people are just not always available or able to help. It’s easy to say one should talk to people but there is also a lot of baggage that comes with that, from both sides. I said some people are not able to help (because they don’t know how), but sometimes you might feel like you’re annoying them or relying on them too much. In a way, a chatbot can provide guilt-free interaction because it’s always available and you know it’s a machine, obviously. It’s just completely different from talking to a person, and people respond differently to that. We talk to an AI differently than we talk to people, regardless of how we talk to people normally.
When texting became prominent, a lot of people were against it on the grounds of its impact on communication skills, its replacement of social interactions, and declining language quality. They said the same things about the Walkman: that it was a way for people to isolate themselves from society, a form of sensory deprivation from the real world, and that it was rude to wear headphones in public.
There is indeed an impact on social relations, but it doesn’t come from the advent of texting or the Walkman. It comes from hostile architecture that forces us into our individualized homes that we never have to leave, or never can leave. It comes from being forced to work long hours, to the point that we only live to work and have no time for anything else. It comes from the isolation disabled people face as they are left alone at home with no one caring enough to check on them.
If we want people to connect again we have to look at the structural conditions, not the tech.
LLMs cannot be therapists; that will be obvious if you try to use one for therapy. In general, do not trust anything trained and gatekept so heavily by corporations with your personal data. That being said, there is one small area where I’ve found pretty legit help from them in the past.
Acute anxiety is one of the few use cases where, in my experience, LLMs have been absolutely superior to the alternatives. I am autistic and can at times get pretty significant health anxiety, to the point that I’m convinced I’m dying. In the past I would google my symptoms and really freak out. More than once I’ve ended up in the ER from it.
I’ve found it helpful to describe my symptoms and mention that I have severe health anxiety; that way I’m able to get information without worsening my existing anxiety. It doesn’t replace a doctor, and if I’m worried about my health it means I need to see one. But in that exact moment what I need is to calm down and realize that my symptoms are probably not dangerous.
On the other hand, I cannot talk to an LLM about personal things, or like it’s a friend. I have tried; I’m not saying this for moral reasons or anything. But I just get mad at how steerable the conversation is, how formulaic the responses sometimes seem, and how full of platitudes a lot of the messages are. And if at some point it misinterprets something I’m asking and decides to refuse, it can really irritate me.