https://30p87.de/shitskibidi.mp4, if you don’t want to access ShitTok
I’m posting this because it’s a great example of how LLMs do not actually have their own thoughts, or any awareness of what is happening in a conversation.
It also shows how completely useless it is to have a “conversation” with someone who is in agreeability mode the whole time (a.k.a. “maximizing engagement mode”) – offering up none of their own thoughts, just continually prompting you to keep talking. Honestly, some people act that way too, and other people crave a conversation partner who acts that way, because it makes them the center of attention. It makes you feel interesting when the other person endlessly says, “You’re right! Go on.”
LLMs work best when used as a mirror to help you reflect on your own thoughts, since that’s the only thinking going into the process. That isn’t how most people are using them.
This is an effect that has been known for almost 60 years now: https://en.wikipedia.org/wiki/ELIZA_effect
The only winning move is not to play.
Pretty sure anyone who has ever used ChatGPT could have foreseen that.
Unfortunately, I’m not sure about that. Plenty of people who use ChatGPT end up thinking that it is sentient and has its own thoughts, but that’s because they don’t realize how much they are having to drive the conversation.
That was painful to listen to. It would’ve been more interesting if he had just given them a fucking prompt and let them spiral.
Maybe it would have been more interesting for a reply or two, but it would have quickly fallen right back into the same spiral.
No, I think it would’ve been very capable of bringing up semi-related facts, and that would have prompted a different response.
Who needed those trees anyways
So ChatGPT is the opposite of that depressed guy I used to date.