AI models are annoyingly affirming even for the most benign questions. I can be like, "What shape is a stop sign?" and it'll reply with something like "Way to think on your toes, and you are so right for asking about that!"
I feel like it started doing this not too long ago. The first couple times it worked on me, and I was kinda proud I'd asked a clever question. But eventually I noticed it does it no matter what I ask, and I felt so foolish.
Gotta hit LLMs with utterly unbiased questions, and that’s hard for most. I get pretty solid results, but still, gotta look into the reply, not take it at face value. And the further you pursue a certain tack, the less valuable the output.
I want them to stop yapping fake platitudes and just give me the answer straight with no conversational fluff.
This has probably been said a lot, but I think it's partly that techbros want to make HAL, Jarvis, or any other fictional AI with personality.
give me a model that responds “it’s an octagon dipshit, sesame street taught you this”
Give me a model that eats other models and then dies.
I know of people who (proudly) post screenshots of GPT calling them insightful, as if the matrix multiplier hadn't already told everyone that.