I saw this on reddit and thought it was a joke, but it's not. GPT 5 mini (and maybe sometimes 5) actually gives this answer. You can check it yourself, or see somebody else's similar conversation here: https://chatgpt.com/share/689a4f6f-18a4-8013-bc2e-84d11c763a99
It’s doing precisely what it’s intended to do: telling you what it thinks you want to hear.
Bingo.
LLMs are increasingly exhibiting a sycophancy bias, though that only kicks in here if you give them something to go on in the chat history.
It makes benchmarks look better. They're all gamed now anyway, but they're kinda all we have to go on.