I saw this on Reddit and thought it was a joke, but it’s not. GPT-5 mini (and maybe sometimes GPT-5) gives this answer. You can check it yourself, or see somebody else’s similar conversation here: https://chatgpt.com/share/689a4f6f-18a4-8013-bc2e-84d11c763a99
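Assuming the shared conversation is the usual “how many times does the letter b appear in blueberry” question (I can’t vouch for the exact prompt), the ground truth takes one line to verify:

```python
# Deterministic letter count -- the question the model gets wrong.
word = "blueberry"
print(word.count("b"))  # -> 2
```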
That’s not my understanding based on this article I read yesterday.
https://minimaxir.com/2025/08/llm-blueberry/
Just because “AI” companies have named contrived systems after words related to intelligence does not magically make them intelligent. “Reasoning models” are just shit algorithms tying multiple word blenders together in an attempt to mimic intelligence. Nothing in there is actually reasoning, no matter what some shit executive decides to call it.
Even in the post you cited, they say their system merely tries to determine whether it should use a different model to answer a question, not that it’s actually producing any level of “reasoning” whatsoever. It’s just picking a different model, ffs.
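To be concrete about what “picking a different model” amounts to, here’s a minimal sketch of a model router. The model names and the heuristic are hypothetical, not anything OpenAI has published; it only illustrates that routing is dispatch, not reasoning:

```python
# Toy model router: inspect the prompt, then dispatch to a model.
# Model names and the keyword heuristic are made up for illustration.

def route(prompt: str) -> str:
    """Pick which model should answer the prompt."""
    hard_markers = ("prove", "step by step", "debug", "derive")
    if any(marker in prompt.lower() for marker in hard_markers):
        return "big-slow-model"
    return "small-fast-model"

print(route("What's the capital of France?"))      # -> small-fast-model
print(route("Prove that sqrt(2) is irrational."))  # -> big-slow-model
```

The router never engages with the content of the question; it branches on surface features and hands the prompt off.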
That’s a blog post, not an article.
Why don’t you give this one a read: https://arstechnica.com/ai/2025/08/researchers-find-llms-are-bad-at-logical-inference-good-at-fluent-nonsense/
Or, even better, why not read the actual study: https://arxiv.org/pdf/2508.01191