Per my understanding there are no “thinking logs”; the “thinking” is just part of the processing, not the kind of thing that would be logged, just like how the neural network’s internal operations are not logged.
I’m no expert though so if you know this to be wrong tell me
“Thinking” is a trained, structured part of the text response. It’s no different than the response itself: more continued text, hence you can get non-thinking models to do it.
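To illustrate, here’s a minimal sketch (assuming DeepSeek-R1-style `<think>` delimiters; the exact tokens vary by model): the raw completion is one continuous text stream, and the “thinking” is just a trained span the client splits off.

```python
import re

# One continuous completion: a delimited thinking span, then the answer.
raw = (
    "<think>User wants a one-liner factorial. Handle n=0 too.</think>"
    "def factorial(n): return 1 if n <= 1 else n * factorial(n - 1)"
)

def split_thinking(completion: str) -> tuple[str, str]:
    """Separate the thinking span from the visible answer."""
    m = re.match(r"<think>(.*?)</think>(.*)", completion, re.DOTALL)
    if m:
        return m.group(1).strip(), m.group(2).strip()
    return "", completion.strip()  # non-thinking output: no tags at all

thinking, answer = split_thinking(raw)
print(thinking)  # the part providers choose to hide, summarize, or show
print(answer)
```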
It’s a training pattern, not an architectural innovation. Some training schemes, like GRPO, are interesting…
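For the curious, GRPO’s core trick, roughly sketched from the DeepSeekMath paper (not anyone’s production code): sample a group of completions per prompt and score each one relative to its siblings, so no learned critic model is needed.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # Group-relative advantage: A_i = (r_i - mean(r)) / std(r),
    # computed over the completions sampled for the same prompt.
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# e.g. 4 sampled completions of one math prompt, graded right/wrong
rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(grpo_advantages(rewards))  # correct ones get positive advantage
```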
Anyway, what OpenAI does is chop off the thinking part of the response so others can’t train on their outputs, but also so users can’t see the more “offensive” and out-of-character tone LLMs take in their thinking blocks. Seeing that would pull back the curtain, and OpenAI doesn’t want that because it ‘dispels’ the magic.
Gemini takes a more reasonable middle ground of summarizing/rewording the thinking block. But if you use a more open LLM (say, Z AI’s) via their UI or a generic API, it’ll show you the full thinking text.
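For instance, a sketch assuming DeepSeek’s OpenAI-compatible API, which documents a separate `reasoning_content` field (field names differ per provider):

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="sk-...")
resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Which is larger, 9.11 or 9.9?"}],
)
msg = resp.choices[0].message
print(msg.reasoning_content)  # the full, unsummarized thinking text
print(msg.content)            # the polished visible answer
```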
EDIT:
And to make my point clear, LLMs often take a very different tone during thinking.
For example, in the post’s text, ChatGPT likely ruminated on what the user wants and how to satisfy the query, what tone to play, and what OpenAI system prompt restrictions to follow, and planned out a response. It would reveal that it’s really just roleplaying, and “knows” it.
That’d be way more damning to OpenAI: not only did the LLM know exactly what it was doing, but OpenAI deliberately hid information that could have dispelled the AI psychosis.
Also, you can be sure OpenAI logs the whole response, to use for training later.