- cross-posted to:
- [email protected]
For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.
The tool doesn’t just check the text against its own knowledge. It can also consult sources, compare related articles, and spot inconsistencies within the article itself.
It produces a list of the problems it found, often explaining where the correct information came from.