- cross-posted to:
- [email protected]
For one month beginning on October 5, I ran an experiment: every day, I asked GPT-5 (more precisely, its "Extended Thinking" mode in ChatGPT) to find an error in Wikipedia's "Today's featured article". In 28 of the 31 featured articles (90%), it identified what I considered a valid error, often several. I have so far corrected 35 such errors.


Using ChatGPT to "fix" Wikipedia, what could possibly go wrong? (/s: the approach seems to work; this is just a tongue-in-cheek remark.)