Your ‘cognitive sparring partner’
https://www.youtube.com/watch?v=cpJZcl_eRfU&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20251202-lawyers-find-more-work-cleaning-up-after-ai-bots - podcast
time: 4 min 11 sec
I mean, “trust but verify” is a thing for a reason.
You cannot honestly call it “trust” if you still have to go through the output with a magnifying glass and make sure it didn’t tell anyone to put glue on their pizza.
When any other technology fails to achieve its stated purpose, we call it flawed and unreliable. But AI is so magical! It receives credit for everything it happens to get right, and it’s my fault when it gets something wrong.
The business must have some level of trust to deploy the tool.
They are trusting a “tool” that categorically cannot be trusted. They are fools to trust it.
Yes, they are fools.
Distrust but verify
The fact that it needs repeating is confirmation that AI output is dogshit that cannot be trusted. Using AI as anything more than a starting point, the way search engines are used, is dangerous for anything where accuracy matters.
And it just so happens that chatbots discourage the “verify” part by design…