irelephant [he/him]@programming.dev to iiiiiiitttttttttttt@programming.dev · English · 30 days ago
We put the Thing That Can't Do Numbers™ in your spreadsheets
cross-posted to: [email protected]
T156@lemmy.world · edited 29 days ago
At the same time, that sounds like something you’d just use old-fashioned sentiment analysis for. It’s less accurate, but also far less demanding, and doesn’t risk hallucinating.
The Ramen Dutchman@ttrpg.network · edited 20 days ago
> It’s less accurate and doesn’t risk hallucinating

I might be mistaken, but don’t these two lines mean the exact opposite in this context? Is AI more often right, or more often wrong?
T156@lemmy.world · 20 days ago
Both, because the ways they’re right and wrong are different. Sentiment analysis might misclassify some of the data, but it doesn’t risk making things up wholesale like an LLM would.
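To illustrate the point about classic sentiment analysis: in its simplest form it is just lexicon lookups. Here is a minimal sketch (the word lists are tiny made-up samples, not a real lexicon such as VADER's) showing why it can misclassify but cannot fabricate — its output is always drawn from a fixed label set.

```python
# Minimal lexicon-based sentiment scorer.
# POSITIVE/NEGATIVE are illustrative stand-ins for a real sentiment lexicon.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text: str) -> str:
    """Count positive vs. negative words; return one of three fixed labels."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("i love this great product"))   # positive
print(sentiment("this update is awful"))        # negative
print(sentiment("not bad at all"))              # misclassified as negative
```

The last line shows the classic failure mode (negation flips the meaning but not the word counts) — an error, but a bounded one, unlike an LLM inventing sentiment for text it never analyzed.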