AI agents wrong ~70% of time: Carnegie Mellon study
www.theregister.com
Posted by Jaden Norman@lemmy.world to Technology@lemmy.world (English) · 2 days ago · 266 comments
What’s 0.7^10?

About 0.028, so roughly 3%.

So the chances of it being right ten times in a row are about 3%.
No, the chances of it being wrong ten times in a row are about 3%. So the chances of it being right at least once are about 97%.
Ah, my bad, you’re right. For it to be consistently correct I should have done 0.3^10 = 0.0000059049, so the chances of it being right ten times in a row are less than one thousandth of a percent. No wonder I couldn’t get it to summarise my list of data correctly; it was always lying by the 7th row.
That looks better. Even with a fair coin, 10 heads in a row happens only about once in a thousand runs. And if you are feeding the output back into a new instance of a model, the quality is likely to degrade further with each pass.
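The arithmetic in this exchange is easy to check. A minimal Python sketch, assuming (as the thread does) that each task is an independent trial with a 70% chance of being wrong:

```python
# Probability arithmetic from the thread, treating each task as an
# independent trial with a 70% chance of being wrong.
p_wrong = 0.7
p_right = 1 - p_wrong  # 0.3

# Wrong on all 10 independent tries: 0.7^10 ≈ 0.028, roughly 3%.
all_wrong = p_wrong ** 10

# Right at least once in 10 tries: the complement, roughly 97%.
at_least_one_right = 1 - all_wrong

# Right 10 times in a row: 0.3^10 ≈ 0.0000059, under a thousandth of a percent.
all_right = p_right ** 10

# Fair-coin comparison: 10 heads in a row is 1/1024, about 0.1%.
ten_heads = 0.5 ** 10

print(f"all wrong:          {all_wrong:.4f}")
print(f"at least one right: {at_least_one_right:.4f}")
print(f"all right:          {all_right:.10f}")
print(f"ten heads:          {ten_heads:.6f}")
```

Note the distinction the thread turns on: 0.7^10 is the chance of failing *every* time, while 0.3^10 is the chance of succeeding *every* time; the two are not complements of each other.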
don’t you dare understand the explicitly obvious reasons this technology can be useful and the essential differences between P and NP problems. why won’t you be angry >:(