BrikoX@lemmy.zip to United States | News & Politics@midwest.social · English · 4 days ago
Connecticut Man's Case Believed to Be First Murder-Suicide Associated With AI Psychosis (gizmodo.com)
cross-posted to: [email protected], [email protected]
NutWrench@lemmy.ml · English · 4 days ago (edited)
AIs aren’t capable of figuring out the ethics of what you ask them. They just tell you what they think you want to hear.
“I’m thinking of doing (obviously horrible thing) because it will make me feel better.”
AI: “Well, that sounds like a wonderful idea.”
“But if I do (obviously horrible thing), horrible consequences will happen.” (explaining that the thing is BAD)
AI: “Well, you clearly can’t do THAT, can you?”
Corkyskog@sh.itjust.works · 4 days ago
The South Park episodes with Randy and the AI hit the nail on the head.