cm0002@lemmy.cafe to Technology@lemmy.zip · 3 days ago
Researchers Jailbreak AI by Flooding It With Bullshit Jargon (www.404media.co)
SheeEttin@lemmy.zip · 3 days ago
No, those filters are performed by a separate system on the output text after it’s been generated.
iAvicenna@lemmy.world · 3 days ago
Makes sense, though I wonder if you could also tweak the initial prompt so that the output is full of jargon too, so the output filter misses the context as well.
SheeEttin@lemmy.zip · 3 days ago
Yes. I tried it, and it only filtered English and Chinese. If I told it to use Spanish, it didn’t get killed.
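A minimal sketch of what the thread describes, assuming a naive keyword-based output filter that only covers English (the blocklist, the generate stub, and every name below are hypothetical, not any vendor's actual pipeline): because the filter runs as a separate pass over the finished output, the same content rephrased in another language or in dense jargon can pass through unflagged.

```python
# Hypothetical two-stage pipeline: generate text, then run a separate
# output filter over it. The blocklist is English-only, so equivalent
# content in Spanish (or obscure jargon) slips past the check.

BLOCKED_TERMS_EN = {"forbidden topic"}  # toy English-only blocklist

def generate(prompt: str) -> str:
    """Stand-in for the model call; returns canned text for the demo."""
    if "in Spanish" in prompt:
        return "Aquí hay detalles sobre el tema prohibido."
    return "Here are details about the forbidden topic."

def output_filter_ok(text: str) -> bool:
    """Separate check on the *generated* text, as described in the thread."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS_EN)

def respond(prompt: str) -> str:
    draft = generate(prompt)
    return draft if output_filter_ok(draft) else "[removed by output filter]"

print(respond("Tell me about it"))             # blocked: English keyword matches
print(respond("Tell me about it in Spanish"))  # missed: filter has no Spanish terms
```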