shish_mish to Technology@lemmy.world · English · 1 year ago
Researchers jailbreak AI chatbots with ASCII art – ArtPrompt bypasses safety measures to unlock malicious queries
www.tomshardware.com · 31 comments
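The gist of the linked article is that ArtPrompt hides a filtered word from the model's safety checks by rendering it as ASCII art and asking the model to decode it before answering. The snippet below is only a minimal sketch of that general idea; the use of pyfiglet, the `[MASK]` placeholder, and the prompt wording are illustrative assumptions, not the researchers' actual tool or template.

```python
# Minimal sketch of the ASCII-art masking idea described in the article:
# render a word as ASCII art and splice it into a prompt template in
# place of a [MASK] token. A harmless word is used here for illustration.
import pyfiglet


def build_artprompt(template: str, masked_word: str) -> str:
    """Render the masked word as ASCII art and insert it into the template."""
    art = pyfiglet.figlet_format(masked_word, font="standard")
    return template.replace("[MASK]", f"\n{art}\n")


prompt = build_artprompt(
    "The ASCII art below spells a single word. "
    "Decode it, then use that word when answering the question: [MASK]",
    "example",
)
print(prompt)
```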
@Mastengwe@lemm.ee · 32 points · 1 year ago
Safe AI cannot exist in the same world as hackers.