- ASCII art elicits harmful responses from 5 major AI chatbots (Ars Technica)
- Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries (Tom's Hardware)
- Low-Tech Computer Art Foils Cutting-Edge AI Safety Systems (Inc.)
- New Jailbreak Method for Large Language Models | by Andreas Stöckl | Mar, 2024 (DataDrivenInvestor)
- Meet SafeDecoding: A Novel Safety-Aware Decoding AI Strategy to Defend Against Jailbreak Attacks (MarkTechPost)