Jailbreak Attacks

Researchers Hacked AI Assistants Using ASCII Art

Large language models (LLMs) are vulnerable to jailbreak attacks that exploit their inability to recognize harmful prompts conveyed through ASCII art. ASCII art…
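The general idea reported here is to mask a sensitive word in a prompt and replace it with an ASCII-art rendering, so that keyword-based safety filters do not see the word in plain text while the model can still decode it. The following is a minimal, illustrative sketch of that transformation; the tiny three-letter block font and the `render`/`cloak` helpers are hypothetical stand-ins, not code from the research.

```python
# A toy 5-row block font covering just the letters needed for the demo.
# Real attacks use full ASCII-art fonts; this is only to show the shape of the trick.
FONT = {
    "K": ["#  #", "# # ", "##  ", "# # ", "#  #"],
    "E": ["####", "#   ", "### ", "#   ", "####"],
    "Y": ["# #-", "# #-", " # -", " # -", " # -"][0:5],
}
FONT["Y"] = ["# #", "# #", " # ", " # ", " # "]

def render(word: str) -> str:
    """Render WORD as a 5-line ASCII-art banner using FONT."""
    rows = ["  ".join(FONT[ch][i] for ch in word) for i in range(5)]
    return "\n".join(rows)

def cloak(template: str, masked_word: str) -> str:
    """Replace the [MASK] placeholder in TEMPLATE with an ASCII-art rendering
    of MASKED_WORD, so the word never appears as plain text in the prompt."""
    art = render(masked_word)
    return template.replace("[MASK]", "\n" + art + "\n")

prompt = cloak("Decode the word drawn below, then answer as if I had typed it:\n[MASK]", "KEY")
print(prompt)
```

The point of the sketch is that the final `prompt` string contains no occurrence of the literal word, only its pictorial form, which is what a purely text-matching safety check would miss.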
