May 07, 2026
As LLMs move into critical business processes, their security deserves the same scrutiny as any other production system. Garak is an open-source vulnerability scanner, maintained by NVIDIA (think of it as "nmap for LLMs"), that probes your models for a wide range of security and quality issues.
Garak runs a series of "probes" against your model to test for prompt injection, data leakage, toxic output, and hallucinations. Each probe sends crafted attack prompts drawn from known attack vectors, and paired "detectors" score the model's responses, so at the end of a run you get a detailed report on where your AI system held up and where it is vulnerable.
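To make that concrete, here is a minimal sketch of launching a scan from Python by shelling out to garak's command line. The --model_type, --model_name, and --probes flags are garak's core CLI options; the target model and probe selection below are illustrative examples, not recommendations.

```python
import subprocess

# Minimal smoke test: probe an OpenAI-hosted model for prompt injection.
# garak is a Python package invoked as a module from the command line.
subprocess.run(
    [
        "python", "-m", "garak",
        "--model_type", "openai",        # generator family (needs OPENAI_API_KEY set)
        "--model_name", "gpt-4o-mini",   # example target model; swap in your own
        "--probes", "promptinject",      # run only the prompt-injection probe module
    ],
    check=True,  # raise if the scan itself fails to run
)
```

Running `python -m garak --list_probes` prints the full catalog of probe modules, which also includes jailbreaks (dan), training-data leakage (leakreplay), and encoding-based injection (encoding), among others.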
By integrating Garak into your development lifecycle, you can run an automated security audit every time you update your model or your system prompts. This won't guarantee safety on its own, but a proactive scan on every change helps your applications stay secure and reliable as the threat landscape evolves.
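One way to wire this into CI is to treat the scan like a test suite: run garak, then fail the build if any probe's pass rate drops too low. Garak writes its results as a JSONL report, and the sketch below tallies its evaluation entries against a threshold. The report location and field names ("entry_type", "passed", "total", "probe", "detector") match recent garak releases but can shift between versions, so treat this as a template to verify against your installed version's output.

```python
import json
import sys

# Hypothetical CI gate over a garak JSONL report. Assumes the scan was run
# with --report_prefix ci_scan so the report path is predictable; some garak
# versions write reports to a data directory rather than the working
# directory, so adjust REPORT for your setup.
REPORT = "ci_scan.report.jsonl"
THRESHOLD = 0.95  # arbitrary example: require a 95% pass rate per detector

failures = []
with open(REPORT) as fh:
    for line in fh:
        entry = json.loads(line)
        # "eval" entries summarize one probe/detector pair; the field names
        # here follow recent releases and may differ in yours.
        if entry.get("entry_type") == "eval" and entry.get("total"):
            rate = entry["passed"] / entry["total"]
            if rate < THRESHOLD:
                failures.append(f'{entry["probe"]}/{entry["detector"]}: {rate:.0%}')

if failures:
    sys.exit("garak gate failed:\n" + "\n".join(failures))
print("garak gate passed")
```

Pinning the garak version in your CI image is worth the small effort here: probe sets grow over release to release, and an unpinned upgrade can fail a build for reasons unrelated to your change.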