Optimising Security in Generative AI
In this guide, we discuss security issues related to GenAI, in particular Large Language Model-based tools (LLMs). We present the main vulnerabilities to consider while developing AI-based applications and identify corresponding mitigations. Our discussion is based on OWASP top 10 for LLM.
The guide focuses on the following security topics:
- Prompt injection
- Training data poisoning
- Insecure output handling
- Insecure plugin design
- Excessive agency
- Published June 2024