Webinar

Top security issues with Generative AI

In this webinar, developers of solutions based on Large Language Models (LLMs) will learn about the top vulnerabilities that can compromise LLMs, along with mitigation strategies that reduce the associated risk.

LLM-based solutions such as chatbots and ChatGPT have recently become very popular, and organisations are scrambling to incorporate AI to improve their productivity and competitive edge. But LLMs also introduce new attack vectors that must be mitigated to avoid exposing the organisation to new vulnerabilities.

Do you develop LLMs, LLM plugins, or software built on an LLM? If so, you should take steps to mitigate the most common vulnerabilities that can compromise the LLM, its users, or the company hosting the application.

This webinar is based on the new OWASP Top 10 for LLM Applications. Security and AI experts from the Alexandra Institute will outline what you as a developer should be aware of, and how you can protect yourself and your organisation from the most common LLM vulnerabilities.

The webinar is organised by the Sb3D project in collaboration with the Alexandra Institute.