Top security issues with Generative AI

Date and time

Monday 15 January 2024




Sb3D in collaboration with the Alexandra Institute and the Danish Industry Foundation

In this webinar, developers of solutions based on Large Language Models (LLMs) will learn about the top vulnerabilities that can compromise LLMs and the mitigation strategies to reduce their risk.

Understand basic LLM vulnerabilities and how to mitigate them

LLM-based solutions such as ChatBots and ChatGPT have recently become very popular, and organisations are scrambling to incorporate AI to improve their productivity and competitive edge. But LLMs also introduce new attack vectors, that must be mitigated to avoid introducing vulnerabilities.

Do you develop Large Language Models (LLMs), plugins, or software that is based on an LLM?

If so, you should take steps to mitigate the most common vulnerabilities that can compromise the LLM, its users, or the company hosting the application.

This webinar is based on the new OWASP top 10 for LLM vulnerabilities, in which security and AI experts from the Alexandra Institute will outline what you as the developer should be aware of, and how you can protect yourself and your organisation from the most common LLM vulnerabilities.

Target audience

The webinar is aimed at:

  • Software developers.
  • Managers of products based on LLM technologies, such as chatbots, voice recognition software, text to speech software, image generation, etc.
  • Tech-savvy IT professionals.

Key takeaways

  • Knowledge about vulnerabilities and risks when developing Large Language Model based solutions.
  • Knowledge about how you can protect the LLM solution and your organisation from the most important LLM vulnerabilities.


If you have any questions, please feel free to contact:

Sebastian Holmgaard Christophersen
Alexandra Institute

E-mail: s.christophersen@alexandra.dk
Phone: 93 52 26 54