GENAI Security: Risks and Challenges

  • Thursday, 03 Oct 2024 11:00AM EDT (03 Oct 2024 15:00 UTC)
  • Speaker: Ahmed Abugharbia

While GenAI allows organizations to tackle new challenges and minimize resource expenditure, it also presents several security risks, as is common with any new technology. Cybercriminals can exploit novel attack vectors, often due to organizations' limited understanding of GenAI's complexities.
In this talk, we will explore the architecture of GenAI applications and examine the security risks that can impact them. We will begin our discussion by introducing fundamental terminology, including concepts such as Large Language Models (LLMs), Vector Databases, and Retrieval-Augmented Generation (RAG).
Next, we will dive into a typical architecture, examining how different components interact with each other and with the external environment. Following this, we will explore the various risks associated with RAG-based GenAI applications, categorizing them into three main areas: data risks, LLM model risks, and application risks. For each category, we will provide practical examples to illustrate these security concerns.
One attack we will discuss is the concept of Prompt Injection. Prompt Injection can manipulate the instructions programmed into an AI assistant, potentially causing it to reveal sensitive information or perform malicious actions, depending on its integrations. A specific example of this involves targeting the Vector Database. RAG-based GenAI applications rely on Vector Databases for their knowledge base. Unauthorized access to these databases could expose sensitive data and, more critically, facilitate Prompt Injection attacks.
Finally, we will conclude with security recommendations for effectively managing and securing GenAI implementations.

GenAI Risks and Challenges