Beta

SEC545: GenAI and LLM Application Security™

  • Online
7 CPEs

Currently, industry security practices for Generative AI (GenAI) are not standardized due to the novelty of the field. This course aims to contribute to the development of GenAI security best practices, guiding the security community through ongoing research and an evolving curriculum. SEC545 training focuses on understanding the security risks associated with Generative AI (GenAI) applications and implementing security controls throughout their lifecycle - from development to hosting and deployment.

What You Will Learn

Secure the Future of GenAI

SEC545 training provides an in-depth exploration of GenAI technologies, starting with core principles and underlying technologies. It will assess security risks by identifying and analyzing real-world threats impacting GenAI applications. As students progress, they will learn to establish security best practices by exploring different measures for securing GenAI applications effectively.

The course begins with a brief introduction to the fundamentals of GenAI, covering key concepts and terminologies such as Large Language Models (LLMs), embeddings, and Retrieval-Augmented Generation (RAG). It then examines the security risks associated with GenAI, including prompt injection attacks, malicious models, and third-party supply chain vulnerabilities. Following this, the course dives into the essential components needed to build a secure GenAI application, including coverage of vector databases, LangChain, and AI agents. The course concludes with a comprehensive overview of hosting GenAI applications, discussing options for local deployment, cloud solutions, and platforms like AWS Bedrock.

Business Takeaways

  • Understand GenAI Applications
  • Identify potential security risks associated with GenAI applications
  • Learn how to mitigate GenAI security risks effectively

Skills Learned

  • Understand key concepts and terminologies: Gain a deep understanding of GenAI, LLM architectures, and their application in real-world scenarios.
  • Explore various models and tools: Examine the types of models and tools available for building and deploying GenAI applications.
  • Explore fine-tuning and customization: Learn how to fine-tune and customize models for specific use cases.
  • Assess risks and mitigation strategies: Identify security risks unique to GenAI applications and explore effective mitigation techniques.
  • Secure RAG, embeddings, and vector databases: Understand Retrieval-Augmented Generation (RAG), Embeddings, and VectorDB, and how to securely configure different components.
  • Explore operations and security controls: Explore the operational aspects of building and deploying GenAI applications and learn about the relevant security controls.
  • Compare hosting options: Understand the various GenAI hosting options and their differences from a security perspective.
  • Leverage cloud security controls: Learn about the security controls offered by cloud providers for LLM hosting services.
  • Explore GenAI adjacent technologies: Examine technologies such as LangChain and agents, and understand the security risks they introduce.
  • Integrate GenAI into security frameworks: Learn how to build or integrate GenAI security practices into existing organizational security frameworks.

Hands-On GenAI and LLM Application Security Training

This course covers essential GenAI concepts, technologies, and security risks, featuring hands-on labs designed to illustrate how attackers can exploit specific vulnerabilities and the strategies to mitigate them. Throughout the course, we will reference the OWASP Top 10 for LLMs and Generative AI applications to highlight prevalent security issues and their solutions.

The labs are conducted using a chat application hosted on AWS EKS, with agents capable of assisting in evaluating resumes for potential candidates and interacting with various AWS infrastructure components. The app also integrates with a Weaviate Vector Database to demonstrate both attack scenarios and defense mechanisms. Participants will work with multiple LLM providers, including OpenAI, a locally hosted Llama 3.2, and AWS Bedrock, providing a comprehensive hands-on experience in securing GenAI applications.

Included Labs:

  • Prompt Injection
  • Comprising VectorDB
  • Compromising LLM Supply Chain
  • Pivoting from LLMs

Additional Free Resources

What You Will Receive

  • Electronic and printed courseware
  • Mp3 audio files of course lecture
  • SANS provisioned AWS account
  • SANS provided OpenAI API Key

Syllabus (7 CPEs)

Download PDF
  • Overview

    The course begins with an introduction to Generative AI (GenAI) concepts, including General AI, Large Language Models (LLMs), Vector Databases, and Embeddings. Students will explore prompt injection techniques and their implications.

    Next, the focus shifts to security risks associated with GenAI, such as prompt and instruction poisoning, as well as malicious models. Students will learn how to compromise Vector Databases and corrupt data.

    The course then delves into GenAI application architecture, covering LLM customization, integration with tools like LangChain and AI agents, and prompt engineering. Students will gain hands-on experience in compromising the LLM supply chain and pivoting to other components within the infrastructure.

    The course concludes with a discussion on hosting GenAI applications, utilizing local models, OpenAI, AWS Bedrock, and Hugging Face. This practical experience equips students with essential skills for securing GenAI applications and deployment.

    Exercises
    • Lab 1.1: Prompt injection
    • Lab 1.2: Compromising Vector DB
    • Lab 1.3: Compromising LLM Supply Chain
    • Lab 1.4: Pivoting from LLMs
    Topics
    • GenAI Introduction and Concepts
      • General AI and Generative AI
      • Large Language Models (LLMs)
      • OpenAI Models
      • Retrieval-Augmented Generation (RAG)
      • Prompt Injection
    • Augmenting GenAI Knowledge
      • Vector Databases
      • Knowledge Sources
      • Agents
      • Poisinging Data Sources
      • Prompt and instruction Poisoning
    • Hosting GenAI applications
      • AWS Bedrock
      • Hugging Face
      • Local Models
      • LLM customization
      • Supply Chain Attacks
    • GenAI Application Architecture
      • LLM Models
      • Decision-making process
      • Langchain
      • Building and Deploying LLMs

Prerequisites

  • Familiarity with Linux command shells and associated commands
  • Familiarity Python and Bash scripting
  • Basic understanding of common application attacks and vulnerabilities

Laptop Requirements

!!! IMPORTANT NOTICE !!!

Cloud Accounts:

Time-limited AWS accounts are provided to students 24 hours before class starts by SANS to use for completing the labs. Students can log in to their SANS account and visit the MyLabs page to download their cloud credentials the day before class begins.

Mandatory Laptop Requirement:

Students must bring their own system configured according to these instructions.

A properly configured system is required to fully participate in this course. If you do not carefully read and follow these instructions, you will likely leave the class unsatisfied because you will not be able to participate in hands-on exercises that are essential to this course. Therefore, we strongly urge you to arrive with a system meeting all the requirements specified for the course.

Students must be in full control of their system's network configuration. The system will need to communicate with the cloud-hosted DevOps server using a combination of HTTPS, SSH, and SOCKS5 traffic on non-standard ports. Running VPN, intercepting proxy, or egress firewall filters may cause connection issues communicating with the DevOps server. Students must be able to configure or disable these services to connect to the lab environment.

Bring Your Own Laptop Configured Using the Following Directions:

A properly configured system is required for each student participating in this course. Before starting your course, carefully read and follow these instructions exactly:

  • Host Operating System: Latest version of Windows 10, macOS 10.15.x or later, or Linux that also can install and run the Firefox browser described below.
  • Fully update your host operating system prior to the class to ensure you have the right drivers and patches installed.
Mandatory Host Hardware Requirements
  • CPU: 64-bit 2.5+ GHz multi-core processor or higher
  • Wireless Ethernet 802.11 B/G/N/AC
  • Local Administrator Access within your host operating system
  • Must have the ability to install Firefox, enable a Firefox extension, and install a new trusted root certificate on the machine.
Mandatory Software Requirements
  • Prior to class, ensure that the following software is installed on the host operating system:
  • Firefox 120.0+
  • Firefox SmartProxy extension: https://addons.mozilla.org/en-US/firefox/addon/smartproxy/
In Summary

Before beginning the course you should:

After you have completed those steps, access the SANS provided AWS account to connect to the SANS Cloud Security Flight Simulator and connect to the SEC545 DevOps server. The SEC545 Instance hosts an electronic workbook, VSCode, Gitlab, and Terminal services that can be accessed through the Firefox browser.

Your course materials include a "Setup Instructions" document that details important steps you must take before you travel to a live class event or start an online class. It may take 30 minutes or more to complete these instructions.

Your class uses an electronic workbook for its lab instructions. In this new environment, a second monitor and/or a tablet device can be useful for keeping class materials visible while you are working on your course's labs.

If you have additional questions about the laptop specifications, please contact customer service.

Author Statement

Emerging technologies often bring substantial value, transforming industries and opening new possibilities. However, their rapid adoption also introduces complex risks that are frequently not fully understood at the outset. As these technologies evolve, the nature and scale of associated risks can shift in unexpected ways, making it challenging to anticipate their full impact. This pattern has been clear with technologies like cloud computing, where the pace of innovation often surpasses our understanding of its security implications. The greater the potential of a technology, the more complex its associated risks.

AI, particularly generative AI, represents the next major wave of transformation, with the potential to reshape nearly every application. This course aims to deepen students' understanding of GenAI and its security challenges, equipping them with the skills to proactively manage and mitigate these risks.

As the industry evolves, so will this course, ensuring that our approach to securing GenAI applications remains at the forefront.

-Ahmed Abugharbia

Register for SEC545

Learn about Group Pricing

Prices below exclude applicable taxes and shipping costs. If applicable, these will be shown on the last page of checkout.

Loading...