New

SEC545: GenAI and LLM Application Security™

  • In Person (3 days)
  • Online
18 CPEs

SEC545 training focuses on understanding the security risks associated with Generative AI (GenAI) applications and implementing security controls throughout their lifecycle—from development to hosting and deployment. The course begins with an introduction to core GenAI concepts, covering popular tools and vendors. It then explores specific topics such as large language models (LLMs), agents, retrieval-augmented generation (RAG), and best practices for hosting GenAI applications. Security controls and risk mitigation strategies are examined at each stage. The course concludes with guidance on establishing a GenAI security practice or integrating it into existing security frameworks.

What You Will Learn

Currently, industry security practices for Generative AI (GenAI) are not standardized due to the novelty of this field. This course aims to contribute to the development of GenAI security best practices, guiding the security community through ongoing research and an evolving curriculum.

SEC545 training provides an in-depth exploration of GenAI technologies, starting with core principles and underlying technologies. It will assess security risks by identifying and analyzing real-world threats impacting GenAI applications. As students progress, they will learn to establish security best practices by exploring different measures for securing GenAI applications effectively.

The course begins with an introduction to the fundamentals of GenAI, covering key concepts and terminologies such as Large Language Models (LLMs), embeddings, and Retrieval-Augmented Generation (RAG). It then examines the security risks associated with GenAI, including prompt injection attacks, malicious models, and third-party supply chain vulnerabilities. Following this, the course dives into the essential components needed to build a GenAI application, including coverage of vector databases, LangChain, and AI agents. The course concludes with a comprehensive overview of hosting GenAI applications, discussing options for local deployment, cloud solutions, and platforms like AWS Bedrock.

Business Takeaways

  • Understanding GenAI Applications
  • Identifying potential risks associated with GenAI applications
  • Learning how to mitigate GenAI risks effectively

Skills Learned

  • Understand key concepts and terminologies: Gain a deep understanding of GenAI, LLM architectures, and their application in real-world scenarios.
  • Explore various models and tools: Examine the types of models and tools available for building and deploying GenAI applications.
  • Explore fine-tuning and customization: Learn how to fine-tune and customize models for specific use cases.
  • Assess risks and mitigation strategies: Identify security risks unique to GenAI applications and explore effective mitigation techniques.
  • Secure RAG, embeddings, and vector databases: Understand Retrieval-Augmented Generation (RAG), Embeddings, and VectorDB, and how to securely configure different components.
  • Explore operations and security controls: Explore the operational aspects of building and deploying GenAI applications and learn about the relevant security controls.
  • Compare hosting options: Understand the various GenAI hosting options and their differences from a security perspective.
  • Leverage cloud security controls: Learn about the security controls offered by cloud providers for LLM hosting services.
  • Explore GenAI adjacent technologies: Examine technologies such as LangChain and agents, and understand the security risks they introduce.
  • Integrate GenAI into security frameworks: Learn how to build or integrate GenAI security practices into existing organizational security frameworks.

Hands-On GenAI and LLM Application Security Training

With the anticipated transformative impact of Generative AI (GenAI) on industries and technologies, the need for robust security practices to address its risks has never been more critical. SEC545™ training equips students with the necessary knowledge to secure GenAI applications.

SEC545™ training offers a comprehensive exploration of GenAI technologies, starting with foundational principles and underlying frameworks. It rigorously evaluates security risks by identifying and analyzing real-world threats affecting GenAI applications. Students will progressively learn to implement security best practices by exploring strategies to safeguard GenAI systems effectively.

By the end of this training, students will possess a holistic understanding of GenAI security, empowering them to design, deploy, and defend GenAI systems in a rapidly evolving technological landscape.

Included Labs:

Section 1:

  • Lab 1.1: LLMs and Prompt Injection
  • Lab 1.2: Fine-tuning OpenAI Models
  • Lab 1.3: Compromising Vector Database
  • Lab 1.4: Safe Use and Moderation

Section 2:

  • Lab 2.1: AWS Bedrock
  • Lab 2.2: Compromising LLM Supply Chain
  • Lab 2.3: Pivoting from LLMs
  • Lab 2.4: Langchain Security

Section 3:

  • Lab 3.1: Model Serialization Attacks
  • Lab 3.2: MLSecOps - Securing AI Deployment Pipeline
  • Lab 3.3: Capture the Flag

Additional Free Resources

What You Will Receive

  • Electronic and printed courseware
  • Mp3 audio files of course lecture
  • SANS provisioned AWS account
  • SANS provided OpenAI API Key

Syllabus (18 CPEs)

Download PDF
  • Overview

    This course begins with a thorough introduction to GenAI fundamentals, covering essential concepts such as Large Language Models (LLMs), embeddings, and Retrieval-Augmented Generation (RAG). Students will dive into the security risks unique to GenAI, including prompt injection attacks, malicious model manipulation, and vulnerabilities within third-party supply chains.

    Exercises
    • Lab 1.1: LLMs and Prompt Injection
    • Lab 1.2: Fine-tuning OpenAI Models
    • Lab 1.3: Compromising Vector Database
    • Lab 1.4: Safe Use and Moderation
    Topics

    GenAI Introduction and Concepts

    • General AI and Generative AI
    • Large Language Models (LLMs)
    • Retrieval-Augmented Generation (RAG)
    • GenAI Application Components Security
    • Prompt Injection

    Fine-tuning Models:

    • OpenAI fine-tuning
    • File-tuning risks and models' access

    Augmenting GenAI Knowledge

    • Vector Databases
    • Knowledge Sources
    • Poisoning Data Sources
    • Prompt and instruction Poisoning

    Safe Use and Moderation

  • Overview

    Building on the foundation of section 1, students will examine the key components needed to develop GenAI applications, including vector databases, LangChain, and AI agents. The course extends to deployment strategies, offering a comparative analysis of cloud-based solutions and on-premises setups, with an emphasis on the specific security risks inherent to each option.

    Exercises
    • Lab 2.1: AWS Bedrock
    • Lab 2.2: Compromising LLM Supply Chain
    • Lab 2.3: Pivoting from LLMs
    • Lab 2.4: Langchain Security
    Topics

    Hosting GenAI Applications

    • AWS Bedrock and it's security features
    • Running Local Models Securely
    • LLM customization
    • Models hosting and Supply Chain Attacks

    GenAI Applications Architecture

    • Building and deploying GenAI applications
    • GenAI Architecture Security

    Agentic AI

    • Agents` design and capabilities
    • Agents' security risks

    Langchain Security

  • Overview

    In the third and final section, this course shifts its focus to MLSecOps—the integration of security operations into the machine learning lifecycle—and concludes with advanced threat modeling techniques aimed at identifying, assessing, and comprehensively mitigating risks.

    Exercises
    • Lab 3.1: Model Serialization Attacks
    • Lab 3.2: MLSecOps - Securing AI Deployment Pipeline
    • Lab 3.3: Capture the Flag
    Topics

    Model Sanitization Attacks

    MLSecOps - Securing AI Deployment Pipeline

    • GenAI application Lifecycles
    • Data Protections
    • Security Operations

    Capture the Flag

Prerequisites

  • Familiarity with Linux command shells and associated commands
  • Familiarity Python and Bash scripting
  • Basic understanding of common application attacks and vulnerabilities

Laptop Requirements

!!! IMPORTANT NOTICE !!!

Cloud Accounts:

Time-limited AWS accounts are provided to students 24 hours before class starts by SANS to use for completing the labs. Students can log in to their SANS account and visit the MyLabs page to download their cloud credentials the day before class begins.

Mandatory Laptop Requirement:

Students must bring their own system configured according to these instructions.

A properly configured system is required to fully participate in this course. If you do not carefully read and follow these instructions, you will likely leave the class unsatisfied because you will not be able to participate in hands-on exercises that are essential to this course. Therefore, we strongly urge you to arrive with a system meeting all the requirements specified for the course.

Students must be in full control of their system's network configuration. The system will need to communicate with the cloud-hosted DevOps server using a combination of HTTPS, SSH, and SOCKS5 traffic on non-standard ports. Running VPN, intercepting proxy, or egress firewall filters may cause connection issues communicating with the DevOps server. Students must be able to configure or disable these services to connect to the lab environment.

Bring Your Own Laptop Configured Using the Following Directions:

A properly configured system is required for each student participating in this course. Before starting your course, carefully read and follow these instructions exactly:

  • Host Operating System: Latest version of Windows 10, macOS 10.15.x or later, or Linux that also can install and run the Firefox browser described below.
  • Fully update your host operating system prior to the class to ensure you have the right drivers and patches installed.
Mandatory Host Hardware Requirements
  • CPU: 64-bit 2.5+ GHz multi-core processor or higher
  • Wireless Ethernet 802.11 B/G/N/AC
  • Local Administrator Access within your host operating system
  • Must have the ability to install Firefox, enable a Firefox extension, and install a new trusted root certificate on the machine.
Mandatory Software Requirements
  • Prior to class, ensure that the following software is installed on the host operating system:
  • Firefox 120.0+
  • Firefox SmartProxy extension: https://addons.mozilla.org/en-US/firefox/addon/smartproxy/
In Summary

Before beginning the course you should:

After you have completed those steps, access the SANS provided AWS account to connect to the SANS Cloud Security Flight Simulator and connect to the SEC545 DevOps server. The SEC545 Instance hosts an electronic workbook, VSCode, Gitlab, and Terminal services that can be accessed through the Firefox browser.

Your course materials include a "Setup Instructions" document that details important steps you must take before you travel to a live class event or start an online class. It may take 30 minutes or more to complete these instructions.

Your class uses an electronic workbook for its lab instructions. In this new environment, a second monitor and/or a tablet device can be useful for keeping class materials visible while you are working on your course's labs.

If you have additional questions about the laptop specifications, please contact customer service.

Author Statement

Emerging technologies often bring substantial value, transforming industries and opening new possibilities. However, their rapid adoption also introduces complex risks that are frequently not fully understood at the outset. As these technologies evolve, the nature and scale of associated risks can shift in unexpected ways, making it challenging to anticipate their full impact. This pattern has been clear with technologies like cloud computing, where the pace of innovation often surpasses our understanding of its security implications. The greater the potential of a technology, the more complex its associated risks.

AI, particularly generative AI, represents the next major wave of transformation, with the potential to reshape nearly every application. This course aims to deepen students' understanding of GenAI and its security challenges, equipping them with the skills to proactively manage and mitigate these risks.

As the industry evolves, so will this course, ensuring that our approach to securing GenAI applications remains at the forefront.

-Ahmed Abugharbia

Register for SEC545

Learn about Group Pricing

Prices below exclude applicable taxes and shipping costs. If applicable, these will be shown on the last page of checkout.

Loading...