9:00 am - 9:10 am
MT
3:00 pm - 3:10 pm UTC | In-Person & Streaming Live Online Opening Remarks |
9:10 am - 9:40 am
MT
3:10 pm - 3:40 pm UTC | In-Person & Streaming Live Online Keynote: What to Expect When You’re Expecting Your GenAI Baby Sounil Yu, Co-founder and Chief AI Safety Officer, Knostic Many of us are scrambling to leverage GenAI, but it’s hard to anticipate the risks, challenges, and controls. Using various mental models, we can get a clearer understanding of what to expect in the next stages of the AI revolution and start building governance processes and security capabilities to get ahead of potential challenges.
Show More
|
9:45 am - 10:10 am
MT
3:45 pm - 4:10 pm UTC | In-Person & Streaming Live Online Fireside Chat Sounil Yu, Co-founder and Chief AI Safety Officer, Knostic |
10:15 am - 2:00 pm
MT
4:15 pm - 8:00 pm UTC | Workshops | In-Person Only Adversarial Machine Learning workshop Get ready to flip the script on the machines! During this 2-hour escapade, you will explore adversarial ML techniques, from exploiting the models to bypassing their predictions. We'll start from scratch to teach you how to turn the tables on ML models. No prior adversarial ML experience needed! Workshop Outline / Schedule
Introduction and Lab Setup (5 minutes)
- Who we are
- Instructions on getting lab setup
Instruction: Overview of Inference Style Attacks (10 minutes)
- Intro to Adversarial Machine Learning
- Intro to Inference Style Attacks
Labs: Bypassing ML Models (20 Minutes)
- Hands on conducting inference attack on Image recognition models
Instruction: Stealing ML Models (10 Minutes)
- How and Why models are stolen
- Examples of this done in the wild
Labs: Model Stealing (15 Minutes)
- Train and steal a model using open-source tooling
Instruction: Data Poisoning (10 Minutes)
- Examples of data poisoning in the wild
- How ML models consume data and where opportunities exist to poison the data
Labs: Data Poisoning (15 minutes)
- Train an image recognition model
- Create a dataset with a backdoor implanted into the images
- Retrain the model
- Test the data poison success
Instruction: Overview of Attacking ML Model Files (10 minutes)
- How model files are saved/loaded
- How we can take advantage of this process from an attacker’s perspective
Labs: Attacking ML Model Files (15 Minutes)
- Create new model files with arbitrary code injected into them
- Inject arbitrary code into existing model files
Labs: Prompt Injection Interactive Session (10 Minutes) Outcomes / Learnings
At the end of this session, attendees will have a basic understanding of adversarial machine learning techniques which are being used in the wild today. With hands-on experience they will get to know various tools available to test their own ML infrastructure, and learn strategies to counteract these styles of attacks. Required Materials
Attendees will need a laptop with an internet connection and the ability to run a Jupyter notebook via a local Jupyter instance, Visual Studio Code, Google Colab, or similar setup. [THIS MIGHT BE CHANGING BECAUSE THERE CAN BE DEPENDENCY ISSUES AND WE"RE LOOKING AT THE POSSIBILITY OF CREATING A DOCKER INSTANCE FOR IT] Intended Audience
This session is intended for people who are tasked with testing the robustness and security of their machine learning systems. While no background in machine learning will be necessary, experience with writing python code is highly recommended.
Show More
|
10:15 am - 10:35 am
MT
4:15 pm - 4:35 pm UTC | In-Person & Streaming Live Online AI in Cyber Security for Colour RED and Colour BLUE In this session, we describe and showcase the concept of AI agents, starting with using AI for red teaming and demoing how to build an AI agent that can automate the process of vulnerability discovery, exploitation, and reporting. We then share our experience leading the development of an AISecOps analyst to help the security operations blue team detect and respond faster and better, giving access to critical information, data, and insights across their threat landscape. Information comes in different forms and from different places: alerts, tickets, vulnerabilities, policies, processes, etc. How do we collect and provide this data in a single GenAI pane to the blue team? We share hands-on cyber security operations AI use cases, the approach, the architecture, and lessons learned. We explore the growth that AISecOps is experiencing to become a composite of expertise, allowing it to extend to cover other AIOps areas such as NetOps and DataOps. Finally, we discuss how secure the AI pipeline was as we built and operated the platform. We will demo the AISecOps analyst and demonstrate it in operation.
The session will help attendees:
- Understand the structure of AI agents and agentic workflows
- Demonstrating AI for red teams and AI for security operation centers.
- Discussion on the architrave of an AI platform
- Showcasing critical functions delivered by an AI architecture: text-to-SQL and retrievals With examples
- Sharing the lessons learned across the different sections
Show More
|
10:35 am - 11:00 am
MT
4:35 pm - 5:00 pm UTC | In-Person & Streaming Live Online Break |
11:00 am - 11:35 am
MT
5:00 pm - 5:35 pm UTC | In-Person & Streaming Live Online Using LLMs, Embeddings, and Similarity to Assist DFIR Analysis Generative AI and Large Language Models (LLM) have become increasingly popular over the last couple of years. The attackers are using this technology to gain advantages, thus, so should the investigators. This session will examine innovative methods of utilizing LLMs, embedding models, and Machine Learning clustering algorithms in concert to help you efficiently analyze data sets using similarity. The techniques demonstrated in this session can provide handy tools to have for triaging certain types of data points such as command line executions. This method will first be demonstrated via Jupyter Notebook to show the fun technical details of how and why this method works, then, an open-source tool that can run against native Windows Event Logs will be utilized and demonstrated which automates this method allowing you to partake in the fun! You will not only learn about new innovative methods but also get free tools to walk always with.
Show More
|
11:40 am - 12:15 pm
MT
5:40 pm - 6:15 pm UTC | In-Person & Streaming Live Online Securing the Grid: AI's Promise for Cyber Resilience in Power Systems
Show More
|
12:15 pm - 1:15 pm
MT
6:15 pm - 7:15 pm UTC | In-Person & Streaming Live Online Lunch |
1:15 pm - 1:50 pm
MT
7:15 pm - 7:50 pm UTC | In-Person & Streaming Live Online Unlocking Cyber Insights with AI: Using ML and LLMs for Next-Gen Analysis Explore how data-driven cybersecurity engineers can harness AI and ML to decode data. Using ML methods and LLMs, this talk showcases practical techniques for anomaly detection and behavior classification, featuring open-source tool demos. Through in-depth demonstrations moving from raw data to actionable models, attendees will reimagine cybersecurity analysis.
Full outline:
The next generation of cybersecurity engineers will be data engineers specializing in cybersecurity, leveraging modern technology to interpret the vast amounts of data they collect. We are constantly inundated with information about GPT, ML, AI, and various other acronyms. The critical question is: how can cybersecurity engineers utilize these tools effectively for more efficient analysis? Using ML and AI from a cybersecurity researcher's perspective, we will discuss concrete examples that demonstrate how to uncover insights and make discoveries. A key highlight will be a discussion on benchmarks—evaluating the most promising language models for applications of ML in cybersecurity.
To set the stage, we will examine the types of data commonly encountered in a cybersecurity ecosystem. Through a combination of the traditional matrix data structure, pandas, paired with the capabilities of LLMs, we will explore methods for extracting key patterns and features. The end goal of this exploration is to classify behaviors as either malicious or benign.
Next, we will delve into the framework of Exploratory Data Analysis (EDA), that is employing statistical methods and visualizations to make sense of opaque datasets. We will demonstrate how AI can assist in “questioning” data, drawing actionable conclusions, and building anomaly detection models.
Finally, we will showcase how practitioners can use and tune language models to classify data as malicious or benign, offering insights into the effectiveness of different LLMs for tackling cybersecurity challenges. This presentation includes open-source demonstrations using Jupyter notebooks and public datasets featuring known network attacks, allowing participants to reproduce and adapt these demonstrations for their own projects.
The ultimate goal of this talk is to demonstrate how defenders can use data to uncover malicious activity using traditional ML techniques, such as EDA and classification, implemented with the help of various language models. By comparing language models’ effectiveness in traditional ML tasks for cybersecurity applications, we aim to inspire creativity and resourcefulness among cybersecurity engineers. Through this journey from raw data to actionable models, we will highlight the vast potential ML and AI offer to redefine the field of cybersecurity.
Demo repository (subject to changes): https://github.com/mundruid/cyberdata-mlai
Show More
|
1:55 pm - 2:30 pm
MT
7:55 pm - 8:30 pm UTC | In-Person & Streaming Live Online Data to Defense: Generative AI and RAG Powering Real-Time Threat Response James Spiteri, Director of PM, Generative AI and ML - Security Analytics, Elastic See how Retrieval-Augmented Generation (RAG) transforms raw data into actionable intelligence for incident response. Drawing from extensive research, this session includes practical demonstrations on how generative AI empowers security teams, surfacing critical insights to accelerate threat detection beyond traditional methods
Show More
|
2:15 pm - 4:15 pm
MT
8:15 pm - 10:15 pm UTC | Workshops | In-Person Only Harnessing and Securing Large Language Models: A Hands-On Workshop for Cyber Defenders Sounil Yu, Co-founder and Chief AI Safety Officer, Knostic LLMs have revolutionized the way we do cybersecurity, but they also open new avenues for attackers. In this hands-on workshop, you will explore three core facets of LLMs in cybersecurity: - LLMs for Security
- LLMs against Security
- Security for LLMs
First, we will learn how to leverage LLMs to accelerate detection, investigation, and reporting of threats. Next, you will discover how adversaries might weaponize these same capabilities to craft sophisticated phishing attacks or malicious scripts. Finally, you will delve into the best practices for securing LLM-based applications against threats such as prompt injection, data leakage, and model poisoning. Expect a highly interactive session with live exercises, real-world scenarios, and actionable insights that you can immediately apply to enhance your organization’s defensive posture.
Show More
|
2:35 pm - 3:10 pm
MT
8:35 pm - 9:10 pm UTC | In-Person & Streaming Live Online Influence Operations with AI and Cyber Enabled Ops This presentation will explore the strategy behind influence operations conducted by state-sponsored actors and how they leverage AI to weaponize their campaigns.
Show More
|
3:10 pm - 3:30 pm
MT
9:10 pm - 9:30 pm UTC | In-Person & Streaming Live Online Break |
3:30 pm - 3:50 pm
MT
9:30 pm - 9:50 pm UTC | In-Person & Streaming Live Online Building an AI Pen-testing Assistant This talk will go over the tools and techniques used to build an AI assistant like the one developed for the new SEC535: Offensive AI - Attack Tools and Techniques course. Attendees will be able to take away how to leverage AI model tool calling to automate the running of tools like nmap and metasploit and how to keep the outputs of those tools in sync with an AI chat context. Attendees will also learn how to properly prompt the AI to use tools and associated commands correctly.
Show More
|
3:55 pm - 4:15 pm
MT
9:55 pm - 10:15 pm UTC | In-Person & Streaming Live Online The AI Security Gap: Addressing the Unique Vulnerabilities of GenAI-based applications Gabriele Zanoni, Mandiant Consulting Country Manager Italy and Principal Strategic Consultant, Google This presentation provides attendees with actionable insights and strategies to effectively secure their own AI systems against current and future threats in this rapidly evolving landscape.This presentation explores the distinct security risks introduced by the rapid expansion of AI-powered applications, particularly online chatbots. It goes beyond general AI security issues to address the growing threat of category drift, where applications are exploited by users for unintended and potentially harmful purposes.
The presentation examines practical strategies to mitigate a range of risks, including prompt injection, data poisoning, and application misuse. It investigates both proactive measures like robust input validation and output analysis, as well as reactive approaches for identifying and addressing malicious activity.
Attendees will leave this presentation with actionable insights and strategies to effectively secure their own AI systems against current and emerging threats in this rapidly changing environment.
Show More
|
4:20 pm - 4:40 pm
MT
10:20 pm - 10:40 pm UTC | In-Person & Streaming Live Online The Human Element in AI: Why People Still Matter in a Machine-Led Era Emphasizing the critical role of human intuition, ethical judgment, and interpersonal skills in an increasingly AI-driven world. This talk uncovers why, despite advancements in machine learning, the human touch remains indispensable for ethical and effective AI integration.
Show More
|
4:40 pm - 5:00 pm
MT
10:40 pm - 11:00 pm UTC | In-Person & Streaming Live Online Day 1 Wrap-Up |
5:00 pm - 7:30 pm
MT
11:00 pm - 1:30 am UTC | SANS360 @ Night! | In-Person and Streaming Live Online SANS 360 & Reception Kick off your evening with food, drinks, and networking at the SANS360 Reception! The 360 talks will run from 6:00 PM to 7:00 PM - 10 speakers x 360 seconds/each = 60 minutes of amazing AI content. Don’t miss this opportunity to connect, learn, and engage with AI and cybersecurity professionals in a high-energy setting.
SANS 360 Speakers: 6:00 - 6:06 | Chaitanya Rahalkar 6:06 - 6:12 | Jason Ross 6:12 - 6:18 | Vaishali Vinay 6:18 - 6:24 | Carolyn Duby 6:24 - 6:30 | Joy Toney 6:30 - 6:36 | Bethany Abbate 6:36 - 6:42 | Casey Bleeker 6:42 - 6:48 | Foster Nethercott 6:48 - 6:54 | Jess Garcia 6:54 - 7:00 | Gaurav Mehta
Show More
|