SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals

In computer security, the CIA Triad represents the three security properties systems should have: confidentiality, integrity, and availability. Of the three, integrity has been the most elusive, and, in an AI-powered internet-of-things world, it is the most important. This talk explores all the facets of integrity: data, processing, storage, and contextual. Web 1.0 was all about availability; Web 2.0 about privacy. If we are ever going to build the distributed, decentralized, intelligent web of tomorrow - and trust these systems to take complex actions on our behalf - we are going to need to solve integrity.
A year ago, AI-assisted cyber operations were mostly a troubleshooting story: threat actors troubleshooting tasks faster. That's no longer the picture.
Topics to Include:
"Predictive AI Shrinks Brand Takedown Cycles: Weeks of Manual Triage to <7 Minutes, $12M ROI"
"AI Security Reality Check: Stop Chasing Shiny Threats, Do the Basics"
We all want to connect AI to everything and provide it with our data — as long as we can trust it. But how do we secure it? First, we need to understand what can go wrong: we need to identify and understand the threats.
- 12 AI-powered cyber threat challenges
- 6 hours across two days
- Live scoreboard + competition format
- Based on real-world adversary tactics
Participants will leave able to apply both AI and ML techniques for surfacing attacker activity in logs. They will learn about LLM limitations and how those limitations can be overcome with additional ML techniques that allow an analyst to analyze larger data sets.
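One way the LLM-plus-ML pairing described above can work (a minimal sketch under my own assumptions, not the workshop's actual material): use a cheap statistical pass to shrink a large log set down to the rare events that actually fit in an LLM's context window. The masking rules and rarity threshold below are illustrative.

```python
from collections import Counter
import re

def template(line: str) -> str:
    """Mask variable fields (IPs, hex, numbers) so similar events share a template."""
    line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", line)
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

def rare_lines(lines, max_count=1):
    """Keep lines whose masked template occurs at most max_count times --
    a crude rarity filter to run before handing survivors to an LLM for triage."""
    counts = Counter(template(l) for l in lines)
    return [l for l in lines if counts[template(l)] <= max_count]

logs = [
    "accepted login from 10.0.0.5",
    "accepted login from 10.0.0.6",
    "accepted login from 10.0.0.7",
    "new service installed: persist.exe",  # the rare event worth LLM attention
]
print(rare_lines(logs))  # only the unusual line survives the filter
```

Routine events collapse onto one template and are dropped; only the outlier is left for expensive LLM analysis, which is how an analyst can reach data sets far larger than any context window.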
AI is not replacing attackers; it is amplifying their capability, speed, and scale. This fully hands-on workshop walks participants through deploying a vulnerable application locally using Docker and executing real exploitation exercises in a controlled environment.
This workshop introduces participants hands-on to the security of AI systems, using the OWASP AI Exchange (owaspai.org) as a framework. Through hacking labs, attendees will gain insight into how modern AI architectures operate, how they can be attacked, and how to secure them in real-world deployments. The workshop addresses threats to LLMs and to conventional machine learning models, covering critical risks such as prompt injection, sensitive data leakage, model and data poisoning, supply chain threats, vector database vulnerabilities, excessive agent behavior, system prompt exposure, misinformation, and resource abuse.
Join us at SANS@Night for an evening of networking with AI/ML experts and industry leaders at the Continental Pool Lounge & Beer Garden. Exchange ideas, spark collaborations, and unwind with the brightest minds in AI.
Location: Continental Pool Lounge & Beer Garden
The rise of OpenClaw hints at the pent-up demand for a truly useful personal AI assistant. Despite all our efforts to create layered security boundaries and urge adherence to principles like Zero Trust, it seems all of that was thrown out the window as millions rushed to install OpenClaw and experience an AI personal assistant that actually does things for you.
Indirect prompt injection is not just another vulnerability to patch. It is a structural reality of how large language models operate. This session explores how the context window, or "cram hole," contributes to the success of prompt injection exploits and why that reality fundamentally reshapes how we must think about trust, control, and data boundaries in AI systems.
The industry is fixated on the model. Jailbreaking it, guarding it, aligning it. But the most consequential AI security vulnerabilities aren't in the AI. They reside in the orchestration layer: serialization boundaries, state management, credential stores, and trust boundaries between agents. Old bug classes, new topology.
INVITATION ONLY
AI Security Policy Forum, April 21, 12:30-4:30 PM
A closed, invite-only gathering of selected policy stakeholders and standardization leaders. Convened by the OWASP AI Exchange, in partnership with SANS Institute. The forum will take place on April 21 alongside the SANS AI Summit at a venue nearby.
Topics to Include: "MCP Under Attack: Securing the New Trusted Control Plane"
"AI Isn’t the Risk. We Are, and That’s Where Security Must Change."
"120 Days to AI-Driven QA/QC for Power Systems: Governing Accountability and Cyber Risk"
In this immersive, hands-on workshop, participants will use FinBot—an interactive, multi-agent Capture-the-Flag (CTF) platform—to attack and then defend a realistic agentic financial workflow:
Invoice Intake → Validation → Approval → Funds Transfer → Reconciliation
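The pipeline above can be pictured as a simple state machine (a hypothetical sketch; stage names are my own, not FinBot's actual API), with attacker-controlled invoice fields entering at intake and flowing through every later stage:

```python
# Illustrative stages for a FinBot-style agentic invoice workflow.
STAGES = ["intake", "validation", "approval", "funds_transfer", "reconciliation"]

def next_stage(current):
    """Advance the invoice one stage; return None once reconciled."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

# Fields the submitter controls (vendor name, memo text) travel through every
# stage, where downstream agents may misread them as instructions -- the
# attack surface the CTF exercises.
```

The defensive half of the exercise is then about which transitions require out-of-band checks before an agent is allowed to trigger them.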
Modern cyber defenders are inundated with vast volumes of raw threat reports, advisories, technical analyses, incident summaries, and narrative threat write-ups, which are rich in context but unstructured and difficult to operationalize. In this hands-on workshop, participants will learn how to build an AI-augmented threat intelligence platform on a popular data lakehouse, the free edition of Databricks, transforming unstructured reports into structured, actionable intelligence and then applying Generative AI features and analytics to extract high-value insights at scale.
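The "unstructured to structured" step can be sketched in a few lines (a stdlib illustration of the general idea, not the workshop's Databricks pipeline; the regex patterns are deliberately crude assumptions):

```python
import re

# Minimal IOC patterns; real pipelines use far stricter rules and validation.
IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    # This domain pattern also matches dotted IPs; deduplicate downstream.
    "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", re.IGNORECASE),
}

def extract_iocs(report: str) -> dict:
    """Pull structured indicators out of a free-text threat report."""
    return {kind: sorted(set(p.findall(report))) for kind, p in IOC_PATTERNS.items()}

report = (
    "The actor staged payloads on evil-cdn.example and beaconed to 203.0.113.9. "
    "Dropped file hash: "
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855."
)
print(extract_iocs(report))
```

Once indicators are in structured columns, the analytics and GenAI enrichment the workshop describes can run over them at scale instead of over raw prose.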
Please join us for an In-Person Networking Breakfast. Share stories, make connections, and learn how to make the most of your week in Arlington, VA. Complimentary coffee and breakfast items will be provided. Hope to see you there!