Balázs Kiss
A hundred ways to wreck your AI - the (in)security of machine learning systems
#1 · about 4 minutes
The security risks of AI-generated code
AI systems can generate code quickly but may introduce vulnerabilities or rely on outdated practices, highlighting that all AI systems are fundamentally code and can be exploited.
#2 · about 5 minutes
Fundamental AI vulnerabilities and malicious misuse
AI systems are prone to classic failures like overfitting and can be maliciously manipulated through deepfakes, chatbot poisoning, and adversarial patterns.
#3 · about 1 minute
Exploring threat modeling frameworks for AI security
Several organizations, including OWASP, NIST, and MITRE, publish threat models and standards that help developers understand and mitigate AI security risks.
#4 · about 6 minutes
Deconstructing AI attacks from evasion to model stealing
Attack trees categorize novel threats like evasion with adversarial samples, data poisoning to create backdoors, and model stealing to replicate proprietary systems.
#5 · about 2 minutes
Demonstrating an adversarial attack on digit recognition
A live demonstration shows how pre-generated adversarial samples can trick a digit recognition model into misclassifying numbers as zero.
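The core idea behind such adversarial samples can be illustrated without any ML framework: nudge each input feature slightly in the direction that increases the model's loss (the gradient-sign, or FGSM-style, approach). A minimal sketch on a hypothetical linear classifier, with made-up toy weights and inputs:

```python
# Toy gradient-sign attack on a hypothetical linear classifier.
# For a linear score w . x, the gradient with respect to x is just w,
# so the attack shifts each feature by eps against the sign of w_i.
def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w = [0.9, -0.4, 0.7]   # toy classifier weights (assumed for illustration)
x = [0.5, 0.5, 0.5]    # clean input: score is positive -> "correct" class
eps = 0.6              # perturbation budget

# Perturb each feature to push the score down.
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(w, x))      # positive: classified correctly
print(score(w, x_adv))  # negative: small perturbation flips the decision
```

Real attacks on digit-recognition models work the same way, except the gradient is computed through the full network and the perturbation is kept small enough to be invisible to a human.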
#6 · about 5 minutes
Analyzing supply chain and framework security risks
Security risks extend beyond the model to the supply chain, including backdoors in pre-trained models, insecure serialization formats like Pickle, and vulnerabilities in ML frameworks.
#7 · about 1 minute
Choosing secure alternatives to the Pickle model format
The HDF5 format is recommended as a safer, industry-standard alternative to Python's insecure Pickle format for serializing machine learning models.
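The reason Pickle is dangerous is that unpickling can execute arbitrary code: any object can define `__reduce__` to return a callable that runs on load. A minimal sketch with a deliberately harmless payload (a real attacker would call something like `os.system` instead):

```python
import pickle

# Why Pickle is unsafe for model files: unpickling runs code.
# The payload here is a harmless eval; an attacker could substitute
# any callable, e.g. os.system, and it would run on load.
class MaliciousModel:
    def __reduce__(self):
        # pickle calls eval("6 * 7") when this object is deserialized
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)  # payload executes before any type check
print(result)  # 42
```

Formats like HDF5 avoid this class of attack because they store only weights and structured data, not executable object state.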