David vonThenen
Confuse, Obfuscate, Disrupt: Using Adversarial Techniques for Better AI and True Anonymity
#1 · about 1 minute
The importance of explainable AI and data quality
AI models are only as good as their training data, which is often plagued by bias, noise, and inaccuracies that explainable AI helps to uncover.
#2 · about 3 minutes
Identifying common data inconsistencies in AI models
Models can be compromised by issues like annotation errors, data imbalance, and adversarial samples; interpretability tools like Captum help surface where these flaws influence predictions.
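As a rough illustration of the data-imbalance point, a label-frequency check like the following can flag underrepresented classes before training. This is a minimal sketch; the label list and the 10% threshold are hypothetical, not taken from the talk.

```python
from collections import Counter

# Hypothetical labels; in practice, pull these from your dataset.
labels = ["positive", "positive", "negative", "positive", "neutral"]

counts = Counter(labels)
total = sum(counts.values())

# Flag any class that falls below an (arbitrary) 10% share of the data.
for label, count in counts.items():
    share = count / total
    flag = "  <-- underrepresented" if share < 0.10 else ""
    print(f"{label}: {count} ({share:.1%}){flag}")
```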
#3 · about 2 minutes
The dual purpose of adversarial AI attacks
Intentionally introducing adversarial inputs cuts both ways: it can probe a model's boundaries to test and improve it, or obfuscate data to protect personal privacy.
#4 · about 3 minutes
How to confuse NLP models with creative inputs
Natural language processing models can be disrupted using techniques like encoding, code-switching, misspellings, and even metaphors to prevent accurate interpretation.
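A minimal sketch of one such technique: character-level obfuscation via Unicode homoglyphs, which leaves text readable to humans while changing the token stream a model sees. The specific character map and substitution rate below are illustrative choices, not taken from the talk.

```python
import random

# Map a few Latin letters to visually similar Cyrillic homoglyphs
# (an illustrative mapping, not the talk's exact technique).
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}

def obfuscate(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Swap a fraction of substitutable characters for homoglyphs."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in HOMOGLYPHS and rng.random() < rate:
            out.append(HOMOGLYPHS[ch.lower()])
        else:
            out.append(ch)
    return "".join(out)

print(obfuscate("the cat sat on the mat"))
# Prints what looks like the same sentence, but several characters
# are now different code points, disrupting tokenization.
```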
#5 · about 4 minutes
Visualizing model predictions with the Captum library
The Captum library for PyTorch helps visualize which parts of an input, like words in a sentence or pixels in an image, contribute most to a model's final prediction.
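A hedged sketch of how Captum's IntegratedGradients can attribute a prediction back to individual input pixels. The toy model and random input below stand in for a real classifier and image; only the Captum calls themselves reflect the library's actual API.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy classifier standing in for a real vision model (architecture
# is illustrative, not from the talk).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)

# Attribute the class-0 logit back to individual input pixels.
ig = IntegratedGradients(model)
attributions = ig.attribute(image, target=0)

# Pixels with large absolute attributions drive the prediction most.
print(attributions.shape, attributions.abs().max())
```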
#6 · about 6 minutes
Manipulating model outputs with subtle input changes
Simple misspellings can flip a sentiment analysis result from positive to negative, and altering a single pixel can cause an image classifier to misidentify a cat as a dog.
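A sketch of the mechanics behind a single-pixel perturbation. A real attack would search for the pixel position and color that flips the prediction (e.g., with differential evolution), whereas this example hard-codes one candidate; `model` is assumed to be a trained classifier.

```python
import torch

# Placeholder for a normalized input image, e.g. a photo of a cat.
image = torch.rand(1, 3, 32, 32)

perturbed = image.clone()
# A single-pixel attack optimizes the (x, y) location and RGB value;
# here we simply plant one candidate pixel to show the mechanics.
perturbed[0, :, 16, 16] = torch.tensor([1.0, 0.0, 0.0])

# With a trained classifier `model`, compare predictions:
#   model(image).argmax()  vs.  model(perturbed).argmax()
print((perturbed - image).abs().sum())  # the total change is tiny
```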
#7 · about 2 minutes
Using an adversarial pattern t-shirt to evade detection
A t-shirt printed with a specific adversarial pattern can disrupt a real-time person detection model, effectively making the wearer invisible to the AI system.
#8 · about 2 minutes
Techniques for defending models against adversarial attacks
Defenses against NLP attacks include input normalization and grammar checks, while vision attacks can be mitigated with image blurring, bit-depth reduction, or adversarial training on FGSM-generated examples.
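One way bit-depth reduction (often called feature squeezing) can be implemented: quantizing pixel values so that fine-grained adversarial perturbations are rounded away before inference. The 4-bit setting is an arbitrary choice for illustration.

```python
import torch

def reduce_bit_depth(image: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Quantize pixel values in [0, 1] down to 2**bits levels."""
    levels = 2 ** bits - 1
    return torch.round(image * levels) / levels

image = torch.rand(1, 3, 32, 32)
squeezed = reduce_bit_depth(image, bits=4)

# Small adversarial nudges below the quantization step vanish.
print(torch.unique(squeezed).numel(), "distinct values remain")
```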
#9 · about 2 minutes
Defeating a single-pixel attack with image blurring
Applying a simple Gaussian blur to an image containing an adversarial pixel smooths out the manipulation, allowing the model to correctly classify the image.
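A sketch of that blur defense using torchvision's GaussianBlur transform; the kernel size, sigma, and the hard-coded adversarial pixel are illustrative values, not the talk's exact settings.

```python
import torch
from torchvision.transforms import GaussianBlur

# Image with a hypothetical single-pixel manipulation at (16, 16).
image = torch.rand(1, 3, 32, 32)
image[0, :, 16, 16] = torch.tensor([1.0, 0.0, 0.0])

# A mild blur averages each pixel with its neighbors, diluting the
# adversarial pixel before the classifier sees it.
blur = GaussianBlur(kernel_size=3, sigma=1.0)
cleaned = blur(image)

print(image[0, :, 16, 16], "->", cleaned[0, :, 16, 16])
```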