Keno Dreßel
Prompt Injection, Poisoning & More: The Dark Side of LLMs
#1about 5 minutes
Understanding and mitigating prompt injection attacks
Prompt injection manipulates LLM outputs through direct or indirect methods, requiring mitigations like restricting model capabilities and applying guardrails.
#2about 6 minutes
Protecting against data and model poisoning risks
Malicious or biased training data can poison a model's worldview, necessitating careful data screening and keeping models up-to-date.
#3about 6 minutes
Securing downstream systems from insecure model outputs
LLM outputs can exploit downstream systems like databases or frontends, so they must be treated as untrusted user input and sanitized accordingly.
#4about 4 minutes
Preventing sensitive information disclosure via LLMs
Sensitive data used for training can be extracted from models, highlighting the need to redact or anonymize information before it reaches the LLM.
#5about 1 minute
Why comprehensive security is non-negotiable for LLMs
Just like in traditional application security, achieving 99% security is still a failing grade because attackers will find and exploit any existing vulnerability.
Related jobs
Jobs that call for the skills explored in this talk.
Matching moments
22:43 MIN
The current state of LLM security and the need for awareness
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
14:26 MIN
Understanding the security risk of prompt injection
The shadows that follow the AI generative models
24:53 MIN
Understanding the security risks of AI integrations
Three years of putting LLMs into Software - Lessons learned
13:31 MIN
Understanding and defending against prompt injection attacks
DevOps for AI: running LLMs in production with Kubernetes and KubeFlow
19:14 MIN
Understanding the complexity of prompt injection attacks
Hacking AI - how attackers impose their will on AI
15:02 MIN
Strategies for mitigating prompt injection vulnerabilities
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
25:33 MIN
AI privacy concerns and prompt engineering
Coffee with Developers - Cassidy Williams -
00:43 MIN
The overlooked security risks of AI and LLMs
WeAreDevelopers LIVE - Chrome for Sale? Comet - the upcoming perplexity browser Stealing and leaking
Featured Partners
Related Videos
Manipulating The Machine: Prompt Injections And Counter Measures
Georg Dresler
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
Sebastian Schrittwieser
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
Mackenzie Jackson
Three years of putting LLMs into Software - Lessons learned
Simon A.T. Jiménez
Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Liran Tal
Hacking AI - how attackers impose their will on AI
Mirko Ross
Inside the Mind of an LLM
Emanuele Fabbiani
Related Articles
View all articles



From learning to earning
Jobs that call for the skills explored in this talk.








