Keno Dreßel
Prompt Injection, Poisoning & More: The Dark Side of LLMs
#1 · about 5 minutes
Understanding and mitigating prompt injection attacks
Prompt injection manipulates LLM outputs through direct or indirect methods, requiring mitigations like restricting model capabilities and applying guardrails.
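A minimal sketch of the guardrail idea (the patterns and function names are illustrative assumptions, not taken from the talk): screen untrusted input for obvious injection markers and keep trusted instructions in a separate channel from user-controlled text.

```python
import re

# Deliberately simple, hypothetical guardrail: flag user input that looks like
# an attempt to override the system prompt. Real guardrail layers combine
# pattern checks with classifier models and strict limits on model capabilities.
INJECTION_PATTERNS = [
    r"ignore (all|the) (previous|above) instructions",
    r"you are now",
    r"reveal (your|the) system prompt",
]

def looks_safe(user_text: str) -> bool:
    """Return True if the input matches none of the known injection patterns."""
    lowered = user_text.lower()
    return not any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    """Keep trusted instructions and untrusted input in separate message roles."""
    if not looks_safe(user_text):
        raise ValueError("Possible prompt injection detected; request blocked.")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
```

Pattern matching alone is easy to bypass, which is why the other mitigation named above, restricting what the model is allowed to do, matters at least as much as any input filter.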
#2 · about 6 minutes
Protecting against data and model poisoning risks
Malicious or biased training data can poison a model's worldview, necessitating careful data screening and keeping models up-to-date.
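As a rough sketch of the screening idea (the source allowlist and marker checks are assumptions for illustration, not the speaker's pipeline): vet candidate training records for untrusted provenance and obvious poisoning markers before they enter a fine-tuning set.

```python
# Illustrative screening pass over candidate fine-tuning records.
# Real pipelines add deduplication, provenance tracking, and human review.
TRUSTED_SOURCES = {"internal_docs", "curated_dataset"}
SUSPICIOUS_MARKERS = ("ignore previous instructions", "always recommend", "visit http://")

def screen_record(record: dict) -> bool:
    """Accept a record only if its source is trusted and its text looks clean."""
    if record.get("source") not in TRUSTED_SOURCES:
        return False
    text = record.get("text", "").lower()
    return not any(marker in text for marker in SUSPICIOUS_MARKERS)

def build_training_set(candidates: list[dict]) -> list[dict]:
    """Filter candidates and report how many records were rejected."""
    accepted = [r for r in candidates if screen_record(r)]
    print(f"Accepted {len(accepted)} of {len(candidates)} records.")
    return accepted
```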
#3 · about 6 minutes
Securing downstream systems from insecure model outputs
LLM outputs can exploit downstream systems like databases or frontends, so they must be treated as untrusted user input and sanitized accordingly.
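A small sketch of treating model output as untrusted input (the table and helper names are hypothetical): escape it before rendering in a frontend, and bind it as a parameter instead of splicing it into SQL.

```python
import html
import sqlite3

def render_answer(llm_output: str) -> str:
    """Escape model output before embedding it in HTML, so injected markup
    or script tags are shown as text rather than executed in the browser."""
    return f"<p>{html.escape(llm_output)}</p>"

def store_answer(conn: sqlite3.Connection, llm_output: str) -> None:
    """Pass model output as a bound parameter so it cannot alter the SQL statement."""
    conn.execute("INSERT INTO answers (text) VALUES (?)", (llm_output,))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (text TEXT)")
store_answer(conn, "Robert'); DROP TABLE answers;--")  # stored literally, not executed
```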
#4 · about 4 minutes
Preventing sensitive information disclosure via LLMs
Sensitive data used for training can be extracted from models, highlighting the need to redact or anonymize information before it reaches the LLM.
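One way to approximate the redaction step (the regexes here are simplistic assumptions; production systems use dedicated PII detection): mask obvious identifiers before the text ever leaves your system.

```python
import re

# Simplistic redaction: mask e-mail addresses and phone-like numbers before
# the text is sent to the model or added to a training corpus.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s/()-]{7,}\d")

def redact(text: str) -> str:
    """Replace detected identifiers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

prompt = redact("Customer jane.doe@example.com called from +43 660 1234567 about her invoice.")
# -> "Customer [EMAIL] called from [PHONE] about her invoice."
```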
#5 · about 1 minute
Why comprehensive security is non-negotiable for LLMs
Just like in traditional application security, achieving 99% security is still a failing grade because attackers will find and exploit any existing vulnerability.
Matching moments
04:49 MIN
The current state of LLM security and the need for awareness
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
01:28 MIN
Understanding the security risk of prompt injection
The shadows that follow the AI generative models
01:37 MIN
Understanding the security risks of AI integrations
Three years of putting LLMs into Software - Lessons learned
01:43 MIN
Understanding and defending against prompt injection attacks
DevOps for AI: running LLMs in production with Kubernetes and KubeFlow
04:10 MIN
Understanding the complexity of prompt injection attacks
Hacking AI - how attackers impose their will on AI
01:48 MIN
Strategies for mitigating prompt injection vulnerabilities
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
03:43 MIN
AI privacy concerns and prompt engineering
Coffee with Developers - Cassidy Williams
03:19 MIN
The overlooked security risks of AI and LLMs
WeAreDevelopers LIVE - Chrome for Sale? Comet - the upcoming Perplexity browser. Stealing and leaking
Related Videos
Manipulating The Machine: Prompt Injections And Counter Measures
Georg Dresler
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
Sebastian Schrittwieser
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
Mackenzie Jackson
Three years of putting LLMs into Software - Lessons learned
Simon A.T. Jiménez
Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Liran Tal
Hacking AI - how attackers impose their will on AI
Mirko Ross
You are not my model anymore - understanding LLM model behavior
Andreas Erben
From learning to earning
Jobs that call for the skills explored in this talk.


Cinemo GmbH · Remote · €77-101K · Senior · Linux, Elasticsearch, Machine Learning, +1
lucesem · AI/LLM Developer - Automation & AI Solutions · Klagenfurt, Austria · €40K
Apple · Aachen, Germany · Confluence, Machine Learning