Johann Rehberger

Johann has over eighteen years of experience in threat analysis, threat modeling, risk management, penetration testing, and red teaming. As part of his many years at Microsoft, Johann established an offensive security team in Azure Data and led the program as Principal Security Engineering Manager for years. He also built out a red team at Uber and currently works as the Director of Red team at Electronic Arts

He enjoys providing training and was an instructor for ethical hacking at the University of Washington. Johann contributed to the MITRE ATT&CK framework (Pass those Cookies!), published a book on how to build and manage a red team, enjoys hacking machine learning systems and holds a master’s in computer security from the University of Liverpool. For latest updates and information visit his blog at embracethered.com

2024 Talk

Talk Title: New Important Instructions: Attend this talk about Indirect Prompt Injections in the Wild

Talk Abstract:
Last year we talked about the basics of Prompt Injection with many real-world examples of exploits in popular LLM applications and chatbots. This year we go further, and show how Prompt Injection impacts all three aspects of the CIA security triad: Confidentiality, Integrity, and Availability.

The LLM applications we are breaking include: ChatGPT, Claude, M365 Copilot, Google AI Studio, Google Gemini, Google NotebookLM and others.

The talk will provide deep-dives on the following threats with entirely new exploit demos:

  • Misinformation, Scams and Phishing: Including advanced attacks such as conditional prompt injection payloads in M365 Copilot delivered via email, or having Google Gemini connect the user directly to a scammer, live, via a Google Meet link.

  • Automatic Tool Invocation and Function calling without human in the loop

  • Data Exfiltration and how I helped fix 12+ of vulnerabilities across the industry, including a mass-data-exfiltration demonstration in Google AI Studio, Github Copilot Chat, and many more

  • Attacks on LLM memory: Persistent attacks that remain active across chat conversations.

  • ASCII Smuggling: A tricky technique to create hidden prompt injection payloads, and create invisible text for data exfiltration.

  • Sandbox Isolation Bug in ChatGPT: A discussion of a bug I found, responsible disclosed and OpenAI fixed in ChatGPT, which allowed any public GPT to interact with and access/modify the files of your main ChatGPT and your Private GPTs.

Finally, we will also discuss how various companies fixed these vulnerabilities to get a better understanding on mitigation strategies, leaving you with a fresh perspective on securing LLMs against the evolving threat landscape of prompt injection.


2023 Talk

Talk Title: New Important Instructions: Attend this talk about Indirect Prompt Injections in the Wild

Talk Abstract:
AI and Chatbots are taking the world by storm at the moment. Its time to shine more light on attack research and highlight flaws that the current systems are exposing.

Sending untrusted data to your AI can have disastrous consequences depending on how the results are being processed and used. Exploits can lead to the scams, data exfiltration and even remote code execution.

In this talk we will cover the basics of a novel attack category called Indirect Prompt Injection and Plugin Request Forgery. We will discuss and showcase these attacks with many examples and exploits, including Bing Chat, ChatGPT and Anthropic Claude. We will also discuss how various companies fixed vulnerabilities that were reported to them.