How to Seduce a Model (a Large Language Model)

Wednesday
 
19
 
March
, 
1:50 pm
 - 
2:30 pm

Speakers

Harrison Smith

Harrison Smith

Cyber Security Consultant
Phronesis Security

Synopsis

In recent years the motto for many companies has been to AI everything... The new question seems to be "Why don't we just add an AI Chatbot to that piece of functionality".

The more we plug in AI to our solutions, the more we increase our attack surface. It seems that large language model (LLM) security is an afterthought for companies, while a growing focus for adversaries.

In this talk, I'll unpack the latest attack techniques used against large language models (LLMs). With a proven track record in LLM penetration testing, I'll be drawing from cutting-edge research and real-world testing. We’ll explore adversarial prompts, prompt injection, model evasion, data poisoning, and more.

We'll examine emerging trends, dissect vulnerabilities exposed by recent attacks, and provide practical strategies for securing these AI systems. Whether you're seeking quick wins or long-term mitigation strategies, this session will equip you with actionable insights to defend the LLM landscape.

This session will help you answer:

  • What are LLM attacks?
  • What security risks do my LLM integrations raise... What's the worst that could happen?
  • But I just implemented a new LLM chatbot... How can I secure it?

Acknowledgement of Country

We acknowledge the traditional owners and custodians of country throughout Australia and acknowledge their continuing connection to land, waters and community. We pay our respects to the people, the cultures and the elders past, present and emerging.