Speakers
Synopsis
With the rise of Artificial Intelligence (AI) and conversational agents powered by Large Language Models (LLMs), organisations are unlocking new capabilities but also exposing themselves to unprecedented security risks. "AI assistants" are being deployed at a rapid pace, and their adoption has risen sharply in the past few years, especially since OpenAI released ChatGPT based on GPT-3.5 in December 2022. It is important to understand that LLMs, despite their powerful capabilities, are susceptible to manipulation, leaking sensitive data and introducing harmful biases. They are also vulnerable to being exploited through attacks such as prompt injection, data privacy violations, insecure outputs, model inversion and model theft, to name a few. These risks are new and unique compared to the traditional software security challenges that security practitioners have focussed on all along.
Never before has it been so important for developers, data scientists and security professionals to collaborate on strategies to mitigate these risks and ensure the safe, secure deployment of LLM-based systems. Such risk mitigation strategies must emphasise data protection, secure model training, robust input validation and output monitoring. They must also factor in transparency and human oversight in AI decision-making to address more complex issues such as overreliance on AI-generated outputs and the growing concerns around resource consumption.
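As a rough illustration of what input validation and output monitoring might look like in practice, the sketch below wraps a generic LLM call with a deny-list check on incoming prompts and a redaction pass on responses. All names (validate_input, monitor_output, handle_request) and patterns are illustrative assumptions rather than part of any specific framework; a production deployment would lean on maintained guardrail tooling instead of hand-rolled regular expressions.

import re

# Illustrative deny-list of phrases commonly seen in prompt-injection attempts.
# A real deployment would use a maintained classifier or guardrail service.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"reveal your (system|hidden) prompt",
]

# Simple patterns for sensitive data that should not leave the application.
SENSITIVE_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "credit_card": r"\b(?:\d[ -]?){13,16}\b",
}


def validate_input(user_prompt: str) -> str:
    """Reject prompts that match known injection phrases before they reach the model."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_prompt, flags=re.IGNORECASE):
            raise ValueError("Prompt rejected: possible prompt-injection attempt")
    return user_prompt


def monitor_output(model_response: str) -> str:
    """Redact sensitive data from the model's response before returning it to the user."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        model_response = re.sub(pattern, f"[REDACTED {label}]", model_response)
    return model_response


def handle_request(user_prompt: str, call_model) -> str:
    """Wrap an arbitrary LLM call (call_model is a placeholder) with both guardrails."""
    safe_prompt = validate_input(user_prompt)
    raw_response = call_model(safe_prompt)
    return monitor_output(raw_response)


if __name__ == "__main__":
    # Stand-in for a real LLM client; echoes a canned reply containing an email address.
    fake_model = lambda prompt: "Sure! Contact alice@example.com for the report."
    print(handle_request("Summarise last quarter's results.", fake_model))

The design point is that validation and monitoring sit outside the model itself, so they apply regardless of which LLM or vendor is behind call_model.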
Using reference frameworks such as the OWASP Top 10 for LLM Applications, organisations can develop these key mitigation strategies and ensure responsible use of AI. More importantly, they can make better-informed decisions when building and deploying LLM-based applications, given the scale at which LLMs are being adopted today.