Speakers
John Burgess
Synopsis
In the context of cyber security, artificial intelligence is a double-edged sword. Even as cyber professionals benefit from the automation of routine tasks and the accelerated problem solving that AI offers, can they be sure that AI does not also introduce a whole new set of cyber security and governance problems? In this session, John Burgess explores recent initiatives to formalise methods for ensuring the responsible, transparent, and accountable use of AI within organisations. Drawing on his own experience at EdTech Jolt Systems, John offers practical suggestions for how these methods can be applied within the cyber security industry.
This presentation will be relevant to cyber security professionals, risk & compliance managers, C-level executives, and anyone interested in obtaining the benefits of AI without exposing their organisation to the attendant risks.
AI offers many benefits to IT and cyber professionals. Already we have seen exciting examples of the application of AI across a wide range of domains, including threat detection, predictive analytics, automation of routine tasks, enhanced threat hunting, improved incident response, data protection and privacy, and scalability of operations.
Yet the unmanaged use of AI, including large language models like ChatGPT, also comes with risks such as:
- The unintentional leakage of confidential information about an organisation’s current security posture
- The possibility of both false positives and false negatives arising from biases implicit in AI algorithms, or the presence of “hallucinations”
- Potential manipulation of AI systems by attackers to deceive the model into making inaccurate detections or predictions
- Intentional “poisoning” of training data to skew system outputs
- Over-reliance on technology and the consequent under-valuing of human insight and intuition, and
- Potential legal and ethical issues, especially if AI is used for employee surveillance.
Late in 2023, the International Organization for Standardization (ISO) published a ground-breaking Standard that organisations can use to facilitate trusted AI. ISO/IEC 42001 enables organisations to balance the benefits and risks of AI, managing risk while encouraging the responsible use of AI to support innovation and improved productivity. This is achieved by implementing a range of controls in the design or use of AI systems. Because the Standard borrows heavily from management system frameworks such as ISO 9001 and ISO/IEC 27001, many cyber security professionals will already be comfortable with its approach.
In this presentation, we will explore the key principles and controls of ISO/IEC 42001 with specific examples of how the Standard can be applied by cyber teams.