In this episode, I’m joined by Doro Hinrichs and Kira Clark from Scott Logic and Peter Gostev, Head of AI at Moonpig. Together, we explore whether we can ever really trust and secure Generative AI (GenAI), while sharing stories from the front line about getting to grips with this rapidly evolving technology.
With its human-like, non-deterministic nature, GenAI frustrates traditional pass/fail approaches to software testing. We explore ways to tackle this and discuss Scott Logic’s Spy Logic project, which helps development teams investigate defensive measures against prompt injection attacks on Large Language Models (LLMs). A minimal sketch of the testing problem follows below.
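As a purely illustrative sketch (not taken from the Spy Logic codebase), the snippet below shows why exact-match assertions break down for non-deterministic LLM output, and one more tolerant alternative: asserting on properties of the response rather than its exact wording. The `call_llm` function, the canned responses, and the keywords are hypothetical placeholders.

```python
import random


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call.

    Real responses vary between runs; that non-determinism is simulated
    here with random.choice over equivalent phrasings.
    """
    return random.choice([
        "Refunds are available within 30 days of purchase.",
        "You can get your money back for up to 30 days after you buy.",
        "We offer a full refund if you ask within 30 days.",
    ])


def test_exact_match_is_brittle():
    response = call_llm("Summarise our refund policy in one sentence.")
    # Traditional pass/fail assertion: fails intermittently, because the
    # wording differs from run to run even when the answer is correct.
    assert response == "Refunds are available within 30 days of purchase."


def test_criteria_based_check():
    response = call_llm("Summarise our refund policy in one sentence.")
    # A looser check: assert on key facts and rough shape of the answer.
    # The keyword and length limit here are illustrative, not prescriptive.
    assert "30 days" in response.lower()
    assert len(response.split()) < 40  # stays roughly one sentence long
```

Run repeatedly, the first test passes only sometimes, while the second passes consistently; this is the kind of shift away from brittle pass/fail checks that the episode explores.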
Looking to the future, we ask whether risk mitigation measures will ever be effective – and what impact this will have on product and service design – before offering pragmatic advice on what organisations can do to navigate this terrain.
Links from this episode
- Prompt injection explained, with video, slides, and a transcript – Simon Willison’s Weblog
- Spy Logic – Doro Hinrichs and Heather Logan
- How the tables turned – My life with Spy Logic – Kira Clark
Subscribe to the podcast