I recently had the pleasure of attending the 2024 Public Sector Innovation (PSI) Conference, held in the incredible, historic “Great Room” at the Royal Society of Arts (RSA) House in London. An amazing venue, steeped in history and adorned with a breathtaking series of six paintings, ‘The Progress of Human Knowledge and Culture’, by artist James Barry, which took 23 years to complete!
Now in its 6th year, the 2024 PSI conference looked at the potential of AI to make our public services smarter, more efficient and more productive. A focus this year was on real-world ‘lighthouse’ projects already leading the way with innovative approaches to embracing AI.
During the day, we considered the opportunities such technologies present for how our public services are currently organised and run. Could departmental structures and traditional organisational silos be modernised to enable a public sector Large Language Model (LLM), as well as cross-departmental data lakes? If achieved, the benefits could free up resources and staff to deliver more valuable frontline work, as well as reduce costs.
The day was chaired by Professor Mark Thompson, Professor in Digital Economy, University of Exeter Business School.
We started the morning with an introduction by Sabby Gill, CEO of Dext and Digital Leaders chair. Sabby summarised some findings from the recently published AI Attitudes Survey 2024, stating that with regards to AI, common themes reported by leadership included:
- A lack of confidence in AI Return on Investment (ROI)
- Data and privacy concerns
- Being unprepared for additional AI regulations
The survey also highlighted key areas that need to be addressed to ensure AI can thrive, including:
- Data infrastructure and data governance
- Talent acquisition and training in AI skills, including partnerships with AI consultancies
- Ethical implementation – testing and ensuring fairness throughout
- Informed leadership (not just technology leadership)
Next up was a panel on “Is AI innovation?” that included Number Ten’s Eoin Mulgrew, Informed’s David Lawton, NAO’s Yvonne Gallagher, and DWP’s Shruti Kohli. It was clear from the discussion that although AI can deliver innovation, in essence “it’s just another technology”. The similarities between the early days of “digital” in the late 90s and where we are now with AI resonated with me – i.e. tactical use-cases that are just scratching the surface of what could be achieved in the future, with little strategic, long-lasting innovation (yet!).
However, I was particularly interested in the real-world examples Eoin shared from No10 Downing Street, where the AI innovation incubator is trialling an “AI ministerial red box”, and Shruti’s example from DWP of developing a proof of concept through their GenAI Centre of Excellence, using augmented development tools to enhance internal services for frontline colleagues.
This was followed by “lighthouse” presentations from Swindon Council’s Sara Pena, Norfolk Council’s Geoff Connell, Natural England’s Alex Kilcoyne, FCDO’s David Gerouille-Farrell, MoJ’s Shelina Hargrove, and CDDO’s Clive Kelman. A speed-dating style worked very well, with each presenter given 3 minutes to distil and present practical AI examples from across their organisation. It was refreshing to see a variety of use-cases for AI, including the use of Copilot to free up colleagues’ time, process automation (“hyper automation”) for internal IT & HR helpdesk tickets, using AI deep learning and object detection to highlight flood areas on aerial photos, using heavily sanitised data within LLMs to remove risk, and the use of LLMs for internal guidance searches to increase colleague productivity. The session was wrapped up by Clive, who explained how his Cabinet Office AI team were providing a central hub for the adoption of AI across government – offering frameworks such as AI risk management and an AI maturity index to give practical support to Government departments embarking on their AI journey.
After morning coffee, it was straight into the second panel, ‘AI and ethics in the public sector’, with FDM’s Sarah Wyer, TPXImpact’s Imeh Akpan, and AWS’ Himanshu Sahni. A key theme from an ethics perspective was that AI is a “mirror of society”, and therefore removing all bias when training AI models could be counter-productive, as the result would no longer fully reflect human nature. It was clear that when developing AI models we need diversity at the heart of everything we do, ensuring a diverse mix of backgrounds among those creating and deploying AI in the real world. Explainability, together with an assessment of the usefulness to humans of every AI implementation, was seen as a practical framework for addressing some of these issues.
I was particularly interested in the next “fireside” chat between Malcolm Harbour CBE (Connected Places Catapult) and Rebecca Rees (Solicitor, Trowers & Hamlins), addressing innovative approaches to procuring AI. New procurement rules for AI have recently been developed (see the recent Catapult report, The Art of the Possible), which include “Competitive Flexible Procedures” – effectively a toolbox that government departments can use for procuring AI services (although these will not be mandated, and so do not stop Government departments developing their own procurement procedures). It was encouraging to hear that the new AI procurement framework will be “small business friendly”, and that there will be a focus on pre-market engagement, with a need for early, up-front partnerships between supplier and customer. I also really liked the idea of encouraging the use of hackathons as part of the pre-market engagement, to quickly prove the value a supplier can bring and effectively kick-start the shortlisting process.
After lunch and some networking, we kicked off with a thought-provoking keynote from Ollie Ilott, Director of the AI Safety Institute, who provided us with an overview of the Institute’s work. The message here was that AI carries unknown risks, because the technology’s immaturity means its capabilities are not yet fully understood. To mitigate these risks, Ollie talked us through the Institute’s approach to testing for misuse, societal impacts, autonomous systems and safeguards, implemented through automated benchmarks, red teaming, human uplift studies, and agents and tooling. Although the AI Safety Institute is well funded, with £100m over 2 years, a commitment to 2030, and a team of 24 AI & ML researchers and engineers, upskilling people in AI remains an ongoing challenge that needs to be addressed.
The third panel session covered the topic of “AI for good and bad” and included The Army’s Brigadier Stefan Crossfield, NCSC’s Ollie Whitehouse, Zuhlke’s Dan Klein, and Actionable Futurist’s Andrew Gill. There was a robust discussion about how regulators will need to be engaged and the evidence they will require from public sector departments, with “transparency & explainability” being critical evidence for any AI project in the public sector. Another topic that came up several times was legacy technology, and how it can become the “drag anchor” of any AI implementation if it is not considered and mitigated when planning an AI project.
The final panel, ‘There’s an AI for that’, focussed on the healthcare sector, with quick-fire presentations from Beam’s Seb Barker, Skin Analytics’ Jack Greenhalgh, NHS Resolution’s Niamh McKenna, and Curistica’s Dr Keith Grimes. Again, a really good set of disparate use-cases and AI projects was covered, spanning diagnostics, assessment, and accessibility. My personal favourite was from Jack, who discussed how the Skin Analytics product is using AI to identify skin cancer symptoms at an early stage. I can really see huge potential in the healthcare sector to leverage AI for good.
Last but not least, Harriet Harman MP delivered an excellent closing speech. She highlighted that AI implementation is a challenge for parliaments and can impact elections. There are a number of considerations for AI, including ensuring equality across the UK (embedding ‘levelling up’ of AI across all regions), building an AI workforce strategy to ensure we have the best talent available (again, not London-centric – this must be a fair regional spread), and removing the “bro” culture within technology so that diversity is at the heart of what we do. Harriet discussed how current parliamentary processes are not aligned to the pace of change AI demands. Hence, much like the laws that were quickly changed during COVID, a similar change in law should be made for AI, to ensure changes can be pushed through parliament at pace; Harriet suggested granting special statutory powers to the Science, Innovation and Technology and Business and Trade Select Committees in order to fast-track the state’s regulatory response.
So, all in all, a really interesting and thought-provoking day at a beautiful, historic London venue. Let’s see how AI has evolved in 12 months’ time – with the pace of change we are experiencing right now, I expect things may look very different!