The blueprint is based on 50 recommendations set out by Matt Clifford of the Advanced Research and Invention Agency (ARIA) in his AI opportunities action plan, commissioned by the Department for Science, Innovation and Technology (DSIT). The plan covers AI enablers across people, process, technology and data – which will all be addressed over the coming years in an effort to strengthen the UK’s global leadership in AI, protect its economic security and drive benefits for UK citizens.


 
Prime Minister Sir Keir Starmer spoke of AI’s “potential to transform the lives of working people” across sectors such as healthcare, education and construction, also highlighting the need for the UK to “move fast and take action to win the global race.”


 
It’s certainly encouraging to see the UK government commit to AI in such a tangible way – particularly considering that 2024 was the year when the adoption and exploitation curve for AI technologies steepened dramatically, with Generative AI at the forefront. This is an industry that the government estimates could be worth up to £47 billion a year to the UK over a decade. But that value will only be realised if both the public and private sectors are truly invested in the technology’s potential – and aligned in their efforts to exploit it.


 

AI’s potential impact in the public sector

 

There are many ways in which AI could be used to meet the UK government’s goal of delivering public services more efficiently by supporting public sector workers. As Sir Keir Starmer noted in his announcement, examples include: 

 

  • Delivering better, faster and smarter healthcare through the NHS 

 

  • Speeding up planning consultations

 

  • Driving down admin time for teachers

 

  • Using AI with camera feeds to spot potholes and help improve roads

 
Currently, use cases are limited to relatively narrow productivity improvements. Looking ahead, a straightforward area to improve is the point of service – i.e. how citizens consume public services – which can be augmented through conversational AI and natural language processing (NLP) capabilities.
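To make the point-of-service idea concrete, here is a minimal sketch of intent routing – the pattern behind most conversational front doors to public services. All names and keywords are hypothetical, and a real deployment would use a trained NLP model rather than keyword matching; this just illustrates how a citizen’s free-text query might be mapped to the right service or handed off to a person.

```python
# Hypothetical intents and trigger keywords - illustrative only.
INTENTS = {
    "renew_passport": ["passport", "renew"],
    "council_tax": ["council tax", "band"],
    "gp_appointment": ["gp appointment", "doctor"],
}

def route(query: str) -> str:
    """Return the best-matching intent, or a fallback so a human handles
    anything the system cannot confidently classify."""
    q = query.lower()
    scores = {intent: sum(kw in q for kw in kws)
              for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "handoff_to_human"

print(route("How do I renew my passport?"))   # renew_passport
print(route("I want to complain in person"))  # handoff_to_human
```

The fallback branch matters as much as the happy path: augmenting a service with conversational AI should never leave citizens with no route to a human.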


 
Coming back to the NHS, AI is already enhancing diagnostic capabilities in areas such as cancer and diabetes diagnosis. Indeed, this year the NHS is set to commence a world-first trial of an AI tool that can identify patients at risk of type 2 diabetes more than a decade before they develop the condition.


 
A more general trend we’re seeing is the growth of ‘Agentic AI’, which is increasing levels of automation by putting AI agents to work alongside public sector workers to complete tasks and deliver insights. This will undoubtedly deliver efficiency and productivity gains as adoption continues to accelerate, but it comes at the cost of surrendering human agency over certain decisions – and therefore requires justifiable trust in the system.
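One common way to avoid surrendering agency wholesale is a human-in-the-loop gate: the agent proposes, but consequential or hard-to-reverse actions are escalated to a person. The sketch below is an assumption about how such a gate might look, not a description of any particular product; the class, field names and threshold are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    """A proposed action from an AI agent, with its self-reported confidence."""
    description: str
    confidence: float  # 0.0 to 1.0
    reversible: bool   # can the action be undone if it proves wrong?

def requires_human_approval(action: AgentAction, threshold: float = 0.9) -> bool:
    """Escalate to a human reviewer unless the action is both
    high-confidence and reversible."""
    return action.confidence < threshold or not action.reversible

# A routine, reversible task can proceed automatically...
routine = AgentAction("Draft a reply to a records request", 0.95, reversible=True)
# ...while a consequential, irreversible decision is escalated to a person.
consequential = AgentAction("Reject a benefits application", 0.95, reversible=False)

print(requires_human_approval(routine))        # False: agent proceeds
print(requires_human_approval(consequential))  # True: human decides
```

The design choice worth noting is that reversibility, not just confidence, drives escalation: a model can be confidently wrong, so irreversible decisions stay with people regardless of the score.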


 
Of course, the key objective is to get AI to a place where it’s delivering tangible and enduring value across government digital services. This will involve three key pillars: 

 

  • Monitoring the horizon, i.e. carefully observing the market while taking the inevitable ‘next big thing’ claims with a pinch of salt 

 

  • Focusing on goal and value-driven development approaches to ensure that AI is actually solving problems

 

  • Establishing justified trust by ensuring that the relevant governance processes are in place 

 
You can read more about these three pillars here.

 

Key considerations

 

Despite the potential on offer, government and industry alike must be aware of the pitfalls of deploying AI – particularly for public services that typically leverage and have access to large amounts of sensitive citizen data.


 
The adoption and exploitation of AI when leveraging sensitive data requires significant trust in the technology. The importance of this cannot be overstated. Users have to know how their data is being used and be confident that outcomes aren’t shaped by biases inherent in the system. What’s more, as citizen data is a prime target for nefarious actors such as cyber-criminals and hostile nation states, it must be kept secure at all times.


 
Similarly, human-AI teaming and interaction will evolve rapidly, impacting the future of work as well as human agency. This requires our vigilance. Gartner, for example, has predicted that by 2028, 25% of enterprise breaches will be traced back to AI agent abuse, by both external and malicious internal actors. With this in mind, the question becomes: how do we manage this new human-machine dynamic? And who takes responsibility for machine actions or decisions that result in unintended outcomes?


 
What’s more, the ongoing move from narrow AI to deliberative AI will require careful oversight supported by robust governance structures – both human policy and machine-enforced guardrails – alongside transparency. This is critical if the public is to be taken on this journey. Trust is hard earned yet easily lost, making through-life AI governance non-negotiable if citizens are to have full assurance that government’s AI tools are safeguarded against unintended outcomes.


 
A final consideration when exploiting this technology is that AI amplifies risk. As such, appropriate mitigations must be identified – starting with the question of whether AI is a suitable technology for the application at all.


 
This risk element relates particularly to data. For example, when using probabilistic systems to predict future outcomes based on personal data, how can we ensure the system remains benevolent and does not consign groups in society to unfair treatment – or, even worse, limit how they access critical services? This goes to the heart of personal freedoms and risks significantly impacting social norms.
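Part of the answer is measurement: unfair treatment of groups can be detected by comparing outcome rates across them. The sketch below shows one of the simplest such checks – the gap in approval rates between groups, sometimes called a demographic parity difference. The data and threshold are invented for illustration; real fairness auditing involves far more than a single metric.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) records."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Illustrative records only: (demographic_group, service_granted)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = approval_rates(decisions)
print(rates)              # {'A': 0.75, 'B': 0.25}
print(parity_gap(rates))  # 0.5 - a gap this large warrants investigation
```

A metric like this doesn’t prove unfairness on its own – base rates can legitimately differ – but routinely computing it is a prerequisite for the through-life governance the previous section describes.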

 

Driving assurance

 

Many of the questions highlighted above are extremely tricky to answer, which illustrates that, in the complex and constantly evolving world of AI, there’s a critical need to keep AI risk and governance at the forefront of our minds. Collectively, we must not lose sight of the need to evolve our usage and adoption of AI in a safe, ethical and responsible way.


 
The acceleration of AI and its convergence with autonomy will bring many benefits for humanity. However, without responsible AI development – which is by no means universal – we are also entering unknown territory in our relationship with machines.


 
This is where our AI Assessment Service can support. The comprehensive maturity assessment gives organisations the foundation to adopt AI tools and technologies safely, ethically and securely by providing key insights into where they are on their AI journey and their specific operating environment.


 
This enables us to assess an organisation’s ability to exploit AI technologies in ways that align with relevant guidance, regulations and legislation, while adhering to the highest ethical standards. Whether building, refining, buying or using AI, our AI Assessment Service supports the safe adoption and exploitation of AI technologies throughout their lifecycle.

 

Can the UK lead the charge?

 

Ultimately, the UK government’s AI Opportunities Action Plan represents a positive and exciting commitment to keeping the UK at the forefront of the AI industry. As a country that has been a leading and influential voice in this space, particularly around safety and regulation, the UK is well positioned to take hold of the opportunities available and drive economic growth.


 
However, despite the excitement around the potential applications of AI across the public sector and beyond, we must not lose sight of the need to ensure safety, security and assurance.


 
It’s clear that the use of AI in government has huge potential to significantly impact both individuals and society at large. But it must be implemented in a responsible way with the right ethics and governance structures in place to guard against unfair, unjust or discriminatory outcomes which go against personal freedoms.


 
Only with a foundation of assurance will we as a nation truly be able to capitalise on the AI revolution.