Last week the government published its long-awaited AI Opportunities Action Plan setting out how the UK can take advantage of its strengths in AI. Developed by entrepreneur and Chair of the UK’s Advanced Research and Invention Agency Matt Clifford, the plan posits that AI is the government’s most powerful lever to achieve its five key missions – none more so than driving economic growth.
The strategy argues convincingly for investment in the foundational infrastructure for AI: expanding ‘compute’; supporting access to high-quality public sector data for training AI, through a National Data Library; investing in home-grown skills and attracting international talent; and strengthening existing regulators to enable the development and adoption of safe and trusted AI, whilst avoiding over-regulation. Moreover, the Plan is clear that success will only be achieved if the government itself adopts AI, both to improve public services and to shape the development of the market through its purchasing power.
However, I would argue that the plan at once goes too far on some issues and not far enough on others.
Avoiding overfitting
A problem that data scientists seek to avoid when training an AI model is ‘overfitting’: the model works really well on the data it was trained on, but performs badly when applied to new data. I would argue that the Action Plan is in places at risk of ‘overfitting’ its recommendations to AI itself, rather than looking more broadly at the context of government as a whole and then asking where AI as a technology fits into that.
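For readers less familiar with the term, overfitting is easy to demonstrate. The sketch below (an illustrative example with made-up numbers, not anything from the Plan) fits two models to a handful of noisy points drawn from a simple straight-line relationship: a needlessly complex degree-7 polynomial and a plain straight line. The complex model passes almost exactly through the training points yet does worse on fresh data than the simple one.

```python
# Illustrative sketch of overfitting, assuming a true relationship y = x
# observed with a little noise. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Eight noisy training points drawn from the true line y = x.
x_train = np.linspace(0, 1, 8)
y_train = x_train + rng.normal(0, 0.05, size=x_train.shape)

# New, unseen data from the same underlying relationship.
x_test = np.linspace(0, 1, 50)
y_test = x_test

# An overfit model: a degree-7 polynomial through 8 points
# (enough parameters to memorise the noise exactly).
overfit = np.polynomial.polynomial.Polynomial.fit(x_train, y_train, deg=7)

# A sensible model: a straight line.
simple = np.polynomial.polynomial.Polynomial.fit(x_train, y_train, deg=1)

def mse(model, x, y):
    """Mean squared error of a fitted model on data (x, y)."""
    return float(np.mean((model(x) - y) ** 2))

print("degree-7 error on training data:", mse(overfit, x_train, y_train))
print("degree-7 error on new data:     ", mse(overfit, x_test, y_test))
print("degree-1 error on new data:     ", mse(simple, x_test, y_test))
```

The degree-7 model's training error is essentially zero while its error on new data is far larger, which is the signature of overfitting; the analogy in the article is a plan tuned too tightly to AI alone rather than to the wider context it must operate in.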
Let me give a couple of examples of this. The first is AI Growth Zones. These are proposed areas where the planning process will be streamlined in order to accelerate the building of data centres. Certainly this is a laudable aim, but in the context of the government’s growth mission, should the aim of planning reform not be to support growth in all parts of the UK?
The second example relates to regulators. The plan recommends that regulators report annually on how they have enabled growth driven by AI in their sector. Another laudable aim, but is there not a risk of creating excess bureaucracy? Regulators will need to request this information from AI companies, potentially meaning firms have to respond to numerous forms from different regulators seeking similar information.
In some ways this risk is already being addressed. I note that the government’s response to the Action Plan, which it published in tandem, agrees with the recommendation about regulator annual reporting but actually only commits to “publicly reporting” on activities. Moreover, the Plan’s recommended AI leads for each mission should help to ensure that AI-related activities are in the service of wider goals. However, this broader purpose is also at the core of the second problem: the Plan’s failure to go far enough in encouraging adoption in public services.
Build it and they won’t come
The Action Plan is absolutely right to highlight the promise that AI holds for improving public services. Indeed, it was particularly powerful to hear the Prime Minister speak about how AI may help to free up public servants to spend more time on face-to-face work; the ‘service’ that drew many of them into public service in the first place.
Despite this potential, and the common sense of the scan-pilot-scale model that the Plan articulates, I worry that it significantly underestimates the scale of the challenge.
Let me take the NHS as an example. The Action Plan highlights the success of the £21 million NHS AI Diagnostic Fund in rolling out AI diagnostic tools nationwide. I’m not questioning the success of this fund, but if we look at the bigger picture things are not at all rosy. Lord Darzi’s Independent Investigation of the NHS in England is clear that technology has not been effectively utilised by the NHS over the last decade or so.
To be clear, this is not about the amount of money made available, although clearly money is required to invest in technology. The problem is the innovation model. What the NHS, and indeed public services more broadly, require is fundamental reform to the overall delivery approach.
The success of health services in recent decades means the challenges they now face are far greater in complexity, involving multiple overlapping issues. Technology, including AI, can help with this, but it does not define or drive the new model. All experts agree that complex health needs should be managed first and foremost in the community, not in hospital; and yet the Darzi report shows that the NHS has actually become more dominated by hospital-based care over the last two decades.
Clearly an action plan on AI is not the place for the government to set out its model of public sector reform. However, once more in the spirit of cross-government working, I think that it is important to avoid an overly simplistic view that suggests that success will come from scanning for, piloting, and scaling some AI without an accompanying theory of overall reform.
Seizing the AI opportunity
AI has huge potential to improve the lives of British citizens, and the UK is not only well-placed to take advantage but is genuinely world-class in some areas, including on AI safety and governance. The AI Opportunities Action Plan, and the government’s acceptance of its recommendations, represents an important milestone in the UK seizing this potential.
The challenge now is to avoid the sugar-rush of a tech-first approach that makes bold announcements but leads to little real change – instead ensuring that AI is put to work in the service of the government’s five missions and transforming public services for the better. True success will mean shifting AI from a talking point to an invisible enabler of better services.
About Oliver
Oliver Smith is an independent consultant and the Director of Daedalus Futures, a responsible AI consultancy. He has held leadership positions in the private sector, civil society, and government, including stints at the Department of Health and the Prime Minister’s Strategy Unit under Tony Blair.