2. Solutions become problems

An alternative theory of power

This chapter sets out a process through which we can trace how power has shifted in Britain in recent decades.

Power’s three dimensions

In the relative calm of the post-war United States, one theory of power was dominant. As outlined by scholars such as the Yale political scientist Robert Dahl, this pluralist theory held that the only tenable way to measure who held how much power was to focus on visible conflicts, and on who won them.

By the 1970s, this looked hopelessly complacent. In Power: A Radical View (1974), the British sociologist Steven Lukes set out an alternative. Lukes was writing in a Britain where, rather like today, the underlying political settlement was in flux and crisis – when even ‘the governability of the state’ seemed in question.1

Scholars had already argued that there was a second dimension to how power worked, below the level of visible conflict, which functioned through the quiet influence of bias. Which interests, they asked, had the power to decide which issues were out of bounds? Beneath this, Lukes identified a third, more insidious, dimension of power. The bias of a system does not necessarily require individuals to make conscious choices. It can be sustained by group behaviours and institutional practice. He questioned the assumption that ‘if people feel no grievances, then they have no interests that are harmed by the use of power’. A grievance against such harms, he suggested, could be present in ‘a vague feeling of unease’.2 The feeling induced by watching your high street decline, for example, but not really knowing who to blame.

As Hilhorst writes in her ethnographic study of political attitudes in post-industrial English towns, the fact that many residents had come to see the political system through a ‘corruption frame’ undermined its legitimacy in their eyes – but also ‘fostered acquiescence to it, as it was hard to mobilise when confronted with ill-defined, overwhelmingly powerful forces’.3

A more structural understanding of power might restore agency and reduce the appeal of all-encompassing imagery of corruption, rigging and scams. Lukes’ model offers a way to locate where people are chafing against power without being able to identify its nature, and how, with a little help, they might do so.

How new ideas shift power

Recent British history suggests that ideas move through Lukes’ three dimensions over time. In the crises of the 1970s, heretical new ideas came into conflict with dominant old ones and the interests they protected, before eventually those new ideas won out, entrenching new concentrations of power – and laying the ground for what we now regard as political common sense. This process has begun to happen again, and must now be completed.

At one level, this is not an ideological phenomenon: any given distribution of power may well eventually become problematic. It is striking, for example, how similar the arguments in favour of nationalising Britain’s utilities in the 1940s were to the arguments for privatising them again in the 1980s. In each case, the dominant thinking concentrated power in the hands of one interest. In each case, heretics proposed to shake up the ownership of the system – and so break the concentration of power, driving out waste and inefficiency. Whether power was to be shifted from private sector to public, or from public to private, the promise was that the change would push prices down and investment up. Yet once this was done, in each case, customers eventually complained of prices rising, and of worsening service.

Over time, solutions become problems. As power concentrates, those holding it learn to game the systems designed to constrain them.

Drawing this together suggests a gradual process whereby today’s distribution of power and the ideas that underpin it became established, and have now come under challenge. In template form, it might run something like this:

I. Problem: In the 1960s and 1970s, the post-war political model, based on high-taxing, high-spending, assertively interventionist government, hits a series of crises. It is challenged by a series of heretical ideas (Lukes’ first dimension). These ideas call for the reduction of the power of the state and its allied interests, on the basis that this will solve problems and prevent them happening again.

II. Power shift: These heretical ideas and their political champions win power. After much contestation, these ideas become accepted as normal, and can rule the old, opposing ideas off the agenda (Lukes’ second dimension). This moves the boundaries of the politically ‘possible’. The power shift disempowers those groups whose interests were prioritised by the old ideas and concentrates power with those whose interests are prioritised by the new ideas (some groups remain disempowered throughout).

III. Overextension: As the new model is applied more broadly, and for longer, it starts to generate problems, partly caused by what it ignores, and whom it disempowers. It becomes a barrier to responding to new problems, some of which it is causing.

IV. Entrenchment: By this time, however, the new settlement is embedded in institutional culture, standards, norms, education and group behaviour (Lukes’ third dimension). This is underpinned by the fear of allowing a resurgence of the old crises that the now-dominant settlement was designed to prevent.

V. Public discontent: As challenges to this settlement grow, it has to assert its power and its founding ideas more overtly; politicians acknowledge public disempowerment and discontent but feel unable to respond because the orthodox approach, based on once-vivid problems and fears, has become embedded both psychologically and institutionally (back to Lukes’ second dimension). The public develops despairing theories about why they are disempowered, as outlined in Chapter 1. Extreme voices promise simple, illusory, all-out solutions.

VI. Government response: Public discontent compels government to acknowledge that old fears and norms are no longer the most pressing concern, that the long-dominant ideas they have held in place are outdated, and that new ideas emerging in response to new crises need to be heard – that power needs to shift again (back to Lukes’ first dimension). The challenge is to be radical enough to scrap outdated ideas and re-empower the public, and so to render more extreme solutions unnecessary.

‘Neutral’ ideas that hold the post-1979 settlement in place

The heretical ideas of the 1970s also became ‘common sense’ by another means. They were strengthened, particularly under New Labour, by the advance of a set of ‘neutral’ concepts aimed at constraining power in the name of inclusivity, objectivity and impartiality. But these concepts are not neutral; they have ended up helping to entrench the post-1970s political settlement. And what began as an effort to address distrust in the state has ended up exacerbating it.

Quantification

The first of these ideas is that good public policy should not be based, any more than absolutely necessary, on fickle human judgement, but on numerical data. In 1995, the historian of science Theodore Porter traced the roots of using quantification as a basis for policy to the expansion of the American federal state: its need to make decisions of greater scope and scale than before, and to justify them to the public.

The state’s expansion discredited the idea of relying on the supposed leadership qualities of ‘born-to-rule’ elites. Quantification offered a way to justify decisions more scientifically. It was a way to ‘break that [elite] culture down, or to compensate for its absence’. Porter suggested that this had likely helped loosen the grip of ‘old-boy networks’, opening professional culture to women and ethnic minorities.4

But something else was driving this change too: the sense that the American public did not trust the government. Reliance on nothing more than ‘seasoned judgment’ came to seem ‘undemocratic’.5 The ‘transition from expert judgment to explicit decision criteria did not grow out of the attempts of powerful insiders to make better decisions’. Rather, it ‘emerged as a strategy of impersonality in response to their exposure to pressures from outside’.6

Like many old solutions applied too extensively, quantification eventually caused problems. In response to that ‘overwhelming public distrust’, its signature method – cost-benefit analysis – was deployed more and more widely, becoming ‘a universal standard of rationality, backed up by thousands of pages of rules’.7

The underlying problem Porter identifies is that trying to exercise power objectively on the basis of faith in numbers involves ruling many things out of consideration – and choosing which people and issues are to be counted is intensely political. As Porter puts it, ‘numbers have often been an agency for acting on people, exercising power over them’, even turning people ‘into objects to be manipulated’.8 However much quantification was intended to democratise, it also disempowered. It was trying to be inclusive by excluding.

In Seeing Like a State (1998), the American political scientist James C. Scott cast this phenomenon as part of a broader administrative tendency to simplify the world to make it legible enough to rule. Administrators sought to mould reality to their models, marginalising much of value – particularly informal systems and local knowledge and culture. Instead, plans imagined ‘standardized citizens… uniform in their needs and even interchangeable’ who ‘for the purposes of the planning exercise, [have] no gender, no tastes, no history, no values, no opinions or original ideas, no traditions, and no distinctive personalities to contribute to the enterprise’.9

But this model misses out several vital things. First, ‘patterns and norms of social trust, community, and cooperation, without which market exchange is inconceivable’. Second, what the model doesn’t know. And third, what the people it belittles do know. Planners ‘regarded themselves as far smarter and far-seeing than they really were and, at the same time, regarded their subjects as far more stupid and incompetent than they really were’.10

Scott discerned this process at work not just in bureaucracies, but in big businesses:

‘large-scale capitalism is just as much an agency of homogenization, uniformity, grids, and heroic simplification as the state is, with the difference being that, for capitalists, simplification must pay. A market necessarily reduces quality to quantity via the price mechanism and promotes standardization; in markets, money talks, not people. Today, global capitalism is perhaps the most powerful force for homogenization, whereas the state may in some instances be the defender of local difference and variety.’11

This approach can spot bad projects and save money, but it can also close off transformative development. And trusting number-driven systems over humans can trigger disaster. Fujitsu’s Horizon software system for the Post Office was thought to be flawless, so logically any postmasters whose figures didn’t add up were thieves. Only after many lives had been ruined did it dawn on those in power that the fault might lie with Horizon.

Economic man

From the 1970s onwards, this focus on the measurable came together with one of the heretical ideas that proposed radical solutions to the decade’s crises. Homo economicus – ‘economic man’ – is a notional being motivated solely by rational self-interest based on sufficient information. This was a usefully computable figure for economists’ models. It was also what free market economists and theorists believed humans were actually like.

Crucially, ‘public choice’ theorists argued that self-interest drove not just businessmen, but state officials. As Gordon Tullock and James M. Buchanan argued: ‘politicians and bureaucrats, far from following any vocation or devotion to public service as they often professed, were in fact purely economically motivated’.12

In the 1990s, when Porter and Scott were writing – the era of the ‘end of history’ and endless growth – the ‘economic man’ model seemed common sense. But if this version of the world were true, the 2008 financial crash would never have happened.

Amid the wreckage, horrified conservatives like the former banker and future Conservative MP Jesse Norman re-examined the foundational ideas the crisis had exposed, and found them to be decidedly rickety. The crash had come about, Norman wrote soon afterwards, ‘because people and markets did not behave in the standard way described in the economic textbooks’.13 He acknowledged that economists themselves had been concerned about the unreality of their models for decades, but argued that public policy had carried on regardless.

Norman argued that conventional economics in general and cost-benefit analysis in particular had led to several fallacies, including the public choice theorists’ insistence that ‘individuals maximise their… gain’ and ‘firms maximise their profits’.14 Such thinking suggested that ‘there can be only one, hyper-libertarian, variety of capitalism’. And so, ‘just at the point when we need an intelligent debate about how the UK and other modern market economies should develop, our most basic economic theory seems to make that debate impossible’.15

Once-controversial ideas had become norms, embedded by group behaviours and institutional practice. A system designed to constrain one form of power (the subjective judgement of politicians) had entrenched another (finance). Seventeen years and six prime ministers later, this settlement is still in place. Today, it is once again caught up in a crisis of its own making: one that is now not just economic, but political.

In Late Soviet Britain (2023), the political economist Abby Innes argues that the closed system through which Britain’s administrators see the world is delusional – in a way that is reminiscent of the sclerotic, dying USSR:

‘When it comes to the mechanics of government, both systems justify a near identical methodology of quantification, forecasting, target setting: output planning in the neoliberal case and economy-wide outputs in the Soviet. … These techniques will tend to fail around any task characterised by uncertainty, intricacy, interdependence and evolution, which are precisely the qualities of most of the tasks uploaded to the modern democratic state.’16

The consequence has been that the state has slowly been ‘stripped of its capacity for economic government and, over time, for prudential, strategic action, as its offices, authority and revenues are subordinated to market-like mechanisms’.17 In Innes’ view, contextual knowledge has been stripped out; deep sector expertise is not rewarded. The British state’s long-nurtured capacity to solve problems has withered.

Trying to think in numbers, she argues, has weakened the state’s ability to distinguish between those parts of the economy that generate wealth and those which extract it. The private sector is seen always as the source of solutions, never of economic dysfunction. The state has developed ‘pathologies that span from administrative rigidity to rising costs, from rent-seeking enterprises to corporate state capture’. It has demoralised the state’s staff, and has now provoked ‘a crisis in the legitimacy of the governing system itself’.18

The rule of rules

Trust in numbers – rather than political judgement – has a counterpart in words. Over the last few decades, the exercise of power in Britain has shifted away from emphasising political judgement towards a preference for ruling through rules.

A quarter of a century ago, Michael Moran identified this in ‘the rise of the regulatory state’, observing that:

‘Vast new areas of social and economic life have been colonised by law and by regulatory agencies. The food we eat, the physical conditions we work under, the machines and equipment we use in our home, office and on the road—all are increasingly subject to legal controls, usually administered by a specialised agency.’19

As with the adoption of quantification to overcome elitism, exclusion, and public distrust, the turn from fallible human judgement to impartial depersonalised rules was made with good intentions, and doubtless produced some positive outcomes. Moran attributes the shift in part to privatisation and the interventions of the European Commission, but also to the impact of a run of scandals, and of citizens’ declining tolerance of risk.

But solutions, pursued for long enough, tend to create problems. The urban management scholar Marc J. Dunkelman argues that in the post-war United States, progressives worked hard to diffuse power in order to protect the public against arrogant, domineering planners who cut swathes through working-class neighbourhoods, for example. In Why Nothing Works, Dunkelman proposes that, however well-intentioned, this allergy to power ‘now serves not only to thwart abuse, but also to undermine the government’s ability to do big things’. Reformers have inserted ‘so many checks into the System that government has been rendered incompetent’.20

Once again, one development that eventually made this difficult to ignore was the 2008 crash. Looking back from 2024, the former investment banker Dan Davies argued that we had constructed a system so predicated on rules that when it went catastrophically wrong, it seemed that nothing was anyone’s fault.

Davies contextualises this as one instance of a much more general phenomenon: the ‘accountability sink’. He defines this as ‘the delegation of the decision to a rule book, removing the human from the process and thereby severing the connection that’s needed in order for the concept of accountability to make sense’.23 This is at once a concentration and a dispersal of power, which goes some way to explaining why such systems can disempower both the customers and citizens who deal with organisations run this way and the employees within them. Yet this approach is continually reaffirmed and reinforced by professional norms and networks.

Looking at this in terms of Lukes’ theory of power prompts a question: what ideas does a given accountability sink render unassailable – and what ideas does it rule out of bounds? (Some ideas should be out of bounds, but when this goes too far it can corrode the legitimacy of the system and give extremists a way to lend specious legitimacy to toxic ideas.) As we will see, accountability sinks are one way that an ideological model can become entrenched, defining what matters and what does not through apparently impartial rules, to the point where questioning the underlying idea seems impolite, or mad.

Economic man respects the guidelines

In its economic application, Davies’ analysis suggests that the rule of rules has dovetailed with post-1979 ideology: ‘the political system has used “the market” as an accountability shield since the early 1980s’.24

As with quantification, this way of thinking seemed like common sense, until it didn’t. Once again, if this version of the world were true, the 2008 financial crash would never have happened. As Davies writes, ‘the basic problem is that systems in general need to have mechanisms to reorganise themselves when the complexity of their environment gets too much to bear. But the high-level governing systems of the industrial world – economic policy and business management – had some defects and blind spots which prevented this from happening.’25

The American political scientist Lisa Miller argues that rules-based power actually weakens pro-democracy politicians’ capacity to respond in a politically effective way:

‘constraining government power does not eliminate the problem of concentrated power. On the contrary, it provides narrowly focused, resource-rich private interests with opportunities to constrain policy reforms that do not serve their interests. Political systems with many checkpoints have a powerful bias in favor of the status quo, which generally benefits elites—particularly economic elites.’26

British government is not tied down by the extreme constraints of the US constitution – but as we will explore, we are witnessing a British version of this phenomenon too.

Disempowering the public

One reason for moving to more explicitly rules-based systems was to eliminate abuses of power by authority figures who had been trusted until their exploitation of that trust was exposed, as in various instances of horrifying criminality revealed in the police service, children’s homes and general medical practice. Quantification and rules were both adopted in response to public distrust.

While this may have had some success, however, the prolonged and expanding use of these approaches has institutionalised the distrust they were meant to address. Bureaucratic and legalistic systems feed populist theories of uncaring power. And as David Willetts has suggested, one reason politicians’ language often sounds so empty of human feeling is that ‘they’re working on an income analysis shaped by economists’.27

At the same time, both markets and regulation have been extended (often in symbiosis) beyond domains where their use makes clear sense. This disempowers the public, not least by depriving them of an effective means to register their objections within the system.28

Given the rise of Donald Trump’s ‘Make America Great Again’ (MAGA) movement, Dunkelman argues that ‘by helping to render government incompetent’, progressives ‘have pried open the door for MAGA-style populism. We share culpability for the public’s frustration.’29 Davies warns that ‘the only ways [the public] have of expressing their discontent seem to be highly destructive of the system itself’.30

The sense that sticking stubbornly to pre-ordained rules is blocking necessary, common-sense action – as articulated by ministers on planning, for example – is a marker of how problematic this approach’s dispersal of power has become. This has corroded trust that the system has ordinary people’s best interests at heart.

But this problem has been exacerbated by some of the phenomena visible in the opinion research explored earlier. The public’s frustrating encounters with accountability sinks, whether as citizens, employees or customers, are bad enough. But the public has also noticed that the rules do not appear to apply to those in positions of power – from the bankers whose hubris caused the crash, to richly compensated water company chiefs, to those who turned a blind eye to the crimes of grooming gangs. This gives people the impression that we have gone beyond ‘one rule for them, another rule for us’, to something more like ‘implacable rules-based power imposed on us, impunity for those imposing it’. What was the point of institutionalising distrust by imposing onerous systems of rules to constrain the powerful if, in many areas of life, the powerful have found a way round them?

At the same time, it seems ever more evident to working-class voters that ‘things are no longer working as they used to, and that the old rules no longer apply’ – and that, in this context, ‘radical change has powerful appeal’.31

Where might we find these gradual power shifts at work, in ways that drive the public’s sense of disempowerment, harm the reputation of mainstream democracy, and make it more difficult for democratic politicians to demonstrate that they can exercise the power of the state for good?