The regulatory framework around the use of artificial intelligence by local authorities is inadequate, but existing governance mechanisms can address concerns, writes Sue Chadwick.

The Local Digital Fund has recently announced funding awards for a range of “council-led digital projects”, including a housing management platform, a digital marketplace for child placements, a Special Educational Needs top-up funding service, and a digital waste service.

It is clear that the automation of routine local government functions, and the use of data analytics to process large amounts of information from a range of sources, offer benefits: they save time and money and free up officers to do more complex work.

It is equally clear that the regulatory framework for this transformation is inadequate. The Local Digital Fund announcements coincided with the publication of “The New Frontier: Artificial Intelligence at Work”, a report by the All-Party Parliamentary Group (APPG) on the Future of Work that highlighted the increasing prevalence of AI in the workplace, noted “marked gaps in legal protection” and called for “robust proposals to protect people and safeguard our fundamental values”.

These concerns are relevant to all use of AI in a local government context. The risks were comprehensively explored in Artificial Intelligence and Public Standards, a 2020 review by the Committee on Standards in Public Life (CSPL) that highlighted issues relevant to the Nolan Principles.

While the issues raised by the use of AI are common to everyone using it, regulatory responses diverge. At an international level, Canada has a Directive on Automated Decision-Making, New Zealand has an Algorithm Charter, and the USA has developed an Accountability Framework. The EU is proposing a new law: a draft AI Regulation, published earlier this year, defines “AI systems” and places severe restrictions on the use, or continued use, of technology falling within its definition of a “high risk” AI system, a category that includes some public services.

The UK is developing its own approach. An AI Strategy was published in September, signalling the Government’s intention “to build the most pro-innovation regulatory environment in the world”. Pillar 3 of the Strategy promises the development of an AI Governance Framework, the piloting of an AI Standards Hub, and the development of an AI standards engagement toolkit. A White Paper on AI Governance is due to be published in the new year.

AI also features in the current consultation on a new regulatory regime for data, where one of the proposals is “compulsory transparency reporting on the use of algorithms in decision-making for public authorities, government departments and government contractors using public data”. The Information Commissioner's Office supported this proposal in its response to the consultation, stating that “this proposal would help provide scrutiny” and encouraging the Government to consider how the FOI and EIR regimes could be used to support implementation.

These proposals fall some way short of the APPG’s recommendations, which propose an Accountability for Algorithms Act and a new public sector duty to “undertake, disclose and act on pre-emptive Algorithmic Impact Assessments”. Nor do they specifically address key recommendations in the CSPL report on artificial intelligence and public standards mentioned above.

For a local authority looking to adopt new digital tools there are some difficult choices to make. On the one hand, there is a need for speedy progress so that the benefits of the technology can be realised. On the other, there are at the very least reputational risks in adopting new technologies at a time when public trust in both government and new technologies is low. Legal risks may also be looming: algorithmic bias has already formed the basis of a successful challenge to the use of facial recognition technology by South Wales Police.

On a more optimistic note, the CSPL report comments that “effective governance of AI in the public sector does not require a radical overhaul of traditional risk management”. Existing governance mechanisms can be adjusted to accommodate the challenges of new technologies.

Finally, when Alan Turing considered whether computers could ‘think’ in the same way as humans, he concluded: “We can only see a short distance ahead, but we can see plenty there that needs to be done.” The Government has proposed regulatory sandboxes as a way of testing new technologies within the existing regulatory context while minimising risk. Given the increasing prevalence of AI in local government functions, there is a golden opportunity for an enterprising local authority to set up a regulatory sandbox for regulation itself.

Sue Chadwick is a strategic planning advisor at Pinsent Masons.