We need real decisions about artificial intelligence

The regulatory framework around the use of artificial intelligence by local authorities is inadequate, but existing governance mechanisms can address concerns, writes Sue Chadwick.

The Local Digital Fund (LDF) has recently announced funding awards for a range of “council-led digital projects” including a housing management platform, a digital marketplace for child placements, a Special Educational Needs top-up funding service, and a digital waste service.

It is clear that the automation of routine local government functions, and the use of data analytics to process large amounts of information from a range of sources, offer real benefits: they save time and money and free up officers to do more complex work.

It is equally clear that the regulatory framework for this transformation is inadequate. The LDF announcements coincided with the publication of “The New Frontier: Artificial Intelligence at Work”, a report by the All-Party Parliamentary Group (APPG) on the Future of Work that highlighted the increasing prevalence of AI in the workplace, noted “marked gaps in legal protection” and called for “robust proposals to protect people and safeguard our fundamental values”.

These concerns are relevant to all use of AI in a local government context. The risks were comprehensively explored in Artificial Intelligence and Public Standards, a 2020 review by the Committee on Standards in Public Life (CSPL) that highlighted issues relevant to the Nolan Principles as follows:

  • Openness: the CSPL noted a lack of information about the use of AI and a corresponding lack of transparency.
  • Accountability: the use of AI makes it more difficult to show who is responsible for a decision or to explain how it was reached.
  • Objectivity: AI can embed and even magnify existing bias, which the CSPL noted as a “serious concern”.

While the issues raised by the use of AI are common to everyone using it, regulatory responses diverge. At an international level, Canada has a Directive on Automated Decision-Making, New Zealand has an Algorithm Charter, and the USA has developed an Accountability Framework. The EU is proposing a new law: it published a draft AI Regulation earlier this year that defined “AI systems” and placed severe restrictions on the use, or continued use, of technology falling within its definition of a “high-risk” AI system, including some systems used to deliver public services.

The UK is developing its own approach. An AI Strategy was published in September, signalling the Government’s intention “to build the most pro-innovation regulatory environment in the world”. Pillar 3 of the Strategy promises the development of an AI governance framework, the piloting of an AI Standards Hub and the development of an AI standards engagement toolkit. A White Paper on AI governance is due to be published in the new year.

AI also features in the current consultation on a new regulatory regime for data, where one of the proposals is “compulsory transparency reporting on the use of algorithms in decision-making for public authorities, government departments and government contractors using public data”. The proposal has been supported by the Information Commissioner’s Office (ICO) in its response to the consultation, in which the ICO stated that “this proposal would help provide scrutiny” and encouraged the Government to consider how the Freedom of Information (FOI) and Environmental Information Regulations (EIR) regimes could be used to support implementation.

These proposals fall some way short of the APPG’s recommendations, which include an Accountability for Algorithms Act and a new public sector duty to “undertake, disclose and act on pre-emptive Algorithmic Impact Assessments”. In addition, they do not specifically address key recommendations in the CSPL report on artificial intelligence and public standards mentioned above, such as:

  • Assessment of the risks of proposed AI systems on public standards at project design stage, and an ongoing standards review (Recommendation 9).
  • Conscious engagement with issues of bias and discrimination (Recommendation 10).
  • Formal oversight mechanisms that allow scrutiny of AI systems (Recommendation 13).
  • Continuous training and education for the employees of public service providers, whether public or private (Recommendation 15).

For a local authority looking to adopt new digital tools there are some difficult choices to make. On the one hand, there is a need for speedy progress so that the benefits of the technology can be realised. On the other, there are at the very least reputational risks in adopting new technologies at a time when public trust in both government and new technologies is low. There may also be legal risks: algorithmic bias has already been used as the basis for a successful challenge to the use of facial recognition technology by South Wales Police (R (Bridges) v Chief Constable of South Wales Police).

On a more optimistic note, the CSPL report comments that “effective governance of AI in the public sector does not require a radical overhaul of traditional risk management”. Existing governance mechanisms can be adjusted to accommodate the challenges of new technologies; here are some ideas:

  • It should not be too difficult to develop and adopt clear strategic policies on the use of new technologies. The GLA has recently published an Emerging Technology Charter, accurately described as a “set of practical and ethical guidelines”, and has encouraged other local authorities to adopt it or use it as the basis of their own guidance. Local authorities that want to develop more detailed guidance could also draw on the seven points in the Government’s own Ethics, Transparency and Accountability Framework for Automated Decision-Making, or on the CBI’s guidance on AI ethics in practice, for inspiration.
  • Putting policies and codes into action is the next step. Although they are easier to draft and adopt than they are to implement, there is some useful guidance on how to make transparent decisions about the use of AI within existing decision-making processes. For example, the Government’s Guidelines for AI Procurement are a useful starting point when buying external software or services, and the ICO’s emerging AI and data protection toolkit is a good way of assessing data protection risks.
  • For the local authority using complex analytical modelling and/or machine learning, guidance developed jointly by the ICO and the Alan Turing Institute, Explaining Decisions Made with AI, includes an appendix of algorithmic techniques and their relative explainability that could be used as the basis of a transparent risk assessment (a sketch of what such a record might look like follows this list).
  • There are even some pre-existing decisions that could be used as a template: for example, this decision by the GLA to procure a Retrospective Facial Recognition System.
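
To make the risk-assessment idea concrete, here is a minimal sketch of how a council might capture such an assessment as structured data suitable for the kind of transparency reporting described earlier. It is a hypothetical illustration in Python: the record type, field names and example values are assumptions for the purposes of this article, loosely reflecting themes from the CSPL recommendations and the ICO/Alan Turing Institute guidance, not any official standard.

    # Hypothetical sketch of an algorithmic risk-assessment record.
    # Field names are illustrative only, not drawn from any official template.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class AlgorithmicRiskRecord:
        system_name: str              # e.g. a housing repairs triage model
        purpose: str                  # the decision the system supports
        technique: str                # e.g. "decision tree", "neural network"
        explainability: str           # relative explainability of the technique
        human_in_the_loop: bool       # is a final human decision required?
        bias_review_completed: bool   # engagement with bias (Recommendation 10)
        next_review_date: str         # ongoing standards review (Recommendation 9)
        notes: list[str] = field(default_factory=list)

        def to_json(self) -> str:
            """Serialise the record for publication or transparency reporting."""
            return json.dumps(asdict(self), indent=2)

    # Example record for an imaginary triage tool.
    record = AlgorithmicRiskRecord(
        system_name="Housing repairs triage model",
        purpose="Prioritise repair requests by urgency",
        technique="Decision tree",
        explainability="High: rules can be traced and published",
        human_in_the_loop=True,
        bias_review_completed=True,
        next_review_date="2022-06-01",
        notes=["Training data reviewed for protected characteristics"],
    )
    print(record.to_json())

A structured record like this, published for each system in use, would go some way towards the openness and accountability concerns the CSPL identified.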

Finally, when Alan Turing considered whether computers could ‘think’ in the same way as humans, he concluded: “We can only see a short distance ahead, but we can see plenty there that needs to be done.” The Government has proposed the use of regulatory sandboxes as a way of testing new technologies within the existing regulatory context while minimising risk. Given the increasing prevalence of AI in local government functions, there is a golden opportunity for an enterprising local authority to set up a regulatory sandbox for regulation itself.

Sue Chadwick is a strategic planning advisor at Pinsent Masons.