Managing AI Risks in Local Government
Amardeep Gill and Tayler-Mae Porter provide practical guidance for legal teams on the use of AI in local government.
The integration of artificial intelligence into local authority operations represents both an unprecedented opportunity and a formidable governance challenge. From automated planning application assessments to predictive models for social care interventions, AI systems promise efficiency gains and enhanced service delivery. Yet for local government lawyers, the path forward demands careful navigation through a complex landscape of legal obligations, ethical considerations, and emerging regulatory requirements. This article provides practical guidance on AI deployment that protects both the public interest and organisational integrity.
Getting Data Protection Right
The deployment of AI systems within local authorities introduces multifaceted risks that require systematic mitigation strategies. At the forefront stands data protection compliance, where the UK data protection regime creates stringent obligations that many AI applications struggle to satisfy. Local authorities must recognise that AI systems processing personal data trigger heightened scrutiny, particularly when automated decision-making affects individual rights and freedoms.
Data protection impact assessments (DPIAs) are mandatory wherever processing is likely to result in a high risk to individuals, a threshold that covers almost every AI system a local authority might deploy, particularly where automated decision-making is involved. But these assessments cannot be perfunctory box-ticking exercises. Your DPIA must describe the processing (what AI system is being deployed, what it does, and what data it processes); genuinely interrogate necessity and proportionality (why the AI system is needed, whether it is proportionate to the benefits, and whether the same outcome could be achieved by less intrusive means); and identify risks to individuals, including discrimination or bias, inaccurate predictions leading to inappropriate interventions, privacy intrusion from processing sensitive data, and a lack of transparency that makes it difficult for individuals to understand or challenge decisions.
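For authorities that want to make this discipline auditable, it can help to capture each assessment in a structured record that flags gaps before sign-off. The short Python sketch below is purely illustrative: the field names, risk categories and checks are assumptions for this sketch, not a prescribed ICO format.

```python
from dataclasses import dataclass, field

# Illustrative DPIA record; field names and checks are assumptions for
# this sketch, not a prescribed ICO format.

@dataclass
class Risk:
    description: str   # e.g. "bias against a protected group"
    likelihood: str    # "low" / "medium" / "high"
    severity: str      # "low" / "medium" / "high"
    mitigation: str    # planned measure, e.g. "human review of all referrals"

@dataclass
class DPIARecord:
    system_name: str
    processing_description: str       # what the AI does and what data it uses
    lawful_basis: str                 # e.g. "public task (UK GDPR Art. 6(1)(e))"
    necessity_justification: str      # why the AI is needed at all
    less_intrusive_alternatives: str  # what was considered and why rejected
    risks: list[Risk] = field(default_factory=list)

    def outstanding_issues(self) -> list[str]:
        """Flag gaps that would make the DPIA a box-ticking exercise."""
        issues = []
        if not self.necessity_justification.strip():
            issues.append("necessity not justified")
        if not self.less_intrusive_alternatives.strip():
            issues.append("less intrusive alternatives not considered")
        if not self.risks:
            issues.append("no risks to individuals identified")
        issues += [f"risk without mitigation: {r.description}"
                   for r in self.risks if not r.mitigation.strip()]
        return issues
```

A record along these lines also gives legal teams something concrete to review alongside the supplier's own documentation.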
Any processing of personal data needs a lawful basis under the UK GDPR, and when it comes to AI processing the lawful basis demands particular attention. Whilst the public task basis typically provides the foundation for most local authority functions, AI systems often require additional personal data or novel processing activities that stretch beyond traditional service delivery. Authorities must carefully assess whether their existing legal basis genuinely covers AI-enhanced operations or whether supplementary justification is required. Reliance on legitimate interests should be approached with particular caution, given that public authorities face restrictions on this basis and must demonstrate compelling justification.
Cybersecurity measures
Cybersecurity measures for AI systems extend beyond conventional IT security protocols. AI systems present unique vulnerabilities, including adversarial attacks designed to manipulate algorithmic outputs, corruption of training datasets, and model extraction attempts that steal proprietary algorithms. Local authorities must ensure that AI vendors provide robust security architectures, including encryption for data at rest and in transit, access controls that limit system interaction to authorised personnel, and audit trails that document all system queries and outputs.
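To make the audit-trail requirement concrete, the sketch below shows one way an application-level log of AI queries and outputs might look. The field names and the simple line-per-record storage are assumptions for illustration; production logging would need to sit within the authority's wider security architecture and retention policies.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative application-level audit trail for AI queries; field names
# and line-per-record file storage are assumptions for this sketch.

logger = logging.getLogger("ai_audit")
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_ai_query(user_id, system_name, query, output_summary):
    """Record who asked what of which AI system, and what came back."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,                # authorised member of staff
        "system": system_name,
        "query": query,                    # avoid logging raw personal data here
        "output_summary": output_summary,  # e.g. a recommendation code, not free text
    }
    logger.info(json.dumps(record))

log_ai_query("officer-0421", "benefits-triage",
             "case ref 12345 eligibility check", "recommend: manual review")
```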
Procurement processes offer a critical opportunity to embed security requirements. When commissioning AI systems, authorities should demand comprehensive security documentation, including penetration testing results, vulnerability assessments, and incident response protocols. Contracts must clearly allocate responsibility for security breaches and establish service level agreements that mandate prompt patching and updates. The Procurement Act 2023, with its increased flexibility and emphasis on transparency, provides a framework for ensuring that security considerations receive appropriate weight in supplier selection.
Tackling Algorithmic Bias Head-On
Local authorities subject to the Public Sector Equality Duty face particular risks from AI's well-documented algorithmic bias issues. The reality is sobering. In healthcare, predictive and diagnostic AI systems have shown reduced accuracy for women and minority groups due to underrepresentation in training data. A Guardian investigation found UK agencies using AI with inadvertently biased outcomes - for example, a DWP fraud detector disproportionately flagged claims from people of certain nationalities, including Bulgarian nationals. These are not theoretical risks but real failures with real consequences: legal liability, regulatory action, and, most importantly, harm to individuals.
Getting bias mitigation right starts with thorough data auditing. Authorities need to examine training datasets themselves, or require their suppliers to do so, for representational imbalances and historical patterns of discrimination that correlate with protected characteristics. Contracts must require comprehensive bias testing before deployment and at regular intervals throughout the term, covering all nine protected characteristics (not just race and gender) and using appropriate statistical methods, with suppliers providing detailed bias testing reports.
Just as important is ongoing monitoring once systems go live. Supply contracts should require regular reports analysing AI decisions by protected characteristic groups and statistical analysis showing whether outcomes differ across groups. When disparities emerge, authorities must be ready to act, whether through algorithmic adjustment, additional human oversight, or system suspension.
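As an illustration of the kind of statistical analysis such reports might contain, the sketch below compares favourable-outcome rates across groups and flags any group falling below a chosen threshold relative to the best-performing group. The 0.8 ("four-fifths") threshold is a common heuristic rather than a UK legal standard, and the data layout is assumed purely for illustration.

```python
from collections import defaultdict

# Illustrative disparity check across protected characteristic groups.
# The 0.8 ("four-fifths") threshold is a heuristic, not a UK legal standard;
# real monitoring should be agreed with analysts and data protection officers.

def selection_rates(decisions):
    """decisions: iterable of (group, outcome), outcome True when the AI
    recommended the favourable result (e.g. benefit granted)."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparities(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(rate / best, 2) for g, rate in rates.items()
            if best > 0 and rate / best < threshold}

# Example: group_b's favourable-outcome rate is only 69% of group_a's,
# so it would be flagged for further investigation.
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 55 + [("group_b", False)] * 45
print(flag_disparities(sample))   # {'group_b': 0.69}
```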
Even when humans are involved in reviewing AI decisions, there's a well-documented phenomenon called "automation bias": the tendency to over-rely on automated systems and accept their recommendations without sufficient critical evaluation. It happens because humans assume the AI is more accurate than it actually is, feel pressure to process decisions quickly, and lack the confidence to override "sophisticated" AI systems, and because performance metrics often reward speed over quality.
Authorities must actively combat automation bias through training that emphasises human responsibility and accountability, performance metrics that value good decision-making and not just speed, a culture that empowers staff to override the AI when appropriate, and regular review of decision quality.
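One simple, illustrative measure of decision quality is the rate at which reviewers actually depart from the AI's recommendation: a near-zero override rate can be an early warning of rubber-stamping rather than evidence of accuracy. The threshold in the sketch below is an assumption for illustration only.

```python
# Illustrative rubber-stamping check: if reviewers almost never depart from
# the AI recommendation, that may signal automation bias rather than accuracy.
# The 2% warning threshold is an assumption for illustration only.

def override_rate(cases):
    """cases: iterable of (ai_recommendation, human_decision) pairs."""
    cases = list(cases)
    overrides = sum(1 for ai, human in cases if ai != human)
    return overrides / len(cases) if cases else 0.0

def review_needed(cases, warn_below=0.02):
    rate = override_rate(cases)
    return rate < warn_below, rate

flag, rate = review_needed([("refer", "refer")] * 198
                           + [("refer", "no_action")] * 2)
print(flag, rate)   # True 0.01 - reviewers departed from the AI in only 1% of cases
```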
Transparency Frameworks for AI Decision-Making
Transparency in AI-driven public decision-making is not merely good practice but a legal imperative flowing from multiple statutory obligations. Yet AI systems, particularly those employing machine learning techniques, often operate as "black boxes" whose decision-making logic resists straightforward explanation.
Explainability requirements must be tailored to the specific context and consequences of AI deployment. For high-stakes decisions affecting individual rights, such as benefit eligibility determinations or child protection risk assessments, authorities must be able to provide meaningful explanations of how specific outcomes were reached. This extends beyond generic descriptions of algorithmic functioning to case-specific explanations identifying which factors influenced particular decisions and how they were weighted. Technical explainability tools can support this obligation, but authorities must ensure that technical outputs are translated into accessible language that affected individuals can genuinely understand.
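Where a case-specific explanation is available as a list of weighted factors (for example, from a feature attribution tool), part of the translation into accessible language can be mechanical, as the sketch below illustrates. The factor names, weights and phrasing are assumptions for this sketch, and any real wording would need to be agreed with the service team and checked against the individual case.

```python
# Illustrative translation of a weighted-factor explanation into plain English.
# Factor names, weights and phrasing are assumptions for this sketch.

FACTOR_DESCRIPTIONS = {
    "missed_appointments": "the number of recently missed appointments",
    "household_income": "the household's declared income",
    "previous_referrals": "earlier referrals on the family's record",
}

def explain(factors, top_n=3):
    """factors: dict of factor name -> signed weight for one decision."""
    ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    lines = []
    for name, weight in ranked:
        direction = "increased" if weight > 0 else "decreased"
        label = FACTOR_DESCRIPTIONS.get(name, name)
        lines.append(f"{label} {direction} the assessed level of risk")
    return "The main factors in this assessment were: " + "; ".join(lines) + "."

print(explain({"missed_appointments": 0.42,
               "household_income": -0.15,
               "previous_referrals": 0.08}))
```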
When procuring AI systems, embed transparency requirements into contracts from the outset. Demand clauses that require suppliers to explain how the AI works, in plain English and within specified timeframes.
One of the most common obstacles legal teams encounter when procuring AI systems is suppliers claiming they cannot provide transparency about how their algorithms work because the technology is "proprietary" or "commercially confidential". This is often used as a blanket refusal to explain decision-making logic, provide access for audits, or demonstrate how the system reaches specific conclusions. Do not accept "it's proprietary" as an excuse for a lack of transparency. Transparency and commercial confidentiality can coexist; the crucial distinction is between what you legitimately need to understand and what suppliers legitimately want to protect.
What you legitimately need includes understanding how the AI reaches decisions, what factors it considers, how it weighs different inputs, what data it uses, and how it can be audited for fairness and accuracy.
What suppliers legitimately want to protect includes specific code, proprietary algorithms, training methodologies, and competitive advantages. These commercial interests can be safeguarded through appropriate confidentiality arrangements.
Preparing for UK AI Regulation
The UK's approach to AI regulation is distinctive: a principles-based, regulator-led model rather than the EU's comprehensive, prescriptive AI Act. The UK AI Regulation White Paper, published in March 2023, established five cross-sectoral principles applying to all AI deployments: safety, transparency, fairness, accountability, and contestability. Rather than creating a single AI regulator, the government has empowered existing sectoral regulators (the ICO, CMA, FCA, EHRC, Ofcom, MHRA and others) to apply AI governance within their existing frameworks. For local authorities, this means understanding which regulators' guidance applies to your specific AI use case and building compliance with these five principles into your specifications, evaluation criteria, and contracts.
The principles-based approach gives you flexibility to tailor AI governance to your specific context, but it also places responsibility on you to think critically about what these principles mean for your particular AI procurement and how you'll ensure they're embedded throughout the contract lifecycle. For local authorities, this suggests a risk-based approach in which AI systems affecting fundamental rights or public safety, such as those involved in social care decision-making, education assessments, or benefit determinations, face heightened regulatory requirements.
Critically, the public sector is expected to lead by example in ethical AI adoption. You're not just procuring AI for efficiency or cost savings; you're demonstrating to the wider economy what responsible AI deployment looks like. When you procure AI systems with robust bias testing, meaningful explainability, and genuine human oversight, you set standards that influence private sector practice. This creates both responsibility and opportunity: the responsibility to get AI deployment right, building in the safeguards, transparency, and accountability that the public expects; and the opportunity to drive innovation whilst maintaining trust, and to influence the development of AI governance frameworks through your procurement practice.
Preparatory steps that authorities can take now will ease the transition as regulatory requirements crystallise. Conducting an AI inventory represents an essential starting point, cataloguing all AI systems currently in use or under consideration, their purposes, risk levels, and compliance status. Many authorities lack comprehensive awareness of AI deployment across their organisations, with individual departments procuring systems without central oversight. Developing internal AI governance policies provides a framework that can be adapted as regulatory requirements evolve, establishing approval processes for AI procurement and deployment, mandating risk assessments and impact evaluations, and creating clear accountability structures.
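Even a lightweight, centrally held register goes a long way here. The sketch below shows one possible shape for inventory entries and a crude triage of high-risk systems missing core safeguards; the fields, risk tiers and example systems are illustrative assumptions rather than a mandated format.

```python
from datetime import date

# Illustrative AI inventory; fields, risk tiers and example systems are
# assumptions for this sketch, not a mandated format.

inventory = [
    {"system": "Housing repairs triage assistant", "department": "Housing",
     "purpose": "prioritise repair requests", "risk_tier": "high",
     "dpia_completed": False, "bias_testing_in_contract": True,
     "last_reviewed": date(2025, 1, 15)},
    {"system": "Bin collection route optimiser", "department": "Environment",
     "purpose": "plan collection rounds", "risk_tier": "low",
     "dpia_completed": True, "bias_testing_in_contract": False,
     "last_reviewed": date(2024, 11, 3)},
]

def needs_attention(entry):
    """Crude triage: high-risk systems missing core safeguards."""
    return entry["risk_tier"] == "high" and not (
        entry["dpia_completed"] and entry["bias_testing_in_contract"])

for entry in inventory:
    if needs_attention(entry):
        print(f"{entry['system']}: DPIA or contractual bias testing outstanding")
```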
The Government Digital Service has published detailed guidance on responsible AI in government procurement that provides valuable frameworks local authorities should consider. The Algorithmic Transparency Recording Standard also encourages public bodies to disclose their use of AI in decision-making, and whilst compliance is currently voluntary, there's increasing expectation that public bodies will adopt it.
Concluding thoughts
The path forward demands both caution and pragmatism. The challenges that AI presents to local government are not transient difficulties that will resolve as technology matures. Rather, they represent a fundamental shift in how public services are delivered and how authorities must approach governance and accountability. Building sustainable AI governance capabilities requires strategic investment and cultural change.
Legal teams should position themselves as AI governance leaders rather than reactive advisers. This means proactive engagement with AI initiatives from inception, participation in procurement decisions, and ongoing oversight of deployed systems. It requires developing collaborative relationships with IT departments, data protection officers, service delivery teams, and elected members. AI governance cannot be siloed within legal teams, but legal expertise must inform every stage of AI deployment.
Amardeep Gill is the National Head of Public Sector and Tayler-Mae Porter is an Associate in the Commercial team at Trowers & Hamlins