Predictive policing tools are proliferating in the UK. For example, the US firm Palantir has quietly won contracts to link police databases across multiple forces in the East of England (the 'Nectar' project), promising a 'single, unified view' of suspects and even flagging people 'about to commit a criminal offence'. Bedfordshire police has boasted of being 'the first county in Britain to be policed by AI'. Civil liberties groups warn these systems are not neutral: Liberty's investigation found at least 14 forces using data-driven crime prediction programs, painting a picture of discriminatory programs with no clear line of responsibility.
In practice, private algorithms operate in secret, applying opaque, hidden rules in place of individual suspicion. This outsourcing of 'reasonable suspicion' creates a policing regime that is harder to audit, more intrusive on communities, and inherently racially biased.
The accountability gap: Operating in a regulatory vacuum
Traditional stop and search practices are subject to oversight:
- PACE Code A (2023) – under the Police and Criminal Evidence Act, officers must have 'reasonable grounds for suspicion' before searching. Paragraph 2.4A of Code A requires those grounds to be based on 'accurate and current intelligence or information', and officers must record every stop and search and its outcome.
- Best Use of Stop and Search Scheme (voluntary) – introduced by the Home Secretary in 2014 and adopted by many forces, this framework demands better data recording, opportunities for community members to observe stop and search on patrol, and stricter control of no-suspicion Section 60 stops, with the aim of 'greater transparency' and community input on how stop and search is used.
- Independent oversight (IPCC/IOPC) – if a stop and search is misused, individuals can complain to the force or to the Independent Office for Police Conduct (which replaced the IPCC). Misconduct investigations can follow, and repeated abuse can be reported to parliament.
Yet private tech tools slip through these rules. Companies like Palantir or PredPol are not public bodies and have no legal duty to be transparent; there is no statutory requirement to publish their algorithms or data. The senior MP David Davis noted that with the Palantir project, police are adopting powerful new tools 'without the necessary statutory underpinning', effectively 'appropriating the powers they want' beyond what the law authorises.
The public simply has no way of knowing how these algorithms are programmed or how they reach their conclusions. A private risk score can lead to a search, yet none of PACE's accountability follows: no independent audit, no public review, and no legal remedy if someone is wrongfully targeted by a black-box system.
The effect is a loophole that allows the government to sidestep its own safeguards, leaving an absence of due process and responsibility: police can now point to AI 'risk maps' rather than to individual grounds for suspicion.
Supercharging bias: The algorithmic ‘grounds’ for a stop
Predictive tools rely on past police data, a classic 'garbage in, gospel out' problem. UK stop and search statistics show stark racial bias: the latest Home Office figures report that Black people were 4.1 times more likely to be stopped and searched than white people in 2022/23.
StopWatch has highlighted even higher disparities in recent years, with Black Londoners at one point 7.0 times more likely to be stopped than white Londoners. When over three quarters of those stops lead to no further action and only 11% lead to arrest, the criteria look broad rather than genuinely intelligence-led.
When biased data is fed into a predictive system, it doesn't find new criminals; it reinforces old prejudices and directs police back to already over-policed areas. In practice, a tool might mark a neighbourhood or person as 'high risk' merely because they live in, or walk through, areas that were heavily targeted before. This creates a veil of technological justification for searches lacking any individualised suspicion.
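To make that feedback loop concrete, here is a deliberately simplified sketch in Python. Everything in it is invented for illustration (two areas, a fixed 'find' rate, a predictor that allocates patrols in proportion to past stop records); it is not any force's or vendor's actual system. Because patrols generate new records wherever they are sent, the historical imbalance between the two areas is reproduced week after week and never self-corrects, even though the underlying 'find' rate is identical in both.

```python
# Illustrative toy model only -- invented numbers, not real police data.
# Two areas share the same true 'find' rate, but area A starts with more
# historical stop records because it was more heavily policed in the past.
# The 'predictor' sends patrols wherever past records are densest, and every
# patrol logs a new record, so the old imbalance keeps confirming itself.
import random

random.seed(1)

TRUE_FIND_RATE = 0.20                 # identical in both areas by construction
records = {"A": 120, "B": 40}         # area A starts over-represented 3:1
TOTAL_PATROLS = 100

def allocate(records, total):
    """Split patrols in proportion to historical stop records (the 'prediction')."""
    whole = sum(records.values())
    return {area: round(total * count / whole) for area, count in records.items()}

for week in range(1, 9):
    patrols = allocate(records, TOTAL_PATROLS)
    finds = {}
    for area, n in patrols.items():
        records[area] += n            # every stop is logged, feeding next week's 'prediction'
        finds[area] = sum(random.random() < TRUE_FIND_RATE for _ in range(n))
    print(f"week {week}: patrols A/B = {patrols['A']}/{patrols['B']}, "
          f"finds A/B = {finds['A']}/{finds['B']}, "
          f"records A/B = {records['A']}/{records['B']}")
```

The point of the sketch is that area A ends every week looking roughly three times 'riskier' in the data purely because it receives three times the attention; nothing in the loop ever asks whether the original skew was justified.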
Avon & Somerset's individual risk-scoring system has already produced deeply troubling, dystopian accounts. One man reports being stopped over 50 times for trivial reasons after a sticker on a lamppost led to him being flagged as 'high risk'.
The 2025 Notting Hill carnival: A preview of predictive policing
The Metropolitan police confirmed 423 arrests over the two main days of Notting Hill carnival 2025, with only two non-life-threatening stabbings reported.
Assistant commissioner Matt Ward said the force had used a 'proactive' strategy, deploying live facial recognition (LFR) cameras and knife-arch scanners at entrances, along with broad and opaque stop and search powers.
Officers were specifically authorised to invoke section 60 of the Criminal Justice and Public Order Act 1994, which permits searches for weapons without reasonable suspicion.
Police noted that 52 of the arrests were directly triggered by LFR identifications, and they credited the arrests with helping prevent violence at the event. These figures show how surveillance technology has been woven into routine stop and search practice at the carnival.
Under LFR, an algorithmic 'match' or alert serves as the basis for a search, replacing the independent reasonable grounds the law normally requires and effectively inverting the usual safeguards. This shifts policing from an 'innocent until proven guilty' stance to one where everyone is treated as a potential criminal. As one civil rights campaigner memorably put it, LFR is 'stop and search on steroids'.
Campaigners warn that LFR-driven stops amount to a form of outsourced, algorithmic policing that erodes the legal norm of reasonable suspicion and risks deepening the very abuses stop and search rules aim to limit. It follows a pattern long observed at the Notting Hill carnival, where stops have disproportionately targeted Afro-Caribbean attendees.
To prevent further rights abuses, parliament must urgently regulate predictive policing: require full transparency about security contracts, ban biased stop and search data from being used to train algorithms, mandate independent and public audits of any policing software, and clarify in law that an algorithmic score can never constitute 'reasonable suspicion' for a stop under PACE.
The 2025 carnival shows what happens without these safeguards: police suspicion is outsourced to secret code, Black Britons are disproportionately targeted, and transparency is sacrificed. As Amnesty's Sacha Deshmukh warned, 'the evidence it keeps us safe isn't there; the evidence it violates rights is clear as day.'
Public safety demands public accountability, not privatised suspicion: the state cannot outsource our justice to private code. Without accountability, where will scrutiny go?
By Thomas Dixon, StopWatch volunteer
All blogposts are published with the permission of the author. The views expressed are solely the author’s own and do not necessarily represent the views of StopWatch UK.