The Metropolitan Police have launched a controversial trial of AI-powered surveillance cameras across London’s West End. The system, which began operation this week, uses real-time facial recognition and behaviour analysis to identify potential threats. Critics call it a step towards a surveillance state.
Shoppers and commuters in Soho, Covent Garden and Piccadilly Circus are now under constant watch. The cameras, mounted on lampposts and buildings, are linked to a central control room at New Scotland Yard. Police say the technology will help prevent crime and terrorism.
“This is about keeping Londoners safe,” said Commander Adrian Usher, head of the Met’s Counter Terrorism Command. “The system can spot suspicious behaviour patterns that a human operator might miss. It is not about spying on law-abiding citizens.”
The trial is funded by a £5 million grant from the Home Office. It is part of the government’s 2026 “Safe City” initiative. Similar trials are planned for Manchester and Birmingham later this year.
But civil liberties groups are alarmed. The Big Brother Watch campaign has called the trial “an unprecedented intrusion into public life”. Director Silkie Carlo said: “Facial recognition technology is unreliable and racist. It misidentifies women and people of colour at higher rates. This is a dangerous experiment with our rights.”
Research from the University of Essex supports these concerns. A 2024 study found that commercial facial recognition systems had error rates of up to 34% for black women, compared to 1% for white men. The Met insists its system is more accurate, but has not published independent test results.
The surveillance system goes beyond facial recognition. It also uses artificial intelligence to analyse body language and crowd movements. Software can flag individuals who walk with a “furtive gait” or linger in certain areas. Police say these alerts help officers intervene before crimes occur.
Legal experts question whether the trial complies with the UK’s data protection laws. The Information Commissioner’s Office (ICO) is monitoring the scheme. A spokesperson said: “We are in discussions with the Met about ensuring their systems are lawful and transparent.”
Under the Data Protection Act 2018, the use of biometric data requires explicit consent or a clear legal basis. The Met argues that the trial falls under its duty to prevent crime. Critics say this is a loophole.
London’s West End is already one of the most heavily surveilled areas in Britain. There are over 600 public cameras in the district, operated by the police and the local council. The new AI cameras add another layer of scrutiny.
Some residents welcome the move. “I feel safer knowing the police are watching,” said Julie Brennan, a shop assistant in Covent Garden. “There is so much pickpocketing and phone snatching around here.”
Others are less convinced. “It’s creepy,” said Tom Harding, a musician who performs in Leicester Square. “They are watching everyone, not just criminals. What if they make a mistake?”
That question is central to the debate. The Met acknowledges that false positives are possible. Officers receive alerts from the system and must use their judgment. But there is little public oversight. The trial’s results will not be published until 2027.
Home Secretary Yvette Cooper has defended the scheme. “We must use every tool available to keep the public safe,” she told Parliament last week. “This technology is the future of policing. We cannot afford to fall behind.”
Critics in Parliament are not convinced. Diane Abbott, the veteran Labour MP and former shadow home secretary, called for a halt to the trial. “We are sleepwalking into a surveillance state,” she said. “The Met should focus on community policing, not spying on innocent people.”
The trial is set to run for 12 months. If deemed successful, the Met plans to expand it to other parts of London. For now, everyone in the West End is being watched. Whether they know it or not.