Google’s artificial intelligence systems are under constant siege. Not from bored teenagers in basements, but from state-sponsored actors and criminal syndicates wielding their own AI. This is the new arms race, and it is unfolding in the shadows of Google’s sprawling data centres.
Sources inside the company describe a relentless cat-and-mouse game. “Every day, we see attacks that learn and adapt faster than any human hacker could,” said a senior engineer who spoke on condition of anonymity. “They don’t just probe for weaknesses: they study our AI’s behaviour, find its blind spots, and exploit them.”
The targets are often Google’s machine learning models, used in everything from search to self-driving cars. Hackers have found ways to trick these models into misidentifying objects, bypassing security checks, or revealing private data. One class of attack, drawn from the research field known as adversarial machine learning, feeds the AI carefully crafted inputs, called adversarial examples, that look normal to humans but cause the system to malfunction.
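How such an input is crafted can be shown in a few lines of code. The sketch below uses the fast gradient sign method (FGSM), one of the earliest published adversarial-example techniques, written as a minimal Python illustration against a generic PyTorch image classifier; the function name, model and perturbation budget are placeholders for this article, not details of any real attack on Google’s systems.

    # Minimal FGSM sketch: nudge each pixel in the direction that most
    # increases the model's loss, keeping the change too small to see.
    # `model`, `image`, `label` and `epsilon` are illustrative placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Return a copy of `image` perturbed to fool `model`.

        The per-pixel change is bounded by `epsilon`, so the adversarial
        image looks unchanged to a human observer.
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step every pixel in the direction that most increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

A perturbation of three per cent per pixel is invisible on screen, yet it is often enough to flip an undefended classifier’s answer.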
Google’s response is a secretive project codenamed “Redwood”, according to internal documents seen by The British Wire. Redwood is a dedicated team of engineers and researchers tasked with building an AI that can detect and defend against these new forms of attack in real time. It is an AI built to fight AI.
“Redwood is our immune system,” a project manager told us. “But viruses evolve. So must we.”
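What an immune system of this kind might look like can be sketched, at its very simplest, as a consistency check: adversarial inputs sit near a decision boundary by construction, so their labels tend to flip under tiny random noise, while benign inputs hold steady. The Python sketch below illustrates that one published detection idea; Redwood’s actual methods remain secret, and every name and threshold here is hypothetical.

    import torch

    def looks_adversarial(model, image, trials=10, sigma=0.05, threshold=0.7):
        """Flag inputs whose predicted label is unstable under noise.

        `sigma` and `threshold` are illustrative values, not anything
        drawn from Google's systems.
        """
        with torch.no_grad():
            base = model(image).argmax(dim=1)
            agreements = []
            for _ in range(trials):
                # Re-classify the image under small random perturbations.
                noisy = image + sigma * torch.randn_like(image)
                agreements.append(
                    (model(noisy).argmax(dim=1) == base).float().mean().item()
                )
        # Unstable predictions suggest the input was engineered to sit
        # right on a decision boundary.
        return sum(agreements) / trials < threshold

Production defences are vastly more sophisticated, but the underlying bet is the same: an attack that fools a model once should not survive being asked the same question a dozen slightly different ways.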
The scope of the threat is staggering. Google processes billions of queries each day. Each query is a potential attack vector. The company’s AI models are some of the most valuable intellectual property in the world. Corrupting them could mean leaking user data, manipulating search results, or even causing physical harm if a hack targets autonomous vehicles.
The stakes are existential. “If we lose this battle, the trust in all AI collapses,” said Dr. Elena Markov, a former Google AI ethicist now at the University of Cambridge. “And that trust is already fragile.”
Google is not alone in this fight. Microsoft, Amazon, and Facebook have all built similar defensive AI systems. But Google’s scale makes it uniquely vulnerable and uniquely powerful. The company’s vast data centres allow it to train its defensive models on more attack patterns than any competitor.
Critics argue that Google’s approach is too opaque. “They are building a black box to protect other black boxes,” said Sameer Jindal, a security researcher at the Electronic Frontier Foundation. “We need transparency. We need oversight. Otherwise, we are sleepwalking into a future where AI runs our lives and we have no idea how it is being manipulated.”
Google insists that secrecy is necessary. “If we reveal our defences, we give attackers a road map,” the project manager said. “We are playing chess, not checkers.”
But the arms race shows no signs of slowing. Last month, a state-sponsored group compromised a Google AI model used for image recognition, attacking through a popular photo-editing app. The poisoned model began misclassifying images, unable to tell a cat from a dog. Worse, it started returning false matches in a police facial recognition system built on Google’s cloud AI.
Google patched the vulnerability within 48 hours. But the episode exposed a chilling reality: no system is immune.
The race is accelerating. In 2023, Google filed more than 200 patents related to AI security. That is triple the number from two years earlier. The company is hiring PhDs in adversarial machine learning as fast as it can find them.
Yet the attackers are also investing. AI hacking tools are now sold on the dark web. Some are sophisticated enough to adapt their strategies mid-attack. “It is the wild west out there,” said the engineer. “And everyone is armed.”
Google’s fight is our fight. Every time we use Google Search, every time we upload a photo, we rely on the integrity of these models. If they fall, our digital world fractures. The new arms race is quiet, invisible, and relentless. And it is only just beginning.