In a stark warning issued today, Google's Threat Analysis Group (TAG) released its annual report detailing the alarming proliferation of artificial intelligence in cyberattacks. The report, titled 'AI-Powered Threats 2025,' describes a paradigm shift where state-sponsored and criminal hacking groups have industrialised their operations using generative AI, marking a 'new era of threat sophistication.'
According to the report, the volume of AI-assisted cyberattacks increased by 450% year-over-year, with adversaries leveraging large language models (LLMs) to automate reconnaissance, craft highly credible phishing emails, and even generate polymorphic malware that evades traditional detection. 'We are witnessing the commoditisation of hacking,' said Shane Huntley, director of TAG. 'What previously required months of manual effort can now be executed in hours with AI.'
The report identifies three primary vectors for AI-powered attacks. First, advanced social engineering: AI-generated deepfakes and voice cloning have been used to impersonate C-suite executives, leading to fraudulent wire transfers totalling over $100 million in known cases. Second, automated vulnerability discovery: machine learning models operated by adversaries scan codebases for exploitable flaws at unprecedented speed, accelerating the hunt for zero-days. Third, AI-generated disinformation campaigns, in which models produce targeted propaganda at scale, influencing public opinion in geopolitical conflicts.
Google's findings corroborate earlier warnings from the US Cybersecurity and Infrastructure Security Agency (CISA), which last month issued a directive urging federal agencies to adopt 'AI-resilient' authentication measures. The report also notes that adversaries are increasingly using publicly available AI tools, such as modified versions of open-source LLMs, to lower the barrier to entry for less skilled hackers. 'The democratisation of AI is a double-edged sword,' Huntley added. 'While it empowers defenders, it also empowers attackers.'
In response, Google has accelerated its own AI defences. The company announced a suite of new security tools, including 'SynthID for Threat Hunting,' which embeds invisible watermarks in AI-generated content to trace its origin, and an expansion of its 'Project Zero' team to focus on AI system vulnerabilities. 'We are in an arms race, and we must stay ahead,' said Sundar Pichai, CEO of Alphabet, in a statement.
The report concludes with a call for international collaboration on AI security standards, urging governments to invest in 'AI-aware' workforce training and to establish norms against the weaponisation of AI. As the line between human and machine-generated attacks blurs, the report serves as a clarion call for the tech industry: the era of industrial-scale AI hacking has arrived, and the window to build resilient defences is narrowing.







