
Google said it has identified what may be the first known case of cybercriminals using AI to discover and weaponize a zero-day vulnerability. The company's threat intelligence group said in a report Monday that it found evidence of several "prominent cyber crime threat actors" partnering to identify a bug in a Python script that would let them bypass two-factor authentication on a popular open-source system. The groups, which Google did not name, then used AI-assisted code to weaponize the flaw, according to the report.
Who Gets Hit First
The target in this story is not the people building the tools of control, but the users left exposed when those tools fail. Google said the attempt to exploit the unidentified open-source system was thwarted, and that it has since disclosed the flaw to the vendor. That is the basic pattern of the digital hierarchy: a vulnerability exists, attackers probe it, and the people relying on the system are left waiting for the next patch, the next disclosure, the next scramble to stay ahead of the damage.
Google based its assessment on hallmarks of AI-generated code: overly explanatory comments, a fabricated severity rating for the bug, and coding patterns typical of AI-generated Python scripts. The company's own description shows how the machinery of AI is already being folded into offensive use, with code that carries the fingerprints of automation while being aimed at bypassing protections that ordinary users are told to trust.
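To make the idea concrete, here is a minimal sketch of how such telltale signals might be mechanized. The patterns and thresholds below are illustrative assumptions for the sake of the example, not Google's actual detection criteria, which the report does not disclose.

```python
import re

# Illustrative signals only -- not Google's actual methodology.
AI_STYLE_PATTERNS = [
    r"#\s*Step \d+:",                    # numbered walkthrough comments
    r"#\s*This (function|script|code)",  # self-narrating comments
    r"CVSS[\s:]",                        # invented severity ratings
]

def comment_density(source: str) -> float:
    """Fraction of non-blank lines that are comments."""
    lines = [l.strip() for l in source.splitlines() if l.strip()]
    if not lines:
        return 0.0
    return sum(l.startswith("#") for l in lines) / len(lines)

def ai_fingerprint_score(source: str) -> float:
    """Crude score in [0, 1]: higher means more AI-style hallmarks."""
    score = 0.0
    if comment_density(source) > 0.4:  # unusually chatty for exploit code
        score += 0.5
    hits = sum(bool(re.search(p, source)) for p in AI_STYLE_PATTERNS)
    score += min(hits * 0.25, 0.5)
    return min(score, 1.0)
```

A real classifier would weigh many more signals, but the principle is the same: exploit code written by hand rarely narrates itself, while model output often does.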
The Arms Race Is Already Here
John Hultquist, chief analyst at Google's threat intelligence group, said, "There's a misconception that the AI vulnerability race is imminent. The reality is that it's already begun." He added, "For every zero-day we can trace back to AI, there are probably many more out there." Those lines strip away the comforting fiction that this is some distant future problem. The race is not coming; it is underway, and the people with the most power over the tools are also the ones trying to weaponize them.
Google said advanced AI models are getting better at finding subtle security weaknesses in software that conventional cybersecurity tools often fail to catch. In the zero-day example, the model appeared to identify a hidden trust assumption in the software's login logic that could be exploited to bypass two-factor authentication protections. That is the kind of quiet failure that makes the whole security apparatus look less like a shield and more like a brittle layer of trust assumptions waiting to be gamed.
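The report does not describe the actual flaw, but the general shape of a "hidden trust assumption" in login logic is well understood. A hypothetical sketch: the server treats a client-supplied "remember this device" flag as proof that 2FA already succeeded, when nothing stops an attacker from simply asserting it. All names here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Session:
    user: str
    password_ok: bool = False
    totp_ok: bool = False

def login_vulnerable(session: Session, remember_device: bool) -> bool:
    # HIDDEN TRUST ASSUMPTION: remember_device is treated as evidence
    # that 2FA passed on an earlier login -- but it arrives from the
    # client, so an attacker can just send remember_device=True.
    if not session.password_ok:
        return False
    if remember_device:  # client-controlled shortcut around 2FA
        return True
    return session.totp_ok

def login_fixed(session: Session, device_token: str,
                trusted_tokens: set) -> bool:
    # Fix: only a server-issued token, checked against server-side
    # state, may skip the second factor.
    if not session.password_ok:
        return False
    if device_token in trusted_tokens:
        return True
    return session.totp_ok
```

The vulnerable version is exactly the kind of subtle logic flaw a model scanning source code can surface: no memory corruption, no malformed input, just a branch that trusts the wrong party.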
States, Spies and the Corporate Gatekeepers
Google said the AI-assisted exploit was one of several cases it uncovered in recent months highlighting growing interest among both cybercriminals and nation-state hackers in using AI to supercharge attacks. The report said North Korean and Chinese state actors are experimenting with AI in a variety of ways to exploit vulnerabilities. In one case, researchers found APT45, a North Korean military group, using AI to test and validate thousands of exploits targeting software flaws.
That detail matters because it shows the same old hierarchy wearing a new mask. State actors are not standing outside the system; they are using the same AI tools to intensify the same logic of surveillance, intrusion and control. The report also said Google uncovered malware, dubbed PromptSpy, that uses Gemini to autonomously navigate Android devices by interpreting on-screen activity and generating commands in real time. The machine is not just assisting humans anymore; in this case, it is being used to move through devices on its own terms.
U.S. AI companies are increasingly grappling with how to prevent their more sophisticated AI models from being abused by cybercriminals and state-backed hackers. The article leaves that struggle exactly where it usually lands: inside corporate risk management, with the public expected to live under the consequences while the companies and state actors fight over who gets to control the machinery.