As President Donald Trump prepares to travel to a summit in China, his administration is sharply split over a plan to give U.S. intelligence agencies a bigger role in evaluating AI models, according to two people familiar with the matter, who spoke on the condition of anonymity to discuss a proposal that is not yet public. The fight is over who gets to police a technology that ordinary people will have to live with, even as the machinery of state power argues over how much more control it should claim.
Who Has the Power
The White House is grappling with cybersecurity threats from advanced artificial intelligence models, and national security officials want more sway in AI regulation. That means the same institutions that already command surveillance, secrecy, and force are pushing for a larger hand in deciding how AI gets judged and controlled. David Sacks, former White House AI czar, remains active in discussions about how the administration should respond to AI advances.
The proposal remains under wraps, but the split inside the administration shows a familiar pattern: decisions about systems that affect everyone are being handled behind closed doors by officials and advisers, with the public left outside the room. The reporting describes an administration sharply divided over the plan, yet the people most exposed to its consequences are not the ones shaping the rules.
Testing the Machines, Then Scrubbing the Record
U.S. officials also sought to stress-test the security of AI systems from Microsoft, Google and xAI, looking for threats ranging from cyberattacks to military misuse in systems built by some of the biggest corporate players in the field. Details of that testing were later deleted from a government website, and the reason for the deletion was not clear.
The deletion matters because it adds another layer of institutional opacity. Officials can test, publish, and erase, while the public is left with a disappearing record and no explanation. The apparatus wants the authority to inspect AI systems, but not necessarily the accountability that should come with that scrutiny.
Corporate Systems, Criminal Use, and the Cost Below
Google said criminal hackers used AI to locate a major software flaw, underscoring the company’s view that AI can both improve cybersecurity and introduce new risks. In the company’s framing, the same tools that promise protection can also widen the field of harm. Google added that the field is still in its early stages and that building safer code will require ongoing effort.
That statement places the burden of “ongoing effort” on the same corporate sector that builds and deploys the systems in the first place. The people who will absorb the fallout from software flaws, cyberattacks, and military misuse are not the executives or national security officials debating oversight. They are the users, workers, and communities left to deal with the consequences of systems designed and governed from above.
The administration’s internal split, the stress-testing of corporate AI systems, and the deletion of those testing details all point to a familiar hierarchy: state agencies seeking more leverage, companies warning about risks while continuing to build, and the public expected to trust both. Nothing in the reporting points to a grassroots response or mutual aid effort, only top-down management of a technology whose dangers are already acknowledged by the very institutions profiting from it.