The White House’s approach to artificial intelligence policy is under internal strain as chief of staff Susie Wiles appeared to walk back comments from an administration official that the government would regulate AI models “just like an FDA drug.” The fight is not happening in public for ordinary people to shape; it is a tug-of-war inside the machinery of power as officials weigh how far to go in regulating the technology.
Who Has the Power
The debate comes as AI is being treated as more than a technology issue. At SCSP’s AI+ Expo, leaders discussed how AI fundamentally changes the workforce, warfare, intelligence gathering, and supply chains. That is the language of institutions that manage labor, surveillance, and logistics from above while everyone else lives with the consequences below.
The Washington Post has reported that a new generation of powerful artificial intelligence models, including Anthropic’s Mythos, has begun to crack the White House’s hard-line stance on promoting the technology. Top officials are confronting security risks posed by tools that can easily find flaws long buried in computer code. The same systems being sold as progress are exposing the brittle seams in the digital infrastructure the powerful depend on.
Who Gets Crushed
The speed at which systems like Mythos surface hidden security flaws is prompting Trump administration officials to rethink their hands-off approach. But the people left to absorb the fallout are not the officials debating policy; they are the workers, users, and communities living inside systems built on code they did not write and cannot control. The framing is all about regulation, yet the real issue is who gets to decide how these systems are deployed, who profits from them, and who bears the risk when they fail.
What They Call “Order”
President Donald Trump signed an executive order in December targeting state artificial intelligence regulations the administration considers onerous. The order is part of the same top-down contest over who sets the rules: state governments, the White House, and the corporate interests orbiting the technology. The public gets the spectacle of regulation fights while the apparatus keeps deciding the terms.
Wiles’s apparent reversal of the “just like an FDA drug” line shows how unstable the official position is when the technology reveals security risks faster than the people in charge can keep up.
The debate is not confined to one office. It is part of a broader scramble over AI’s role in warfare, intelligence gathering, supply chains, and the workforce. Those are the domains where centralized power reaches deepest into everyday life, and where decisions made at the top are handed down as inevitabilities to everyone else.
The institutions that sell innovation are now forced into a defensive posture as the tools they celebrate uncover flaws buried in their own computer code, undermining the security assumptions they rest on. The result is not liberation, but another round of managed adjustment inside the same hierarchy.
The administration’s internal strain over AI policy shows a familiar pattern: officials debate how to contain a force they helped unleash, while the rest of society is expected to live with the consequences. The language changes, the executive orders shift, and the power structure remains where it is.