Five Takes News
By Michael

Science
Published on Tuesday, May 5, 2026, at 02:09 PM
State Expands Grip on AI Before Public Release

The government is deepening its oversight of cutting-edge AI, signing new agreements with Google DeepMind, Microsoft and xAI to test powerful models before they reach the public, according to a Commerce Department announcement. The deals give the state a bigger hand in evaluating systems built by some of the most powerful companies in tech, with government assessments now stretching from pre-deployment review to post-deployment checks and related research.

Who Gets to Judge the Machines

Commerce said the Center for AI Standards and Innovation (CAISI) will conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security. In plain terms, the apparatus is inserting itself into the pipeline before release, not after harm is done. The announcement makes clear that the government is not just watching from the sidelines; it is building a formal role in deciding how these models are measured and when they are considered fit for public use.

CAISI director Chris Fall framed the expansion in the language of expertise and public interest, saying, "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications." He added, "These expanded industry collaborations help us scale our work in the public interest at a critical moment." The quote lays out the logic of the arrangement: state oversight, corporate collaboration, and national security all folded together into one managed process.

Corporate Power, Government Access

The new agreements involve Google DeepMind, Microsoft and xAI, three major players whose models are now subject to government evaluation before public release. The arrangement gives the state access to the products of private firms while those firms retain their central role in building and deploying the systems. In the same breath, the announcement says the government will also carry out post-deployment assessments and related research, extending oversight beyond launch and into the life of the model.

A spokesperson said previously announced partnerships with Anthropic and OpenAI, first launched in 2024, are ongoing and reflect updated memoranda of understanding. Those deals have been renegotiated to reflect directives from CAISI and the Commerce secretary, as well as President Trump's AI Action Plan. The language shows a familiar hierarchy at work: corporate partnerships, rewritten agreements, and top-down directives shaping the terms under which powerful technology is tested and released.

What They Call Oversight

The announcement comes a day after reports that the Trump administration is considering increased oversight of AI models via potential executive action on cybersecurity and pre-clearance of new models. That means more control from above, with executive power potentially used to tighten the gate before new systems can move into public circulation. The same machinery that claims to manage risk is also the machinery deciding what gets built, tested and approved.

Under the Biden administration, a 2023 executive order established the AI Safety Institute, which was renamed under the Trump administration. Axios previously reported that CAISI underwent significant changes at the beginning of Trump's term and was expected to pivot from AI safety to AI acceleration. But the institute has continued conducting AI testing and evaluations, publishing an evaluation of China's DeepSeek and soliciting comment on the secure deployment of AI agents. The shifting name and mission show how quickly the language of safety, acceleration and security can be repackaged while the institution itself remains in place.

Fall was recently named director of CAISI after former Anthropic staffer Collin Burns was reportedly pushed out just four days into the job. That brief tenure underscores how unstable even the personnel inside these institutions can be when priorities at the top shift. The public is told this is all being done in its interest, while the actual decisions remain concentrated in the hands of government officials and corporate executives managing the terms of access, testing and release.
