Who Gets the Power
The Pentagon said Friday that it has reached deals with seven tech companies to run their artificial intelligence on its classified computer networks, giving the military AI-powered capabilities for fighting wars. The arrangement puts more machine power inside the war apparatus, while the people who will live under its consequences have no say in the deal.
In a Washington Post newsletter, Dean Ball, a former senior adviser on AI policy in President Donald Trump’s administration, described the moment as part of a fundamental shift underway in U.S. AI policy. The language is polished, but the direction is plain: more state and corporate integration, more investment, more control.
The Public Gets the Bill, the Institutions Get the Tools
Politico reported that the world is adopting artificial intelligence so rapidly that no one can measure AI’s capabilities, assess its progress or even understand what is training AI models. Those themes emerged from the newly released Stanford AI Index Report, an annual report card on the technology’s speed, scale and influence, now in its ninth year. The report found that people are adopting generative AI tools faster than they adopted the internet.
The same report found public resentment toward AI boiling over in America, with only 30 percent of Americans trusting their government to regulate the rapidly scaling tech, the lowest rating among all 30 countries surveyed. That gap matters because the institutions claiming to manage the technology are the same ones pushing it deeper into military, corporate, and bureaucratic life.
Sha Sajadieh, Stanford’s AI Index lead, said, “Where there are high levels of adoption and enthusiasm, there also seems to be a high level of trust that their governments will protect them and regulate this technology effectively,” and added, “In the U.S., not only is there not as much enthusiasm or adoption [of AI], but there’s not as much trust in the government to regulate it in a way that might protect the public.”
Black Boxes, Big Budgets, and No Public Control
Forecast said it spoke with Sajadieh, who oversees the report’s development and the steering committee that assembles it, about the year’s biggest trends in AI. Asked why institutions have struggled to keep pace, Sajadieh said, “This is a personal hypothesis. There’s no data around this in the index. But our institutions weren’t necessarily built for technological transformation that happens this fast.” He added, “That’s moving faster than anything before, and I think that’s what we’ll see in the years to come, technology gets adopted quicker and quicker. And our education system, our governance policy, all of those things are not designed to keep up as fast.”
Asked why America is aggressively developing and investing in AI even as the research shows a divergence between policymakers and the general public, he said the coverage tends to fall into two camps: “the hype headlines” and the headlines about “mass displacement” and “the disruption that’s going to be very bad for humanity.” He added, “Where there isn’t enough objective information being put out there for the U.S. public, it leads to this void — folks turn to what’s easy to digest, but may not be wholly reliable.”
On what data would be needed, he said, “For different regulations, it looks different, right? Whether it’s how AI is deployed in hospitals, or in school systems. But first and foremost, transparency from the frontier labs is the most important thing. So, how are these models being developed? What are they being trained off of? A number of parameters are not being disclosed. Over the years, it’s becoming more and more of a black box.”
The Rival Powers All Want In
On China’s AI output, Sajadieh said, “It’s too early to say that. The U.S. and China lead in very different ways, and have competitive advantages in different ways.” He said the U.S. leads in private investment and that “no one pours more dollars into AI companies than the U.S.” He added that China has “a lot of government funding or public-private investment funding.”
He also said, “We can’t really compare the two countries and say that one is winning against the other. What’s interesting to see is that — despite building advantages in different directions — the models they’re putting out, their capabilities are converging. There’s lessons that can be shared between the two.” He said, “China has a more open ecosystem than the U.S. does. That’s potentially something we could look at developing. … There’s a lot of different opportunities for these two countries at the cutting edge to learn from each other.”
On smaller countries, he said South Korea seems particularly well-positioned to develop highly capable models and maintain its talent pipeline, citing its focus on AI sovereignty as a key part of national objectives. He also named Singapore and, in Europe, Switzerland, and said, “And other countries are starting to participate in this, more and more than we’ve ever seen before. Next year, maybe there will be more along this middle spectrum that we’ll be able to talk about, [countries] that are building their advantages in different verticals. Maybe Germany will pick up AI in the automotive space and really lead there.”
The result is a world where militaries, governments, and tech firms are racing to lock down AI inside their own systems while the public is told to trust institutions that admit they cannot keep up, cannot see inside the models, and cannot clearly explain what is being trained or why.