
Who Gets to Set the Rules
Executives and engineers in Silicon Valley said this week that AI agents remain difficult and costly to deploy at scale, even as companies keep selling them as the next great leap in artificial intelligence. At the same time, the White House is pursuing a policy effort to identify vulnerabilities in AI models from major providers, including Anthropic and OpenAI, before those models are released, amid rising concern about AI-enabled fraud and security threats. The people building and selling the systems are still struggling to make them work, while the state moves in to inspect the machinery before it reaches the public.
At the Generative AI and Agentic AI Summit in San Jose, Kevin McGrath, the CEO of the AI startup Meibel, said the biggest problem in AI right now is the mistaken idea that everything needs to be processed by a large language model, or LLM. "Just give all of your tokens and all of your money to an AI Claw bot that will just waste millions and millions of tokens," he said, adding that companies need to be more deliberate about which tasks are best suited for AI agents. The line lands like a confession from inside the industry: the hype machine burns through money and compute while pretending that every problem can be automated.
The Costs Beneath the Hype
Nvidia CEO Jensen Huang told CNBC's Jim Cramer in March that AI agents are "definitely the next ChatGPT." But technical staff from companies including Google and its DeepMind AI unit, Amazon, Microsoft and Meta said creating and operating AI agents is not easy. Google software engineer Deep Shah said new techniques are being developed to manage the operational costs of running large numbers of AI agents. "If you think of a machine learning system or any multi-agent system, there are multiple challenges you will find when you try to deploy that system at scale," Shah said. "The first one is the inference cost."
Ravi Bulusu, CEO of the startup Synchtron, said the complexity of AI agents touches the ways companies organize data, choose tech platforms, and build and run software and workforces. Because of that, he said, "No single dimension is solved in isolation and the interdependencies are what make this hard, in fact chaotic even." In other words, the problem is not just technical; it is built into the corporate systems trying to force these tools into every corner of work.
At another AI event in Mountain View, Calif., ThinkingAI and MiniMax, both headquartered in Shanghai, China, discussed the complexity of AI agent management. ThinkingAI recently rebranded as an AI agent management platform after starting as a mobile game analytics company known as ThinkingData. It partnered with MiniMax, one of China's leading AI labs and one of the country's so-called "AI Tigers"; MiniMax went public in Hong Kong in January and has released powerful models for free to the open-source community.
Enterprise Dreams, Security Fears
ThinkingAI co-founder Chris Han said the company is trying to expand from the video game sector to other industries that are interested in AI agents but lack the expertise. He said OpenClaw is too complicated and too prone to security flaws for businesses. "OpenClaw is a good tool for personal things, but definitely cannot reach the enterprise level," Han said. "In terms of the enterprise level, you have to figure out a lot of things, your memory, how to manage your agents, teams, communications; there are a lot of things you have to figure out."
Han declined to comment on any possible national security concerns over Chinese AI models that might affect ThinkingAI's strategy, but said the service can also support AI models from companies like OpenAI and Google. He added that if the U.S. government were to ban Chinese open-weight AI models in the country, he might take that as a positive sign. "If that happens, maybe we are successful," Han said. The remark shows how quickly these platforms get folded into geopolitical competition, with workers and users left to live inside the fallout of decisions made far above them.
The White House effort comes as concerns grow about AI-enabled scams targeting older Americans and other users, and as policymakers look for ways to reduce risks before new models reach the public. That means the same institutions that helped create the rush are now trying to manage the damage after the fact, with the public expected to absorb the risks either way.