Five Takes News - Multi-Perspective AI News Aggregator
Michael • © 2026

technology
Published on Sunday, April 19, 2026 at 09:10 PM
Private AI Gatekeepers Hoard Cyber Power

Canadian computer scientist Yoshua Bengio says the biggest problem with Anthropic’s restricted release of Claude Mythos is that a single private company gets to decide who can protect their infrastructure from emerging cyber risks. The model, capable of identifying thousands of previously unknown “zero-day” vulnerabilities, is being shared selectively with a small group of primarily US-based companies and government entities, leaving large parts of the global ecosystem outside the circle of access.

Who Gets Access, Who Gets Left Out

Bengio, known for his foundational contributions to deep learning and often described as one of the “godfathers of AI”, told Fortune: “It doesn’t make sense that private individuals are deciding the fate of infrastructure for everyone else. What about all the companies and all the countries that didn’t get access?” That is the core of the arrangement: a private company, not any public process, controls access to a system that can reveal weaknesses in critical infrastructure.

Anthropic has justified the limited rollout by citing Mythos’s dual-use nature. The company said the model can help identify vulnerabilities and strengthen systems, but could also be misused to launch cyberattacks that disrupt critical infrastructure. To manage that risk, Anthropic opted for a controlled release, initially granting access to select American technology companies whose platforms underpin widely used systems and briefing the US government as it prepares to extend access to federal agencies.

The result is a hierarchy of access built around corporate and state power. Several governments and institutions have reportedly sought access to the model to assess vulnerabilities within their own systems. The Bank of England publicly stated that Anthropic had assured UK banks of near-term access. Discussions at the IMF and World Bank spring meetings were dominated by concerns about the model’s ability to expose weaknesses in global financial systems, particularly given that many regulators and companies outside the US have yet to evaluate its findings.

State Power Enters the Queue

The US government is also moving to secure its own access. Bloomberg obtained a memo from the White House Office of Management and Budget stating that several federal departments, including the Department of Defense (War), the Treasury, and Homeland Security, will begin using a version of Mythos. This comes even as Anthropic and the Pentagon remain locked in a legal dispute over earlier supply-chain risk designations.

That detail shows how the apparatus moves: private firms decide the terms, governments scramble for access, and the people whose infrastructure is at stake are expected to trust the process after the fact. Bengio warned that limiting access to such a system allows one organisation to determine which companies and countries can secure their infrastructure against emerging cyber risks.

He said the situation calls for broader participation in AI regulation at an international scale and proposed establishing an international authority to oversee the production and use of highly sophisticated AI systems. Bengio argued that governments should impose strict rules on businesses to prevent the misuse of advanced AI from affecting other nations’ infrastructure. He added, “There needs to be an agency really in charge of overseeing these kinds of decisions. As the power of AI continues to grow, this question of international commitment becomes pressing. There’s no reason that it’s going to limit itself to attacking US infrastructure or US citizens. So this has to be an international affair.”

Global Governance, Corporate Control

The debate is also feeding into a broader push for “AI sovereignty,” as countries seek to reduce their dependence on foreign technology providers. Concerns have been amplified by geopolitical tensions and fears that access to critical technologies could be influenced by national interests or policy shifts. In other words, the same system that concentrates power in a private company also turns access into a geopolitical bargaining chip.

Bengio also cautioned about the risks of open-source AI models. While open-source technology has long been seen as advantageous because its openness lets communities improve security through collaboration, he said, AI has become capable enough to search open-source software for vulnerabilities. He also emphasised the importance of including China in any global AI governance framework, given the ongoing race between the US and China to develop advanced AI systems. Although he estimates that Chinese models may lag their US counterparts by a few months, Bengio said the gap does not significantly reduce the associated risks.

The controversy around Mythos is not just about one product. It is about who gets to decide how powerful systems are used, who gets protected first, and who is left waiting while private companies and state agencies sort out the terms. Bengio’s criticism points to the basic structure of the problem: decisions about infrastructure, security, and access are being concentrated in a few hands, while everyone else is told to live with the consequences.
