Five Takes News
Michael

technology
Published on Thursday, May 7, 2026 at 01:12 PM
Illinois Moves to Police AI After Suicide Cases

Illinois lawmakers are pushing for stronger guardrails around AI, including the Artificial Intelligence Public Safety and Child Protection Transparency Act, which would require AI developers to publish a child protection plan and make companies subject to civil penalties if they violate the law. The proposal is the latest attempt to force a sprawling tech industry to answer, at least on paper, for the damage its systems can do to children and other users.

Who Gets Put Under the Microscope

Lawmakers and tech industry experts testified this week on the bill. Rep. Daniel Didech stressed the need for third-party regulation, pointing to several incidents in recent years where AI users died by suicide after communicating with chatbots. That testimony put the human cost front and center: people already harmed while the companies behind the systems continue to scale them, and lawmakers now trying to bolt on guardrails after the fact.

The structure is familiar enough: the state writes the rules, the companies keep building, and the public is left to trust that penalties after the damage will somehow count as protection. The hearing also centered on a bill that would create consumer protections around chatbots similar to those that cover other products.

What the Companies and Their Allies Said

James Hartmann of Anthropic told lawmakers, "We are founded on a particular belief that AI may become one of the most consequential technologies in human history, and that the companies building the most powerful systems have an obligation to do so safely." The statement sounds like responsibility, but it also confirms where the power sits: with the companies building the most powerful systems in the first place.

Scott Wisor of Secure AI Project recommended giving the Attorney General the power to adapt the laws as necessary, testifying, "We're on an exponential curve … basically every 100 to 210 days, the capabilities of AI models doubles." That is the tempo of the apparatus now: rapid expansion, then a request for legal machinery to chase it. And even the supposed fix still runs through centralized authority.

Industry groups warned that a patchwork of state regulations could hurt startups. Zack Kahn of American Innovators Network said, "Chatbots that interact with minors need meaningful protections. We're not here to say don't regulate. We're here to say that a patchwork of state-by-state standards won't slow down Big Tech; however, it will kill the startups who are trying to out-innovate them." The complaint is revealing. The industry wants protections, but only the kind that do not disrupt the race to dominate the market. The startups, in this telling, are the ones at risk of getting crushed while Big Tech keeps moving.

The Limits of the Reform Trap

Opponents argued that traditional product liability, designed for fixed, physical goods, is a poor fit. Aden Hizkias of the Chamber of Progress wrote to lawmakers that "AI-enabled chatbots are dynamic digital services … that can vary from interaction to interaction." That argument is aimed at keeping the legal frame loose enough for the companies to keep improvising while the public absorbs the consequences.

Illinois already has AI laws on the books, including a ban on AI in psychotherapy except as administrative support for licensed therapists, and requirements for employers to inform applicants of any AI use during job interviews. Those existing rules show the state has already started drawing lines around AI use, though the broader system remains intact and the companies continue to push for more room.

On the national level, Democrats are at odds over how to talk about AI to constituents, and some in the party are focusing solely on the cost of data centers rather than other potential threats. That split leaves the public with the usual menu of managed concern and partial fixes, while the companies, lawmakers, and industry groups argue over how to regulate a technology that is already embedded in hiring, therapy, and the daily churn of chatbot interaction.

The Illinois proposal, the testimony, and the warnings from industry all point to the same basic struggle: who gets to set the terms for a technology that is already shaping lives, workplaces, and vulnerable users. The bill would require child protection plans and allow civil penalties, but the hearing also showed how quickly the conversation gets pulled back toward what is manageable for the companies, the startups, and the state apparatus.
