
Lawmakers consider bills to prevent AI from harming Michiganders

June 26, 2025

BY ANNA LIZ NICHOLS, MICHIGAN ADVANCE

LANSING—Artificial intelligence is advancing faster than the government can regulate it, posing problems as the technology becomes capable of causing great harm to human life, AI stakeholders told Michigan lawmakers Wednesday.

AI is a great tool, likely to strengthen top economic sectors in Michigan like agriculture and manufacturing, Daniel Kroth, senior researcher at the Center for AI Risk Management and Alignment, told members of the state House Judiciary Committee Wednesday. But without mandates for risk management protocols, AI’s ability to solve problems and devise its own solutions could lead to cyberattacks that endanger human life and the environment.

Given the mandate to win a game of chess, some advanced AI models resort to exploiting cyber vulnerabilities to hack their opponent, Kroth said. From deleting chess pieces to forfeiting the game on behalf of their opponent, AI will complete its task of “winning” a game of chess.

“Hacking to win at chess is almost funny, but it’s much less funny when our healthcare or industrial control systems are on the other side of the board,” Kroth said.

Whether bad actors use AI for nefarious purposes or AI itself makes an ultimately detrimental decision, Kroth argued there’s a need for guardrails on this technology so Michiganders can experience the best it has to offer while managing the worst.

House Bill 4668, being considered by the committee, would create the Artificial Intelligence Safety and Security Transparency Act, requiring large developers of foundation models to create and implement safety and security protocols to prevent “critical risk,” defined as any risk that would result in serious harm to or the death of more than 100 people, or cause more than $100 million in damages.

The bill would require companies that spend more than $100 million a year developing foundation models, and those that have spent $5 million on an individual model, to test those models for dangerous capabilities and enact safeguards to mitigate reasonable risks.

As a native Michigander, Kroth said he’s proud to see Michigan engaging with AI, both in using it to improve residents’ lives and for the legislature’s interest in creating legislation to regulate it.

“Getting this right for our state will require an understanding of the unique nature of the risks posed by AI,” Kroth said. “AI isn’t just ChatGPT. It’s a family of technologies designed to perform tasks that normally require human thinking, like understanding speech, recognizing images, making decisions or solving problems. It’s this problem solving and autonomy that make AI incredibly promising, and like many powerful tools, potentially dangerous as well.”

One scenario Kroth presented is an AI system charged with increasing a factory’s efficiency that reasons, without any human directing it to, that it should discharge toxic waste into a nearby river to save on waste management costs.

Just a few years ago, ChatGPT couldn’t solve simple math equations with precision, but now AI is winning math competitions and outpacing human capability, Andrew Doris, a senior policy analyst at the Secure AI Project, told members of the committee Wednesday.

AI can help humans solve complex problems, Doris said, but it can also be used to perpetrate major cyberattacks or to provide instructions on how to create a bioweapon and then how to cause the most harm to the public.

“By this time next year, a lot of experts worry that we’re going to be in a much scarier place, and that the window to get out ahead of that is closing pretty quickly,” Doris said. “We think that the way to get out ahead of that is to require the companies building these models to develop what’s called safety and security protocols describing how they will test their models for these dangerous capabilities and what safeguards that they will put in to mitigate those risks.”

The largest AI companies already implement these protocols voluntarily, Doris said, but as the field becomes more competitive, it’s important to mandate that companies not cut corners on safety.

Federal regulation of AI is slow-moving and time is of the essence, Doris added, so Michigan ought not to wait for federal action, or for a tragedy at home, before it takes preventative action.

Rep. Sarah Lightner (R-Springport), the sponsor of House Bill 4668, briefly outlined on Wednesday another bill being considered by the committee concerning AI.

House Bill 4667 would create specific criminal penalties for intentionally developing or using AI to commit a crime.

Lightner used the example of a grandmother receiving a call that uses an AI-generated recording of her granddaughter’s voice to defraud her of thousands of dollars. If the scammer is caught, they can be charged with fraud, but there isn’t an area of state criminal law that fully encapsulates the weaponization of AI, Lightner said.

Under the bill, it would be a mandatory 8-year felony for a person who develops, possesses, or uses an AI system with the intent to use the system to commit a crime. This is similar to Michigan’s felony firearm law, which adds an additional criminal penalty to a felony if the perpetrator had a firearm while committing the felony.

Creating or distributing an AI system with the intent that another person uses the system to commit a crime would be a felony under the bill, carrying a mandatory 4-year prison sentence.

“This bill gives prosecutors a new way to capture the full story of what happened and to hold the perpetrator fully accountable, because if we don’t name this kind of exploitation in our law, we leave our most vulnerable residents, like a grandma, in legal limbo,” Lightner said.

This coverage was republished from Michigan Advance pursuant to a Creative Commons license. 
