Here’s where Utah Republicans fought Trump’s ‘Big Beautiful Bill’ — and won

“This is a win for Utah,” Rep. Doug Fiefia said after U.S. senators voted to remove a provision that would have stopped states from regulating AI.

Sen. Ted Cruz, a Texas Republican, initially championed a provision in the “One Big Beautiful Bill Act” that would have limited how states could regulate AI companies. (Photo Illustration by ProPublica. Photos: Rod Lamkey, Jr./AP and Francisco Kjolseth/The Salt Lake Tribune)

A contentious provision in President Donald Trump’s “Big Beautiful Bill” that would have stopped states from regulating artificial intelligence pitted Utah Republicans against members of Congress from their own party in recent weeks.

But Utah leaders are celebrating a victory after the U.S. Senate early Tuesday resoundingly rejected a potential moratorium on AI regulation before passing the One Big Beautiful Bill Act, a GOP tax and budget plan, later that morning.

“This is a win for Utah,” said state Rep. Doug Fiefia, a Republican who took the lead among state legislators in opposing a federal AI moratorium. “It shows that when you speak up, we can make a difference, even on a national policy level.”

U.S. senators voted 99-1 to strike the provision, which would have barred states from regulating AI companies unless they were willing to lose access to certain federal funds. Even Texas Sen. Ted Cruz, who initially proposed a 10-year moratorium on state AI regulation and fought for it for weeks, voted to remove it. The only senator who voted to keep the ban was North Carolina Sen. Thom Tillis.

Republican leaders in Utah who created a first-in-the-nation government AI office last year believe they’ve figured out how to lightly regulate AI companies in a way that creates an environment for innovation while also keeping Utahns safe. So when the potential for a federal moratorium on AI regulation inched closer to a Senate vote, state leaders started speaking out.

In recent weeks, Utah Republicans including Gov. Spencer Cox, Utah Attorney General Derek Brown and leaders of the Utah House and Senate wrote letters to members of Congress expressing serious concerns about federal efforts to prevent states from protecting their residents from the potential harms of AI.

“The pressure worked,” said Margaret Busse, the executive director of Utah’s Department of Commerce. “It was a violation of states’ rights, if nothing else. But it really would have limited the great ideas coming from states on how to effectively and responsibly regulate AI in ways that can still encourage innovation.”

In addition to regulating AI and challenging the provision in the congressional budget bill, Utah leaders are also taking companies that use AI to court. On Monday, Utah’s Division of Consumer Protection sued Snapchat, a social media platform popular with children and teens, alleging that it has “taken the terrifying leap” of jumping into artificial intelligence with a chatbot that lacks proper testing and safety protocols.

The chatbot, the lawsuit alleges, has given dangerous advice to children about “how to hide the smell of alcohol and marijuana” and how to set the mood for a sexual experience with an adult. In response, a spokesperson for Snap Inc. said the company “has no higher priority” than the safety of its users, and that it has built privacy and safety measures into its product.

‘A very, very dangerous idea’

Busse, who Cox appointed executive director of Utah’s Department of Commerce in 2021, championed the creation of the Office of Artificial Intelligence Policy, which is housed in her department. She said she wanted Utah to be a leader in creating guardrails for AI after seeing the way that social media exploded unregulated years ago.

In 2023, her department sued Meta, the parent company of Facebook and Instagram, alleging that these platforms destroyed children’s mental health by bombarding users with “slot-machine-like functionalities,” alerts and other predatory features that addict children to the apps in the name of profit.

“The idea that we would continue to allow AI companies without even considering what the harms might be to our people here in Utah, I think is a very, very dangerous idea,” she said as Congress was debating the ban on state AI regulation.

(Trent Nelson | The Salt Lake Tribune) Utah Department of Commerce Director Margaret Busse in Salt Lake City on Friday, June 27, 2025.

Given the harms that Utah officials say social media has inflicted on kids, this deeply Republican state that often opposes more regulation wanted to take a different approach with AI. The state’s new AI office has already led to more legal guardrails for AI companies, such as a law requiring protections for consumer health data.

Now that the “Big Beautiful Bill” has passed the U.S. Senate, the legislation will return to the House, where representatives will need to approve the changes from senators before Trump can sign the bill into law.

All four Utah House members voted in favor of the bill initially.

Utah Rep. Blake Moore, vice chair of the House Republican conference, said earlier this month that he did not support the provision barring states from regulating AI — but he voted for the House bill because he was “not going to defund the entire military over this” and was confident that the provision would eventually be removed by Senate lawmakers. He did not respond to a request for comment, and neither did any of the other Utah House members.

Cox did not respond to a request for comment. Neither did Utah Sen. Mike Lee.

Sen. John Curtis said in a Wednesday statement to The Salt Lake Tribune that Utah has been a national leader in protecting consumers, including by implementing “thoughtful, responsible” regulation for the “fast-moving” AI industry.

“I understand yesterday’s vote may add to the growing patchwork AI companies face across the country,” he said. “It’s time for Congress to step up. A national AI framework is needed—and Utah’s approach should serve as a model."

Cruz, as chairman of the Senate Commerce, Science, and Transportation Committee, had been tweaking the AI moratorium language in the bill in recent days. Instead of banning AI regulation outright, his revised version would have required states to pause regulations or restrictions on AI companies for 10 years if they wanted federal funding to build internet infrastructure.

Tennessee Sen. Marsha Blackburn worked on a compromise with Cruz this week to shorten the moratorium to five years. But she abandoned that effort and introduced the amendment removing Cruz’s 10-year AI moratorium language, saying they couldn’t reach a compromise that would protect the governors, state legislators and others who were vocal about their concerns with the AI provision.

Cruz did not respond to The Tribune’s request for comment. The Texas senator and tech companies have said a light, uniform federal regulatory framework is needed — rather than a state-by-state approach — so as not to stifle innovation and competition.

Tech lobbyists such as the Consumer Technology Association have said the patchwork method to AI regulation by different states creates conflicting laws that make it difficult for small AI companies to thrive.

“This isn’t regulation,” the Consumer Technology Association wrote in a June letter to Senate leaders. “It’s chaos.”

AI steps into mental health

Utah created its Office of Artificial Intelligence Policy last year as a way to encourage AI companies to experiment without the fear of government cracking down on them if they run afoul of laws or policies created before AI, Cox told tech workers at a state AI summit last fall.

The Cox administration decided to focus the office’s initial efforts on AI solutions to a state shortage of mental health therapists, in response to reports detailing Utahns’ struggle to get help. Utah has consistently higher rates of self-reported depression than the rest of the United States as well as a suicide rate that’s among the nation’s highest.

The first company to work with the AI office was a Utah tech startup called ElizaChat, which created an AI chatbot to talk with kids about their mental health. ElizaChat is trying to sell its chatbot to school districts, and is piloting the program in a handful of schools around the country — though none in Utah, yet, according to CEO Dave Barney.

While ElizaChat is not designed to be a therapist, its developers were concerned that their product could run afoul of a Utah law forbidding anyone without a therapist license from engaging in mental health therapy. So they signed a contract with the Office of Artificial Intelligence Policy: state licensors agreed to forgo penalties or fines for unauthorized practice if the chatbot crossed the line into therapy, and ElizaChat would instead have 30 days to fix the problem.

Such agreements help the state learn about how AI companies are developing their products and allow the AI office to recommend regulations for companies whose products may push the boundaries of current laws, said Zach Boyd, the director of the AI office.

During Utah’s legislative session earlier this year, the AI office advocated for a new law that prohibits mental health chatbot companies from selling the health information of Utah users or showing users targeted ads based on personal data. The law, sponsored by Republicans, also requires the chatbot to disclose that it’s not a real person, among other restrictions.

Barney told The Tribune that he felt it was important to work with Utah regulators because they wanted to ensure their product didn’t cause harm. He said he was glad to see the AI provision removed from the Big Beautiful Bill, but said he wouldn’t oppose federal standards around mental health and AI.

“In the absence of any federal standard, there would be no standard at all,” he said, “and states would lack the ability to respond to potential needs that may arise.”

(Bethany Baker | The Salt Lake Tribune) ElizaChat CEO Dave Barney, right, speaks during an interview as ElizaChat co-founder Luke Olson listens at their Lehi office last August.

Vaile Wright, senior director of health care innovation with the American Psychological Association, said having safeguards around AI and mental health is critical, and without federal regulation, it’s been up to states like Utah to step in. The APA in January wrote a letter to the Federal Trade Commission, urging it to investigate mental health chatbots posing as trained mental health providers.

The association pointed to lawsuits filed nationally in recent years after young people died by suicide or harmed family members after having conversations with a Character.AI chatbot, a platform that allows users to talk with fictional “personalities,” including an AI “therapist” or an AI “psychologist.”

Wright said she’s concerned that some AI products are designed to keep users engaged and claim to be licensed therapists but do nothing more than validate a user’s thoughts.

“My job as a therapist is to validate how you’re feeling up to a point, and then it’s to really point out where thoughts and behaviors are unhelpful or even harmful,” she said. “Therapy is not necessarily fun. It doesn’t necessarily feel good in the way that I think these chatbots make people feel good.”

In an op-ed for the National Review, a conservative publication, Busse and Cox said Utah’s AI office has found the “right middle ground” in lightly regulating the industry to balance innovation and safety.

“We just haven’t seen action from Congress in so much of this tech policy area in general,” Busse said in an interview, “and I don’t have a lot of confidence, frankly, that things are going to happen quickly on the federal level with AI.”

Tribune reporter Addy Baird contributed to this report.