
‘The algorithms are just destroying us’: Can you sue a social media company for harm?

Sen. John Curtis’ new bill would allow people to sue social media companies if a court finds the algorithm caused someone physical harm.

(August Miller | UVU Marketing) Sen. John Curtis, R-Utah, and Sen. Mark Kelly, D-Ariz., speak during a town hall at Utah Valley University in Orem on Wednesday, Nov. 12, 2025.

Have you been harmed by a social media algorithm? If so, U.S. Sen. John Curtis wants to let you sue.

The Algorithm Accountability Act, introduced Tuesday by Curtis and co-sponsored by Arizona Democrat Sen. Mark Kelly, would allow people to sue tech companies if they have been physically harmed by the company’s algorithm.

“A provider of a social media platform shall exercise reasonable care in the design, training, testing, deployment, operation, and maintenance of a recommendation-based algorithm on the social media platform to prevent bodily injury or death,” the bill text reads, according to a draft from Curtis’ office.

Under the bill, if it becomes law, “If a person suffers bodily injury or death as the result of a violation … by the provider of a social media platform,” the individual may “bring a civil action in a district court of the United States of competent jurisdiction against the provider for compensatory and punitive damages.”

In order to allow people to pursue these damages, the bill would revoke Section 230 protections — a piece of federal code originally passed in 1996 that protects internet publishers from being liable for the content someone uses their platform to publish — for tech companies.

It would not limit what people can post on social media platforms, but it would mark a major shift in protections for big tech companies — which some legal experts say could cause a “chilling effect” for users and tech companies.

‘Social media is a cancer’

Curtis and Kelly first announced the bill at a joint event last week at Utah Valley University that the pair called “Modeling Civility.” There, in the wake of the shooting of Charlie Kirk on the campus in September, the senators discussed political violence and divisive political rhetoric — and one of the main culprits, they argued, is social media.

“Social media is a cancer,” Curtis said during the event, referring to comments made by Utah Gov. Spencer Cox. “It’s driving us to division, and it’s driving us to hate, and the algorithms are just destroying us.”

And in a Wall Street Journal op-ed published Monday, Curtis said he was inspired to introduce the legislation after asking Utahns to share their thoughts following Kirk’s shooting death.

“Online platforms likely played a major role in radicalizing Kirk’s alleged killer,” Curtis wrote. “Tyler Robinson’s path to extremism didn’t begin with secret backrooms or meetings after school. It began online, on apps that most of us scroll through every day. What used to be a gradual drift into extremism has become a rapid slide, driven not by ideology alone but also by algorithms — code written to keep us engaged and enraged.”

Curtis compared his bill to holding tobacco companies liable for harm caused by cigarettes, or holding a car company liable for faulty brakes. But, Curtis told reporters last week, the question of what is “harmful” would ultimately be up to the courts to decide.

“That’s the beauty of this,” Curtis said. “One of the problems is Congress has tried to define what’s acceptable and what’s not acceptable. That is impossible, right?”

In their bill text, the senators do, however, make some attempt to define “harm.” The bill says that a social media platform could be held responsible for bodily harm or death “that a reasonable and prudent person would agree was… reasonably foreseeable by the provider; and… attributable, in whole or in part, to the design characteristics or performance of the recommendation-based algorithm.”

This legislation, Curtis and Kelly said, could apply, for example, in cases where the algorithmic promotion of diet pills contributed to a harmful eating disorder, or if the promotion of carjacking videos led someone to attempt the same crime.

The bill would not, however, apply to AI chatbots. Asked specifically about cases where chatbots have encouraged people to take their own lives, Kelly and Curtis said other legislation was required to address that issue.

“That’s something else we’ve got to start thinking about,” Kelly said. “I think we do have to address it in a similar way. But it’s not exactly the same thing.”

And asked if he had had conversations with tech giants, like Meta and Google, that would be impacted by the legislation, Curtis said, “It’s fair to say they’re not super excited about it.”

“Let me be really clear,” Curtis told reporters, “this does not impact free speech.”

Kelly said he thinks online influencers will “pretty quickly figure out ‘Where’s the line?’” Social media personalities, he said, will weigh the nature of the content they’re producing against the liability each platform is willing to risk.

The chilling effect is ‘the whole point’

Nick Hafen, the head of legal technology education at Brigham Young University Law School, said in an interview Wednesday that Curtis’ bill could have a chilling effect on speech for users — including political commentators or journalists — despite the fact that its authors have been careful in the drafting to avoid restricting constitutional free speech rights.

“One of the tricky things about the bill is that it’s clearly trying to stay away from that constitutional line,” Hafen said. “It’s easy to say that, but in practice, it gets messy.”

By leaving the ultimate definition of “harm” up to the courts, he said, social media companies would not have initial clarity on exactly how the law could be enforced. That uncertainty, he said, could lead companies to take an overly cautious approach to what their algorithms promote in an effort to avoid being sued under the new law.

Ed Carter, a BYU communications professor and copyright law expert, said he thinks it’s also possible that the bill, as constructed, could result in social media platforms simply refusing to moderate at all.

Taking Section 230 protections away could be challenging, Carter said in an interview, “because then the incentive for the platforms is just don’t moderate at all.”

“I mean, just don’t do anything, and then you won’t get sued,” he added.

That the bill specifically addresses bodily injury or death, Hafen said, is valuable for limiting the way it could be used for censorship, but, he said, “It takes away some teeth.”

Even with a narrow scope, he said, “I think it is still chilling.” But that, he said, is really “the whole point.”

One other quirk of the bill, Hafen added, is that although it was inspired in part by the Kirk shooting, it would not actually cover the platforms reportedly used by Kirk’s alleged killer — who, according to law enforcement, was active on the messaging platform Discord, which falls outside the scope of social media algorithms that Curtis’ bill aims to address.

“It’s not clear to me that if this bill had been in effect, it would have made a difference in that case,” Hafen said.

And while Carter said he, too, thinks lawmakers could do well to get more specific about the harm they aim to address with the legislation — and not just to leave the definition of “harm” up to the courts — he said he is glad to see some attempt at reform.

There’s been an idea since 1996 that the internet can’t be regulated, he said, “but we’ve seen enough now to know where that takes us.”

“What we’ve seen,” Carter said, “is allowing the platforms immunity really just kind of encourages large platforms to make a lot of money — maximize users’ attention, maximize money.”