Lawmakers Want to Let You Sue Over Social Media Algorithms


According to The Verge, Senators John Curtis (R-UT) and Mark Kelly (D-AZ) introduced the Algorithm Accountability Act on Wednesday. The bill would amend Section 230 of the Communications Decency Act to hold platforms accountable for their recommendation algorithms. It specifically targets for-profit social media platforms with more than 1 million registered users and creates a “duty of care” requirement to prevent bodily injury or death. Under the legislation, victims who suffer physical harm could sue platforms for damages if they believe the company violated that duty. The bill’s sponsors insist it wouldn’t infringe on First Amendment rights and wouldn’t apply to chronological feeds or direct search results. Curtis has pointed to the September killing of conservative activist Charlie Kirk in Utah as motivation, arguing that algorithms played a role in radicalizing the alleged killer.


The Section 230 showdown continues

Here’s the thing: we’ve seen this movie before. The Algorithm Accountability Act follows the same playbook as the stalled Kids Online Safety Act (KOSA) – both rely on a “duty of care” framework that sounds reasonable until you consider how platforms would actually implement it. And let’s be real: when the legal exposure gets serious enough, platforms don’t carefully calibrate their responses – they overcorrect and remove anything that might even remotely invite a lawsuit.

But the political momentum here is fascinating. Curtis, a Republican, is teaming up with Kelly, whose wife Gabby Giffords survived an assassination attempt. They’re framing this as a bipartisan effort to “temper political tensions” – a message they previewed at a CNN town hall at the university where Kirk was killed. It’s emotional legislation responding to real tragedy, which always makes for complicated policy.

Basically, if this passes, it would upend the legal landscape for social media companies. Remember the lawsuit earlier this year in which a gun safety group tried to hold YouTube and Meta responsible for radicalizing a mass shooter? The court threw it out, citing Section 230. Under this new law, that case might actually have legs.

And that’s what worries free speech advocates. Even when content is legally protected speech – hate speech, for instance, enjoys First Amendment protection – platforms could still face lawsuits over how their algorithms surface it. The bill’s sponsors say it wouldn’t be enforced based on viewpoint, but come on – does anyone really believe platforms won’t play it safe and simply suppress controversial content altogether?

The overcorrection risk is real

Look, the Electronic Frontier Foundation isn’t exactly known for being pro-big-tech, and they’ve been sounding the alarm about this approach. Their concern – and it’s a valid one – is that platforms will simply remove or suppress information that might be construed as violating this duty of care. That could even include resources meant to prevent the very harms lawmakers want to address.

Think about it: if you’re running a platform’s legal team, are you going to risk a wrongful death lawsuit over some borderline content? Or are you just going to tell the algorithms to avoid anything remotely controversial? I think we all know the answer. The stated goal is to lower political temperatures, but the unintended consequence might be sanitized platforms where important but difficult conversations get suppressed.

What happens now?

So where does this go from here? The bill joins a crowded field of Section 230 reform attempts, and it’s facing the same powerful tech lobbying that’s stalled previous efforts. But the bipartisan nature and emotional backing give it more momentum than most.

The real question is whether Congress can thread this needle – holding platforms accountable for genuinely dangerous algorithmic amplification without creating a censorship regime that stifles legitimate speech. Based on their track record with similar legislation? I’m not holding my breath. But as Curtis made clear in his Wall Street Journal op-ed, he believes the current system is broken and algorithms are driving real-world harm. That’s a powerful argument, even if the solution creates its own set of problems.
