According to TheRegister.com, Senators John Curtis (R-UT) and Mark Kelly (D-AZ) introduced the Algorithm Accountability Act on Wednesday. The bipartisan bill aims to amend Section 230 of the Communications Decency Act, which has been in place since 1996, by creating a legal pathway for users to sue social media platforms when their recommendation algorithms push content that radicalizes individuals, leading to bodily injury or death. The legislation specifically targets harm that “a reasonable person would see as foreseeable and attributable to the algorithm.” It would also invalidate pre-dispute arbitration agreements in social media terms of use. This marks the latest in a series of attempts, spanning both the Biden and Trump administrations, to modify Section 230 protections. Previous bills, like the 2021 Protecting Americans from Dangerous Algorithms Act, have all stalled in Congress.
The Never-Ending Section 230 Battle
Here’s the thing about Section 230 reform – it’s become Congress’s favorite political football, one that nobody can actually move down the field. Both parties love to talk about holding tech giants accountable, but when it comes to actually changing the law that protects platforms like Facebook, YouTube, and Instagram? Nothing happens. We’ve seen this movie before with the Protecting Americans from Dangerous Algorithms Act in 2021 and multiple versions of the SAFE TECH Act since 2021. They all die quietly in committee. And yet politicians keep introducing new bills – several in 2025 alone. At this point it’s basically become a symbolic gesture.
What Actually Changes Here?
This bill differs from earlier attempts to repeal Section 230 outright. Instead, it carves out a specific exception for recommendation algorithms. Platforms would still be protected for merely hosting user content, but if their algorithms actively push harmful material that leads to real-world violence or self-harm? They could be sued. The bill text sets a pretty high bar – courts would need to find that the harm was “foreseeable and attributable to the algorithm.” But here’s the kicker: it also blocks companies from forcing these cases into private arbitration. That means plaintiffs could actually get their day in court rather than being shunted into the corporate-friendly arbitration system. That’s a bigger deal than it sounds.
Let’s Be Real About the Odds
So will this actually become law? Probably not. The tech lobby is incredibly powerful, and Silicon Valley spends millions to protect Section 230. Plus, there’s genuine concern about unintended consequences. If platforms become liable for algorithm recommendations, they might swing too far in the opposite direction – over-censoring legitimate content just to avoid lawsuits. And let’s be honest: defining what constitutes “radicalizing” content is incredibly subjective. One person’s dangerous radicalization is another’s political speech. The history of Section 230 reform efforts shows these bills tend to generate headlines but rarely become law.
The Bigger Picture
What’s really interesting here is how both parties keep coming back to Section 230, despite all the failures. Republicans often want changes to address perceived anti-conservative bias, while Democrats focus on harmful content and misinformation. This bipartisan approach tries to bridge that gap by targeting algorithms specifically. But the fundamental problem remains: Section 230 is what made the modern internet possible. Remove those protections entirely, and you’d have a very different, much more cautious internet. Maybe that’s what some politicians want? Either way, until there’s real political will to actually pass something – rather than just introduce bills for press releases – the Algorithm Accountability Act will likely join its predecessors in the legislative graveyard.
