Prison Phone Calls Are Now Training AI Crime Prediction Models


According to MIT Technology Review, Securus Technologies, a major provider of prison communication services, began building AI crime-detection tools in 2023. The company’s president, Kevin Elder, says it trained one model on seven years of recorded inmate calls from the Texas prison system and is developing other state-specific models. For the past year, Securus has been piloting these tools to monitor conversations in real time across jails, prisons, and ICE detention facilities. The AI analyzes phone calls, video calls, texts, and emails, flagging sections for human review when it detects potential criminal planning. Elder claims the monitoring has disrupted crimes like human trafficking, but the company provided no specific cases uncovered by the new AI. Inmates and their contacts are told calls are recorded, but not necessarily that their data trains AI models.
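
To make the mechanics concrete, here is a minimal, hypothetical sketch of the generic "score, threshold, human review queue" pattern that real-time monitoring tools of this kind are described as using. Securus has not published its architecture, so the classifier stand-in, the keywords, and the threshold below are illustrative assumptions, not the company’s actual system.

```python
# Hypothetical illustration only: Securus has not disclosed how its system works.
# This sketches the generic "score -> threshold -> human review" pattern.

from dataclasses import dataclass

@dataclass
class Segment:
    call_id: str
    transcript: str

def risk_score(text: str) -> float:
    """Stand-in for a trained classifier; returns a probability-like score.
    A real deployment would run speech-to-text and an ML model here."""
    keywords = ("move the package", "burner phone")  # toy heuristic, purely illustrative
    return 0.9 if any(k in text.lower() for k in keywords) else 0.05

FLAG_THRESHOLD = 0.8  # assumed value; tuning it trades missed cases against false flags

def monitor(segments: list[Segment]) -> list[Segment]:
    """Send high-scoring segments to a human review queue."""
    review_queue = []
    for seg in segments:
        if risk_score(seg.transcript) >= FLAG_THRESHOLD:
            review_queue.append(seg)  # a human agent sees everything past this point
    return review_queue
```

Even in this toy version, the decisions that matter, what counts as suspicious language, where the threshold sits, who staffs the review queue, are design choices made by the vendor, not by the people being recorded.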


Here’s the thing that really sticks in the craw. Bianca Tylek from Worth Rises nails it: this is “coercive consent.” Inmates have literally no other way to communicate with their families. So they “agree” to be recorded. But are they agreeing to have their most intimate, vulnerable, and yes, sometimes conspiratorial conversations used as fodder to train a commercial AI system? Almost certainly not. And the kicker? They’re paying for the privilege. Securus charges inmates exorbitant rates for these calls. So not only is their data taken without compensation, it’s taken while they’re being charged. It’s a hidden data tax on some of the most marginalized people in society.

The black box of pre-crime

Now, let’s talk about the tech itself. The pitch is seductive: catch crimes “much earlier in the cycle,” as Elder puts it. But we’re basically talking about pre-crime detection. The AI is looking for when crimes are “being thought about or contemplated.” Think about that for a second. How does an AI model, trained on past data, reliably identify a *thought* or an *intention* in messy, coded, emotional human conversation? The risk of false positives is enormous. A joke, a story, a hypothetical rant—all could be flagged. This puts immense, unaccountable power in the hands of the human agents who review the AI’s flags. And what’s the recourse for someone wrongly accused of “contemplating” a crime based on an AI’s interpretation?
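
To see why false positives dominate here, consider a quick back-of-the-envelope calculation. Every number below is an assumption chosen for illustration, none comes from Securus, but the arithmetic shows the base-rate problem: when genuine criminal planning is rare in the call stream, even a classifier that sounds accurate buries the real signal under wrong flags.

```python
# All numbers are assumptions for illustration; none come from Securus.
base_rate = 0.001            # assume 1 in 1,000 call segments involves real criminal planning
sensitivity = 0.90           # assume the model catches 90% of those segments
false_positive_rate = 0.05   # assume it wrongly flags 5% of innocent segments

segments = 1_000_000
true_planning = segments * base_rate                               # 1,000 segments
flagged_true = true_planning * sensitivity                         # 900 correct flags
flagged_false = (segments - true_planning) * false_positive_rate   # ~49,950 wrong flags

precision = flagged_true / (flagged_true + flagged_false)
print(f"Flags that are actually about a crime: {precision:.1%}")   # roughly 1.8%
```

Under those assumptions, roughly 98 of every 100 flagged conversations are innocent, jokes, stories, hypotheticals, yet each one lands in front of a reviewer with the word “crime” already attached to it.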

A pattern of failure and secrecy

We’ve seen this movie before, and it doesn’t have a happy ending. Predictive policing and risk-assessment algorithms in the justice system have repeatedly been shown to be biased, opaque, and flawed. They often bake in historical prejudices. So what makes anyone think this will be different? Securus won’t say exactly where the pilot is happening, and they couldn’t provide a single concrete example of the AI’s success. That’s a huge red flag. They’re asking us to trust that a black-box system, trained on non-consensual data, deployed in secret locations, works flawlessly. I’m deeply skeptical. The potential for abuse—to silence dissent, to extend surveillance, to justify more isolation—is staggering.

Where do we draw the line?

This isn’t just a prison issue. It’s a blueprint. If it’s acceptable to train surveillance AI on a captive population with no real alternative, where does it stop? The technical infrastructure being built here—mass data harvesting, real-time sentiment and intent analysis—will inevitably seek new markets. The ethical lines are being erased before most people even know they exist. The conversation we need to have isn’t about making this AI slightly more accurate. It’s about whether a profit-driven company should be allowed to build a pre-crime machine using the voices of people who have no choice but to speak into its microphone.
