According to Fast Company, a new survey from the UK’s National Society for the Prevention of Cruelty to Children reveals that about one in five parents and carers know and have supported a child who experienced online blackmail. The survey also found that one in ten of these individuals’ own children have faced online blackmail. Bad actors typically start communicating with young people on public platforms before moving conversations to end-to-end encrypted messaging services to avoid detection. Only 43% of parents found tech companies effective in preventing online blackmail, and just 37% thought the government was doing enough. NSPCC policy manager Randi Govender said the findings show the scale of online blackmail happening across the country while tech companies continue falling short in protecting children.
The encryption dilemma
Here’s the thing about that move to encrypted messaging: it creates an impossible tension. On one hand, encryption protects our privacy from corporations and governments. On the other, it creates perfect hiding spots for predators. So what’s the solution? We can’t exactly demand backdoors without undermining security for everyone. Yet we’re leaving kids vulnerable in digital spaces where no one can see what’s happening. It’s a classic case of technology creating problems faster than we can solve them.
AI makes a bad situation terrifying
And now we’re adding AI to the mix. Think about it: these bad actors can use AI to create convincing fake content, generate persuasive messages, and scale their operations. Basically, they’re getting better tools while we’re still struggling with the basics. The survey didn’t even dive deep into AI’s role, but anyone paying attention knows it’s becoming a force multiplier for digital predators. How many parents even understand what’s possible with today’s AI tools? Probably not enough.
Why tech companies keep failing
That finding that only 43% of parents consider tech companies effective tells you everything. After decades of promises about safety features and moderation, parents still don’t trust them to protect kids. And honestly, why should they? These platforms are designed for engagement, not protection. Their business models depend on keeping people online longer, which naturally creates more opportunities for bad actors. So we get half-measures and PR campaigns while the problem keeps growing. It’s the same pattern we’ve seen with cyberbullying, inappropriate content, and now blackmail: reactive rather than proactive solutions.
The harsh reality for parents
Look, I get why parents feel overwhelmed. You’re trying to balance giving kids independence with keeping them safe in environments you might not fully understand yourself. The survey shows most parents don’t think anyone, tech companies or government, has this under control. And they’re probably right. We’re asking individual families to solve systemic problems. Meanwhile, the tools available to predators keep getting more sophisticated while our protection methods feel stuck in 2015. Something has to change, but honestly, I’m not seeing where that change will come from.
