According to The Economist, Britain’s Online Safety Act passed in 2023 and has been phased in slowly, with key provisions like age verification for pornography sites taking effect in July 2025. The law requires platforms to remove illegal content, protect children from legal-but-harmful material such as suicide-related posts, implement age verification for pornography, and offer content-filtering tools. Ofcom, the regulator, can impose fines of up to 10% of global revenue for non-compliance, as seen in its £20,000 fine against 4chan in October. Despite critics like Nigel Farage calling the law “borderline dystopian” and comparing Britain to North Korea, the actual effects have been mild so far: Pornhub reported a 77% traffic drop after implementing age checks, but no widespread censorship has emerged.
The reality versus the rhetoric
Here’s the thing about apocalyptic predictions – they rarely age well. Critics warned this law would create a “great firewall” and destroy free speech. But what’s actually happened? Well, if you watch porn, you’ve noticed age verification. And if you run a conspiracy theory forum like 4chan, you got a relatively small fine. For everyone else? Basically business as usual.
The free speech concerns aren’t completely unfounded – look at the Graham Linehan arrest that Farage cited. But here’s the kicker: Linehan was arrested under the Public Order Act 1986, not the Online Safety Act. Britain’s speech laws were already messy long before this legislation came along, and critics are blaming the new law for problems that have existed for decades.
How platforms are actually responding
Remember all those warnings about profit-driven platforms going censorship-crazy to avoid fines? That hasn’t really materialized either. Ofcom’s guidance actually tells companies to focus on “high harm” content – child exploitation, sex trafficking, weapons sales. Not your average offensive political post.
And get this – behind the scenes, platforms are working with Ofcom rather than fighting it. The regulator is behaving more like a financial watchdog than speech police, nudging companies to properly implement their own existing moderation policies. Nobody is anticipating a major confrontation. It’s all surprisingly… cooperative.
What about privacy and unintended consequences?
Sure, there are legitimate concerns. Requiring adults to provide personal information to access content feels invasive. And some worry this will just push kids toward VPNs and darker corners of the web.
But here’s a thought – we already accept similar privacy trade-offs elsewhere. You show ID to buy alcohol. You verify your age to enter certain venues. Is it really that different to verify you’re an adult before accessing pornography? As one academic noted, it’s “hardly an unjustifiable breach of privacy” for that specific purpose.
The surprisingly gradual approach
Maybe the most interesting thing about Britain’s experiment is how incremental it’s been. Despite the dramatic rhetoric, implementation has been careful and measured. There have been missteps – like inappropriate age-gating of war-related content from Ukraine and Gaza – but the system seems designed to learn and adapt.
Basically, this isn’t the heavy-handed censorship regime critics predicted. It’s a work in progress that acknowledges that algorithms and human moderators will struggle with context and nuance. And honestly? Given how complex online content moderation is, a gradual approach might be the only sane way forward.
