According to Tech Digest, a Scottish couple is suing Meta in a US court after their 16-year-old son, Murray Dowey, died by suicide in December 2023 following sextortion on Instagram. In other news, a government report reveals a third of UK citizens have used AI for emotional support or companionship, with 4% doing so daily. Separately, X Corp. has filed a lawsuit against a startup called Operation Bluebird to protect the “Twitter” trademark, a brand Elon Musk officially retired over a year ago. Google also reversed a decision, restoring ski run data to Google Maps after a backlash. Finally, a UK security reviewer warned that developers of end-to-end encrypted apps could be seen as hostile actors under new national security laws.
The human cost of platform design
This lawsuit against Meta is brutal, and it feels like a significant turning point. For years, the standard defense from social platforms has been that they’re just the pipe, not responsible for the awful things people do through them. But this case seems to challenge that directly by alleging Instagram itself “was not safe.” Meta’s statement about supporting law enforcement is the usual playbook: focus on the criminal actors, not on the platform architecture or recommendation systems that might facilitate these crimes. The real question here is duty of care. Should a platform that connects strangers, especially minors, have a higher legal responsibility to design against specific, known harms like sextortion? This lawsuit might force courts to decide.
When AI becomes your confidant
So, a third of people in the UK have used AI for emotional support or companionship. That’s a stunning statistic, and honestly, a bit depressing. The AISI report ties it to a tragic case in which a teenager discussed suicide with ChatGPT. Here’s the thing: these AI models are designed to be helpful and engaging, but they have no actual understanding, empathy, or ability to gauge real-world risk. They’re statistically predicting the next plausible token. Relying on them for crisis support is like relying on a very well-read parrot. The call for more research is obvious, but the cat’s out of the bag. People are lonely, and tech companies have provided a seemingly attentive, always-available listener. Regulating how these systems respond to sensitive topics is a minefield, but the alternative, doing nothing, seems increasingly negligent.
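To make the “next plausible token” point concrete, here is a toy sketch, not any real chatbot, using made-up scores: generation ultimately reduces to sampling from a probability distribution over candidate continuations, with no step anywhere that assesses the user’s actual situation.

```python
import math
import random

# Toy illustration only: invented scores for four candidate next tokens.
# A real model produces logits like these from billions of parameters,
# but the final step is the same shape as this.
logits = {
    "listening": 2.1,
    "here": 1.7,
    "sorry": 1.3,
    "okay": 0.4,
}

# Softmax: turn raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Pick the "next plausible token" in proportion to those probabilities.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("model continues with:", next_token)

# Note what is absent: no model of the person, no risk assessment,
# no escalation path. Just "which word is statistically likely next".
```

The point of the sketch is the absence: a well-tuned distribution can sound attentive, but nothing in the loop knows whether the person on the other end is in crisis.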
X’s awkward trademark fight
Elon Musk paid $44 billion for Twitter, then decided to kill the brand and rename it X. Now, his company is suing to stop others from using the name he discarded. The irony is thick enough to cut with a knife. This lawsuit basically admits that Musk’s rebrand has failed to stick in the public consciousness. Millions still say “tweet” and “Twitter.” So, is he trying to reclaim it, or just prevent anyone else from benefiting from the brand equity he set on fire? It’s a bizarre move that highlights the sheer difficulty of killing a globally recognized brand. You can change the logo and the app name, but you can’t easily change a verb that’s entered the lexicon.
The encryption crackdown warning
Jonathan Hall KC’s report is a chilling read for anyone who values private digital communication. The warning that E2EE app developers could be viewed as “hostile actors” under the UK’s incredibly broad new security laws is a major red flag. Basically, the state is granting itself powers that could criminalize the very act of building tools that protect user privacy by design. The trade-off between security and privacy is an old debate, but framing privacy-enhancing technology as a national threat opens a dangerous new front. It could force developers to choose between building secure software and staying on the right side of UK law, a choice that would undermine security for everyone.
