EU’s plan to weaken privacy laws for Big Tech AI training


According to Reuters, the European Commission is proposing significant changes to landmark EU privacy laws that would allow Big Tech companies like Google, Meta, and OpenAI to use Europeans’ personal data for training AI models on the basis of “legitimate interest.” EU tech chief Henna Virkkunen will present the Digital Omnibus proposals on November 19, which aim to simplify overlapping legislation including GDPR, the AI Act, and the ePrivacy Directive. The changes would also exempt companies from bans on processing special categories of personal data to avoid “disproportionately hindering” AI development. Privacy group noyb calls this “death by a thousand cuts” that would massively downgrade European privacy protections just 10 years after GDPR’s adoption.


The slow dismantling of privacy

Here’s the thing about GDPR – it was supposed to be Europe’s gold standard for privacy protection. But now we’re seeing what happens when regulatory ambition meets corporate pressure. The proposed changes aren’t just minor tweaks; they’re fundamental shifts in how personal data can be used. And the “legitimate interest” loophole? That’s basically a get-out-of-jail-free card for tech giants who’ve been complaining about GDPR’s strict consent requirements for years.

What’s particularly concerning is the exemption for processing special categories of data. We’re talking about health information, biometric data, political opinions – the data that’s supposed to have the highest level of protection. The justification that companies can “identify and remove” this data later feels naive at best. Once that data is in the training pipeline, it’s effectively there forever.

Another sneaky move is folding the ePrivacy Directive into GDPR. Remember all those annoying cookie pop-ups? That was ePrivacy at work. Now imagine if companies could bypass them entirely by claiming “legitimate interest” or “audience measurement.” Your phone, your computer, your smart devices – all potentially fair game without explicit consent.

EDRi’s warning hits hard: this changes how the EU protects what happens inside your devices. And when security and fraud detection become broad exemptions, we’re looking at surveillance creep disguised as convenience. It’s the classic “we need to see everything to protect you” argument that rarely ends well for individual privacy.

Who really benefits here?

Let’s be honest – this isn’t primarily about helping European startups compete in AI. The main beneficiaries are the usual American tech giants who’ve been fighting GDPR since day one. They’ve paid billions in fines, and now they might get the rules rewritten in their favor. The timing is suspicious too – just as AI training demands massive data scraping, suddenly the regulations need “simplifying.”

But here’s the kicker: these proposals still need to go through EU countries and Parliament. There’s going to be a fight. Privacy activists like noyb have proven they can take on tech giants and win. The question is whether political pressure for “innovation” will override fundamental rights. After a decade of GDPR being the global privacy standard, watching it get hollowed out would be quite the reversal.
