Writing in the Harvard Business Review, in an analysis covered by Forbes, Lynda Gratton, a professor at London Business School and founder of HSM Advisory, warns that generative AI poses a serious threat to foundational workplace skills. Gratton, who consults with executives, identifies a growing concern that AI is undermining the interpersonal and leadership instincts needed for success. She argues that while AI accelerates output, it risks gutting the human experiences that foster deep mastery, deliberative thinking, empathy, and personal agency. The core danger: by making tasks too quick and easy, AI shortcuts could cause a permanent loss of learned edge and independent judgment across organizations.
The Mastery Short-Circuit
Here’s the thing about getting good at something: it usually involves a lot of time and a lot of mistakes. That’s how you build the neural pathways, the intuition, the deep knowledge. Gratton’s first warning is that AI, by making task completion instantaneous, completely bypasses that messy, essential learning process. It’s the corporate equivalent of the student who uses AI to write a paper but can’t discuss the topic. You get the artifact—the report, the code, the presentation—but you lose the builder’s understanding of why the walls are load-bearing. If the AI is always the one building, what happens when you need to repair or innovate? The skill atrophies before it’s even formed. Organizations might end up with a workforce that can prompt brilliantly but can’t think critically about the output.
Drowning In Frictionless Noise
And then there’s the sheer volume. AI doesn’t just do your work; it does everyone’s work, instantly. So the steady flow of slide decks, reports, and strategy drafts becomes a tsunami. Gratton calls it a “frictionless flow of content that overwhelms attention and obscures insight.” That’s a perfect phrase for it. The AI even offers to summarize it all for you, sparing you the “rigor” of actually reading and synthesizing. But synthesis *is* the insight! That’s where connections are made, where creative leaps happen. If we outsource digestion to the machine, we’re just left with a faster conveyor belt of noise. Goodbye, reflective work. Hello, burnout from trying to keep up with a productivity monster we created.
The Empathy Gap
This might be the most chilling risk. Empathy, discernment, managing conflict—these aren’t software features. They’re human skills forged in the awkward, difficult, vulnerable moments of actual interaction. As Gratton puts it, empathy “grows through practice.” So what happens when AI drafts the difficult feedback, mediates the team dispute via chatbot, or handles the sensitive client email? We avoid the discomfort, sure. But we also avoid the growth. One executive’s question hits hard: “If AI handles the difficult conversations, how will people learn to have them?” We won’t. We’ll have managers who can optimize a workflow in seconds but can’t look an employee in the eye and have a tough, necessary talk. That’s not a leader; that’s a middleman for an algorithm.
Automating Agency Away
The final, and perhaps most existential, risk is to our own decision-making and agency. Modern AI tools don’t just execute; they nudge, propose, and increasingly, decide. Gratton warns this strips people of the capacity to reflect, choose, and take ownership. It’s a slow-motion robbery of judgment. Think about it: if the tool always suggests the “optimal” next step, why would you ever develop your own heuristic? Your personal choice muscle weakens. This is fine for trivial decisions, but what about the big, gray-area, moral calls? If we’re not in the habit of self-authorship in our daily work, we’ll be utterly unprepared for the moments that truly define a career or a company. We’re trading control for convenience, and it’s a Faustian bargain. The goal should be using AI to augment human judgment, not replace the human forming the judgment in the first place.
