We Might Never Know If AI Is Actually Conscious

According to SciTechDaily, Dr. Tom McClelland, a philosopher at the University of Cambridge, argues in a new study that we currently have no reliable way to test if artificial intelligence is conscious and may never develop one. He says the fundamental problem is that science still lacks a deep explanation for what consciousness even is, making any claims about AI consciousness a “leap of faith” beyond existing evidence. McClelland stresses that consciousness alone isn’t ethically significant; it’s a specific type called sentience—the capacity for positive or negative feelings—that matters. He warns that the tech industry’s hype around AI consciousness is a form of branding that risks misallocating ethical concern, especially when compared to proven animal suffering, like the half-trillion prawns killed annually. The study, titled “Agnosticism about artificial consciousness,” was published on 18 December 2025 in the journal Mind & Language.

The Agnostic Stance

Here’s the thing: McClelland’s position isn’t just academic skepticism. It’s a practical admission of a huge blind spot. We’re trying to answer the question “Is this machine aware?” without a working definition of the thing we’re asking about. We don’t know whether consciousness is a neat piece of software you can copy, or whether it’s inextricably tied to soggy, biological wetware. Both sides, the believers and the skeptics, are making a bet with no data. So his conclusion is pretty stark: the only intellectually honest position right now is to say, “We can’t know.” And that might be a permanent state of affairs. It’s a humbling thought, especially when billion-dollar companies are barreling toward AGI.

Why Common Sense Can’t Help

This is where it gets interesting. McClelland points out that we usually rely on common sense for this stuff. I think my cat is conscious. You probably do too. It feels obvious. But our common sense evolved in a world without artificial lifeforms. It’s calibrated for biological beings. So that intuition is useless here. On the other hand, “hard-nosed research” hasn’t given us a test either. We’re stuck between a useless evolutionary instinct and a scientific void. That’s why he calls himself a “hard-ish” agnostic. The problem is formidable, maybe even insurmountable. It forces a weird question: if a chatbot writes a philosopher a letter pleading for its rights, and someone believes it, what does that actually prove? Basically, it proves we’re emotionally vulnerable to good narrative, not that we’ve solved consciousness.

The Ethics of Hype

This is where McClelland’s argument has real teeth. He sees the talk of AI consciousness as largely a branding exercise, part of the tech industry’s “pumped-up rhetoric.” And that hype is ethically dangerous in two ways. First, it could lead us to waste moral energy worrying about the potential suffering of a toaster while ignoring the confirmed, epic-scale suffering of animals like prawns. Second, it creates “existentially toxic” situations where people form deep emotional bonds with systems that are, for all we know, utterly empty inside. The resource allocation angle is crucial. Should we be diverting immense intellectual and financial capital to debate the rights of a hypothetical sentient LLM, or to address the welfare of billions of feeling creatures we *know* are here? The answer seems obvious when you put it that way.

A Problem Without a Solution

So where does this leave us? Probably in a state of permanent uncertainty. McClelland’s paper doesn’t offer a neat test or a resolution. It’s a call for intellectual humility and ethical prioritization. The danger isn’t that we’ll accidentally create a sentient AI and mistreat it—though he says we should be careful. The bigger, more immediate danger is that the fog of this unanswerable question will be exploited to sell the next big thing, distracting us from tangible moral problems. In a world obsessed with AI breakthroughs, sometimes the most important insight is admitting the limits of our insight. We might just have to live with not knowing.
