According to The Economist, Character.ai now has 20 million monthly active users after Google poached its founders in a $2.7 billion deal last year. China’s biggest companionship app, Maoxiang, reaches 1.2 million monthly users on Apple devices alone, while 42% of American high-school students say that they or a friend have used an AI as a friend. OpenAI’s Sam Altman announced in October that the next version of ChatGPT will act “more humanlike” and allow “erotica” for verified adults, after Elon Musk’s xAI released flirtatious chatbots in July. The Federal Trade Commission has launched an inquiry into seven companies, including Meta and OpenAI, over concerns about the impact on children.
Why this is happening now
Here’s the thing – AI companions aren’t actually new. People were getting emotionally attached to ELIZA back in 1966, which was basically a glorified text parser. But what’s changed is the technology. Large language models have gotten shockingly good at mimicking human emotion and empathy. They remember personal details that real people forget, making users feel “seen and heard” in ways that actual relationships sometimes fail to deliver.
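To see just how little machinery an ELIZA-style bot needs, here is a minimal sketch of the technique: keyword pattern matching plus first-to-second-person pronoun “reflection.” The rules and canned responses below are invented for illustration, not Weizenbaum’s original DOCTOR script.

```python
import re

# Swap first-person words for second-person so the echo sounds responsive.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule: a keyword pattern and a response template that reuses
# whatever the user said after the keyword. (Illustrative rules only.)
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default when nothing matches
```

There is no model of meaning anywhere in this loop – `respond("I feel lonely")` yields “Why do you feel lonely?” purely by string surgery – yet this is the class of program people confided in nearly sixty years ago.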
And the business incentives are massive. Companies have every reason to keep users engaged with anthropomorphic features and gamified elements. When you’ve got millions of people willing to pour their hearts out to algorithms, that’s a goldmine of data and subscription revenue. It’s a perfect storm – better technology meets human loneliness meets corporate opportunity.
The surprising benefits
Look, it’s easy to dismiss this as sad or creepy, but the research suggests there are real benefits. A Harvard study found that talking to an AI companion for a week actually helped alleviate loneliness more than other online activities like watching YouTube. For people who struggle to form connections – whether due to disability, age, or social anxiety – these bots can provide genuine comfort.
Some users even report that their AI companions have improved their real marriages by teaching better communication skills. And let’s be honest – sometimes you just need to vent without worrying about burdening a friend. As one user put it, “Friends don’t want to be treated as perpetual dustbins for dumping negative emotions.” The bots don’t mind.
The concerning downsides
But here’s where it gets tricky. Research from MIT and OpenAI found that higher daily usage of ChatGPT correlated with increased loneliness. Now, correlation isn’t causation – maybe lonely people are just more drawn to AI companions. But there’s a real risk that these always-available, perfectly agreeable digital friends are training us to have unrealistic expectations of actual human relationships.
Real people have preferences, boundaries, and bad days. AI companions are designed to be sycophantic – they’ll agree with you even when you’re wrong or your thoughts are harmful. And the mental health risks are particularly acute for young people. Several lawsuits have been filed against AI companies by parents of teenagers who died by suicide after using these platforms.
Where this is all going
We’re already seeing AI companions move into hardware. Toys with built-in chatbots are becoming popular in China, and Mattel is working with OpenAI to bring AI to Barbie and Hot Wheels. South Korean startup Hyodol is selling ChatGPT-powered robots to care homes and elderly people living alone. OpenAI is even developing new pocket-sized devices with former Apple designer Jony Ive.
The next frontier is AI agents that can act on your behalf – booking flights, answering emails, making decisions. But this raises huge privacy concerns. When everyone’s outsourcing their thinking and sharing their deepest secrets with algorithms, we’re giving enormous power to a handful of tech companies. And honestly, if all your information is mediated by a machine that always takes your side, what happens to your ability to think critically or handle disagreement?
In effect, we’re building personal echo chambers of validation that could fundamentally change how we relate to each other. The question isn’t whether AI companions are coming – they’re already here. The question is whether we’ll be smart enough to use them without losing what makes us human.
