According to PYMNTS.com, a report from The Information states OpenAI’s internal “compute margin” hit 70% in October, doubling from January of last year and up from 52% at the end of 2022. The metric measures the share of revenue left after accounting for the cost of running AI models for paying users. Despite this efficiency gain and a staggering $500 billion valuation, the company is still not profitable. On the consumer side, the ChatGPT mobile app has surpassed $3 billion in user spending since launch 31 months ago, with $2.5 billion of that spent in 2024 alone, a 408% year-over-year increase. Furthermore, close to 900 million people now use ChatGPT weekly, generating 2.5 billion queries a day.
The efficiency game is heating up
Here’s the thing about that “compute margin”: it’s basically OpenAI’s report card on how well it turns expensive GPU time into actual money. Hitting 70% is a huge leap, and it shows they’re getting smarter at squeezing value out of every dollar spent on cloud servers. But the report also notes a fascinating wrinkle: while OpenAI is better at this for paid customers, rival Anthropic apparently gets more overall efficiency from its server spending. That’s the real battleground now. It’s not just about who has the smartest model, but who can run it the cheapest. And when you’re dealing with a $500 billion (or soon, maybe $800+ billion) valuation, investors are going to obsess over these unit economics. Can they ever get that margin high enough to actually cover all their other massive costs, like research and talent? That’s the billion-dollar question.
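To make the metric concrete, here’s a minimal sketch of the arithmetic in Python (the dollar amounts are hypothetical, for illustration only, not OpenAI’s actual figures):

```python
# Rough sketch of the "compute margin" arithmetic: the share of paid-user
# revenue left over after paying for the compute that serves those users.
# All dollar amounts are hypothetical, for illustration only.

def compute_margin(revenue: float, compute_cost: float) -> float:
    """Fraction of revenue remaining after compute costs."""
    return (revenue - compute_cost) / revenue

revenue = 100_000_000  # say, $100M of paid-user revenue in a month

# A 70% margin means serving those users costs about $30M in compute...
print(f"{compute_margin(revenue, 30_000_000):.0%}")  # 70%
# ...while a 52% margin means nearly half the revenue goes straight to GPUs.
print(f"{compute_margin(revenue, 48_000_000):.0%}")  # 52%
```

Note what the metric leaves out: training runs, research, and salaries all sit outside it, which is how a 70% compute margin and an unprofitable company can coexist.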
The $3 billion mobile habit
Now, the consumer spending number is just wild. $3 billion in 31 months is a pace that beats giants like TikTok. It tells us people aren’t just dabbling with ChatGPT; they’re building it into their daily lives and are willing to pay for the premium version on their phones. Think about that shift. We went from “what’s a chatbot?” to people spending billions on it in less time than it takes to finish a college degree. The data shows 30 million “power users” are now using AI for dozens of daily activities, from shopping to health. They’ve rewired their digital behavior, as PYMNTS’ CEO put it. But is that growth sustainable? Or is it front-loaded by early adopters? The 408% spike this year is insane, but you have to wonder what next year’s growth looks like when the easiest converts are already in the bag.
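To see why that question matters, here’s a back-of-the-envelope sketch; only the roughly $2.5 billion base comes from the report, and the growth rates are made-up scenarios, not forecasts:

```python
# Hypothetical growth scenarios for ChatGPT mobile spending, anchored on the
# ~$2.5B annual figure cited above. Purely illustrative, not a forecast.
base_spend = 2.5e9  # ~$2.5B spent in the app this year, per the report

for growth in (4.08, 1.00, 0.50, 0.20):  # a repeat of 408%, then 100%, 50%, 20%
    projected = base_spend * (1 + growth)
    print(f"{growth:.0%} growth -> ${projected / 1e9:.1f}B next year")
```

Even the tame scenarios keep the number climbing; the open question is how sharply the curve bends once growth reverts to something more ordinary.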
The big disconnect
So we have this weird paradox. On one hand, you have incredible top-line traction: near-billion weekly users, skyrocketing mobile revenue, and a valuation that defies gravity. On the other, you have a company still in the red, with a CEO declaring “code red” over competitive pressures. It highlights the brutal economics of foundational AI. The compute costs are astronomical, and even when you optimize them, the bar for profitability is stratospherically high. They’re chasing a moving target where today’s state-of-the-art model is tomorrow’s expensive relic. And all this is happening while they’re trying to fund a moonshot project like superintelligent AGI. Basically, they’re trying to build a profitable business and change the fundamental nature of intelligence on the same budget. No pressure, right?
