AI token consumption, a key indicator of demand, may have been significantly inflated, CNBC reported on April 17.
Tokens are the basic unit of AI use: the chunks of words and characters that make up user prompts and model-generated answers. A typical conversation uses hundreds of tokens per paragraph, but agentic AI that writes code, browses the web and handles multi-step workflows can consume thousands of times more tokens per session.
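The scale gap between a chat and an agent session can be sketched with a toy calculation. The figures below are illustrative assumptions, not measurements: real models use subword tokenizers, so actual counts vary by model, and the "4 characters per token" ratio is only a common rule of thumb for English text.

```python
def approx_tokens(text: str) -> int:
    # Rough rule of thumb: ~4 characters per token for English text.
    return max(1, len(text) // 4)

paragraph = "word " * 150            # a ~150-word paragraph (750 chars)
chat_session = [paragraph] * 4       # a short back-and-forth conversation
agent_session = [paragraph] * 4000   # an agent looping through a long workflow

chat_total = sum(approx_tokens(p) for p in chat_session)
agent_total = sum(approx_tokens(p) for p in agent_session)
print(chat_total, agent_total, agent_total // chat_total)  # → 748 748000 1000
```

Under these assumed numbers, the agent session consumes a thousand times the tokens of the chat, which is the kind of multiplier the article describes.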
The problem is that as companies begin measuring AI adoption by token consumption, employees are focusing on raising the metric rather than delivering results, CNBC said.
Meta and Shopify built internal leaderboards that track employees' token use, and Nvidia Chief Executive Jensen Huang said he would be "very worried" if an engineer with an annual salary of $500,000 did not use $250,000 worth of computing.
Databricks Chief Executive Ali Ghodsi said, "If the goal is to spend a lot of money, there are easy ways," adding, "Resending the same query to 10 places or repeatedly running a loop costs a lot but produces nothing."
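Ghodsi's point can be made concrete with a toy cost calculation. The price and token counts below are hypothetical assumptions for illustration, not any vendor's actual rates:

```python
PRICE_PER_1K_TOKENS = 0.01  # assumed blended rate in dollars (hypothetical)

def cost(tokens: int) -> float:
    # Dollar cost of a given token count at the assumed rate.
    return tokens / 1000 * PRICE_PER_1K_TOKENS

query_tokens = 2_000                  # one prompt plus its answer
honest = cost(query_tokens)           # run the query once
gamed = cost(query_tokens * 10)       # same query fanned out to 10 places
print(f"${honest:.2f} vs ${gamed:.2f}")  # → $0.02 vs $0.20
```

The fanned-out version registers ten times the token consumption on any usage leaderboard while producing no additional output, which is exactly why token counts alone make a poor adoption metric.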
Jen Stave of Harvard Business School's AI lab said, "I spoke with 12 chief technology officers and chief information officers, and they all said they are having difficulty calculating ROI."
Anthropic CEO Dario Amodei described this as a "cone of uncertainty" and said, "Data centres take 1 to 2 years to build. You have to invest billions of dollars now for demand that is not yet proven. If you are off by even a few years, it can be catastrophic."
Anthropic appears to be preparing for this uncertainty by shifting its pricing model to pay-as-you-go, replacing its flat-rate enterprise plan with a structure that charges based on actual usage.
Subscribers to the $200-a-month Max plan had been linking their Claude subscriptions to third-party tools such as OpenClaw and running agents around the clock, a practice Anthropic could not afford to sustain, CNBC said.