The case suggests that in the generative AI services race, model performance alone is not enough to sustain user satisfaction. [Photo: Shutterstock]

German engineer Niki Reinert said he canceled the paid plan for Anthropic's generative AI service Claude. He cited token usage limits, a drop in response quality and poor customer support as the main reasons.

On April 30, local time, online media outlet Gigazine reported that Reinert wrote in a recent post that he was highly satisfied when he first used Claude Code. He said the service was fast in the early weeks, token allowances were reasonable and output quality was strong.

He claimed his experience changed sharply over time. His biggest complaint was token limits. Reinert said he sent two simple questions to Claude Haiku one morning, thinking the usage cap would have reset, but token usage immediately jumped to 100 percent. He said he asked an AI support bot but received only templated guidance, and it took several days to get an actual answer even after requesting a human agent.

Reinert said the customer support reply that eventually arrived also came nowhere near resolving the issue. He said the representative sent a lengthy explanation of how usage limits work in the Pro and Max plans, but he called it a canned reply that did not match his actual problem. "They sent an automated email that did not address the real issue at all and effectively ended the conversation," he said.

Reinert also said the practical impact of token limits was very different from before. He said he previously could run up to three projects in parallel with Claude Code, but recently hits the limit after working on a single project for about two hours.

He also cited examples of quality deterioration. Reinert claimed that when he asked Claude Opus to handle refactoring for an ongoing project, it produced a stopgap response at the level of an inexperienced developer. He said that after pointing out the problem, Claude Opus replied to the effect of, "That was an easy fix. I'll redo it properly." He said the exchange alone consumed about half of his daily token allowance.

He said difficulties maintaining conversational context were another frustration. Reinert said Claude repeatedly lost the conversation cache and began reading the codebase again, and tokens were spent again to retrieve what had already been handled. He said this structure could be favorable for Anthropic in terms of revenue, but works against the user experience.

Reinert said he was not rejecting Claude Code outright. "I'm still a big fan of Claude Code," he said, adding that it is theoretically a very powerful tool and has helped him in real work.

He also spoke positively about his writing experience with Claude CoWork. At the same time, he added that he understands that running a reasoning-focused AI service carries a heavy technical and organizational burden, and that securing computing resources can become more difficult as new customers increase.

Reinert said of his decision to cancel, "It looks like Anthropic is struggling to handle too many new users at once," and "I canceled my account to reduce the burden." He also claimed that the current problems may not be merely operational difficulties, but could be the result of poor judgment at the service design stage.

Market analysts say the case shows that in the competition among generative AI coding tools, usage policies and customer support systems, not just model performance, can shape user satisfaction. Reinert also maintained that while Claude Code remains a powerful tool usable in real work, he has recently felt a decline in quality and a slowdown in processing speed.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.