The debate shows the focus shifting from whether AI can produce code faster to who will own quality and responsibility. [Photo: Shutterstock]

As artificial intelligence (AI) coding tools rapidly improve, more cases are emerging of developers rolling changes into production without directly reviewing the code. That is also intensifying debate over how far responsibility for code written by generative AI should extend.

On May 11, online media outlet Gigazine reported that developer Simon Willison said the boundary between “vibe coding” and agent-based engineering is increasingly blurring in his recent working style.

Vibe coding refers to accepting AI-generated code without reviewing it in detail, as long as the result works. By contrast, agent-based engineering is closer to an approach in which an experienced developer uses AI tools while also considering security, maintainability and operational stability.

Willison explained that he previously drew a clear distinction between the two concepts. He saw vibe coding as potentially practical for small personal tools, but viewed applying it as-is to software tied to other people’s data or real work as irresponsible.

He also said actual development habits are changing as AI coding tools improve. He said that after repeatedly seeing Claude, an AI coding agent, handle SQL query execution, JSON API creation, and even tests and documentation, he no longer reads code line by line for simple tasks.
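As a hypothetical illustration, the kind of small, self-contained task described here (executing a SQL query and serializing the result as JSON) might look like the sketch below. The table name, schema, and query are assumptions for the example, not details from the article:

```python
import json
import sqlite3

def rows_as_json(conn: sqlite3.Connection, limit: int = 5) -> str:
    """Run a query against an open SQLite connection and return the rows as JSON."""
    conn.row_factory = sqlite3.Row  # lets each row be converted to a dict
    rows = conn.execute(
        "SELECT title, views FROM articles ORDER BY views DESC LIMIT ?",
        (limit,),
    ).fetchall()
    return json.dumps([dict(r) for r in rows])

# Small in-memory demo so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (title TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?)",
    [("intro", 10), ("deep dive", 42), ("faq", 7)],
)
print(rows_as_json(conn, limit=2))  # the two most-viewed rows, as a JSON array
```

A task at this scale, where the query, the serialization, and even a quick test are routine, is exactly the kind of change the article says developers increasingly accept without reading line by line.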

The problem is that as trust grows, review procedures can become looser. Willison said, “I find myself wondering whether using unreviewed code in production is truly responsible.”

He likened the situation to how large organisations use internal services. For example, when using an image-processing service built by another team, most developers use it first by reading documentation rather than reading the full internal implementation, then check the details when problems arise, he said.

He also pointed to a key difference: unlike a human development team, AI has neither accountability nor a reputation. When a specific team owns a service, responsibility can be traced when problems occur, but AI-generated code lacks that structure, he said.

Willison particularly warned that repeated success could lead to overconfidence. He explained that when AI produces correct results multiple times, developers gradually skip verification steps, and a “normalization of deviance” can emerge in which errors are missed at critical moments.

He also said the criteria for evaluating code quality are changing in the AI era. In the past, development quality could be judged to some extent through metrics such as the number of commits, the scale of tests and how well the README was organised, but AI now makes it possible to build seemingly polished repositories in a short time.

Willison said that as a result, real-world usage history and operational experience are becoming more important. He said, “Even code made with AI, if someone uses and verifies it in real work every day, is more valuable than code that is merely generated.”

Reactions in the developer community are mixed. Some voiced concern that AI could make developers overly passive. Others argue that AI is an extension of how development has already evolved, like how developers stopped writing assembly directly after high-level programming languages emerged.

Criticism is also growing. Some say that if developers apply large-scale changes proposed by AI without analysing the root cause of a problem, they can introduce new bugs instead of fixing it. As a result, the industry sees the core of the AI coding race shifting from raw generation capability to how reliably code can be validated and integrated into responsible development processes.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.