[DigitalToday Hong Kyung-min (홍경민) intern reporter] If AI tools based on large language models such as ChatGPT were no longer available, could modern office work run normally?
A study from South Korea recently offered a clue. Presented by a Korea Advanced Institute of Science and Technology (KAIST) team at the international academic conference CHI EA 2026, the paper, titled "'Oops! ChatGPT is Temporarily Unavailable!': A Diary Study on Knowledge Workers' Experiences of LLM Withdrawal", found that even a temporary suspension of large language model use significantly disrupted workflows.
The paper said that as large language models have become essential infrastructure for modern knowledge work, users experience severe psychological discomfort and work delays when AI is absent, but regain ownership of outputs in the process of doing tasks directly.
The KAIST team imposed a four-day ban on AI use on 10 knowledge workers who frequently use LLMs, and analysed their behavioural changes through diaries and in-depth interviews. In particular, the team provided a dedicated web diary interface so that, each time the urge to use AI arose, participants could record in real time the reason they wanted to use it, their feelings and their stress level.
In this forced disconnection, participants compared working without LLMs to a world without home appliances such as dishwashers or robot vacuum cleaners, or without cars or the Google search service, and showed a strong sense of loss.
In particular, when they had to use traditional search engines instead of AI summarisation to find information, they viewed repeatedly revising keywords and manually piecing information together as excessive, inefficient labour. As a result, some participants postponed work until AI support resumed, while others gave up investing the time needed for polished results and lowered their own standards.
Meaningful changes were also observed in social relationships. Users accustomed to asking AI questions saw seeking help directly from others as a burdensome act that incurs social costs, and described a psychological barrier: they worried they might be seen as a "finger prince or princess", a Korean term for someone who inconveniences others. But when actual collaboration did occur, some users found discussion with humans more useful than AI, confirming the possibility of restoring severed social interaction.
At the same time, AI withdrawal led to a positive aspect: rediscovering the value of work. Participants moved away from the habit of uncritically accepting AI logic and secured clarity in their work by designing reasoning processes themselves. Those who felt AI-generated outputs were not theirs regained strong pride and a sense of sovereignty over results by carrying out the entire process themselves, from planning to decision-making. This became an opportunity to realise that core tasks delegated to AI for efficiency were in fact important elements shaping their identity as professionals.
But the study warned that LLMs have already become fixed as a social norm and infrastructure beyond individual choice. Participants regarded using AI not as simply using a tool but as a requirement for maintaining competitiveness, and perceived not using AI as a personal loss and an act of falling behind the times. In particular, some university student participants showed deepened dependence on the technology, such as being unable to bring themselves to perform certain tasks like coding without AI or losing motivation to learn.
The research team said the study's significance lay in defining AI dependence not as a lack of individual capability but as an infrastructural change in the work environment. It called for designing a "value-driven appropriation" approach in which knowledge workers are not trapped in the productivity-centred values suggested by AI, but instead proactively decide the scope of AI use according to their own professional values and standards.
Overall, the study suggests that AI now works like a part of our brains rather than a simple tool. The atrophy of the "muscles of thought" hidden behind the sweet efficiency technology provides may become the biggest threat knowledge workers face in the future. More than ever, coexisting with AI without losing one's own system of reasoning demands conscious individual effort.