
[Digital Today reporter Chi-gyu Hwang] A study has found that excessive use of large language models, or LLMs, changes not only writing style but also the content itself.

On March 20 local time, a research team from universities on the U.S. West Coast reported an experiment with 100 participants. The topic was "the impact of money on happiness." The team analysed how participants' answers differed depending on their reliance on LLMs.

The team classified participants who generated more than 40 percent of their text with an LLM as "high-dependence users." High-dependence users submitted 69 percent more neutral answers than participants who used little AI; the latter took much stronger positions, either positive or negative, on the relationship between money and happiness.

Natasha Jaques, the paper's lead author and a professor in the University of Washington's computer engineering department, said, "LLMs are changing writing into something humans would never write."

Changes also appeared in style. In the writing of high-dependence users, first-person pronouns fell by 50 percent. Mentions of personal anecdotes or real experiences also declined. Overall, the language became less personal and shifted to a more formal style.

In a post-experiment survey, high-dependence users said their writing was less creative and did not reflect their own voice. Still, their satisfaction with the final output was similar to that of participants who used less AI. The team said this gap could become a long-term concern.

The team used three models in the experiment: Anthropic Claude 3.5 Haiku, OpenAI GPT-4o mini and Google Gemini 2.5 Flash.

The team also analysed how LLMs edit existing writing. Human editors change words one by one and keep most of the original vocabulary, while all three models replaced far more words than human editors did. The team also found cases in which the models changed the original meaning.

Jaques attributed the phenomenon to how LLMs are trained on human feedback. She said, "The model cannot distinguish between satisfying humans and changing what humans want to suit itself."

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.