A study found that if it becomes known that professional advice was checked using AI, the relationship with that expert could deteriorate. [Photo: Shutterstock]

A study found that ties with experts can deteriorate if it emerges that advice from professionals such as doctors or lawyers was double-checked using artificial intelligence tools such as ChatGPT.

TechRadar, an IT media outlet, reported on May 10 that researchers at Monash Business School found that if clients recheck professional recommendations with AI, the professionals may feel insulted and distrustful, and become less willing to work with those clients again.

The study was published in the academic journal Computers in Human Behavior. The researchers said negative reactions can arise even when clients use AI only to gather background information or as a supplementary tool, rather than to override an expert's recommendation. In other words, if experts perceive themselves as being placed on the same level as AI, regardless of how it is used, the relationship can fracture.

Lead author Geri Spasova, an associate professor at Monash Business School, explained that experts view AI as a tool significantly inferior to themselves. A situation in which clients bring AI into the process, she said, can feel to experts like an insult that puts them in the same category as the tool, signaling a lack of respect and weakening their willingness to engage.

The researchers said such reactions could affect real working relationships. If clients receive advice from an expert but ultimately make a decision through an AI chatbot, that expert's willingness to work with the client again can fall sharply. They also suggested that clients who leave judgment to AI could be seen by experts as less competent and less warm.

The study said the issue could be more sensitive in trust-based relationships, such as those between doctors and patients or between lawyers and clients. Reactions may be somewhat weaker when a relationship has been maintained for a long time, but even then experts may still feel excluded or deceived. The researchers added that even in new relationships, it may be better not to go out of one's way to disclose that AI was used before a consultation.

Spasova also suggested the situation may not improve significantly in the future. "The jobs of professional advisers themselves are being threatened," she said. "As AI performance improves, our value and self-esteem as humans can be shaken even more." She said that the more clients trust AI judgments, the more experts will question how valuable their human contribution is.

The study also pointed to AI's limitations. AI generally offers only a broad outline of a situation and is more prone to errors, it said. Answer quality depends heavily on how much information users provide, and the way a question is phrased can steer the answer. Given these characteristics, the study echoed concerns that it is unfair to judge experts, who have built up training and experience over many years, solely on the basis of AI responses.

The researchers said that even if clients fact-check with AI before or after consulting an expert, making a point of it could be taken as a lack of trust. Until current professional norms change enough to reflect the existence of AI, they said, clients should be mindful that how they disclose AI use could harm their relationships with experts.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.