Researchers found that people rated medical advice as less trustworthy when they were led to believe that a doctor had consulted artificial intelligence (AI) before providing the guidance.
The individuals were also less willing to follow recommendations involving AI than advice based solely on a human doctor's medical expertise, according to the study led by the University of Wuerzburg in Germany.
The results suggested that people trust medical guidance less if they suspect AI involvement, even as people around the world are increasingly turning to ChatGPT for health-related information, the researchers said.
The study was set on a digital health platform where users can obtain information on medical issues, the researchers said.
For the study, over 2,000 participants received identical medical advice and were asked to evaluate it for reliability, comprehensibility and empathy.
The participants were divided into three groups. While one group of participants was told that the advice came from a doctor, the second was told that it came from an AI chatbot. The third was led to believe that a doctor provided the medical advice with help from an AI.
The researchers also found that advice attributed to human doctors alone scored higher on empathy than advice in which AI was said to have been involved.
The study represented a starting point for detailed research into the conditions under which AI can be used in diagnostics and therapy without jeopardising patients’ trust and cooperation, according to the authors.