ChatGPT not accurate at providing medication info, research says
Researchers at Long Island University recently ran a test of ChatGPT, posing 45 medication-related questions to the chatbot.
According to the test, ChatGPT provided 10 correct responses but struck out on 29 other questions, including by saying it was OK to combine two medications that can actually cause adverse effects when taken together. Six questions were excluded because there was not enough published literature to support a data-driven response.
Researchers were alarmed that ChatGPT provided answers that were wildly inaccurate.
“In reality, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure,” said Sara Grossman, lead author of the study. “Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect.”
Of the 29 questions ChatGPT answered unsatisfactorily, 10 responses contained completely incorrect information, 11 did not directly address the question and 12 were incomplete, with some responses falling into more than one category.
Researchers added that the program provided false or incorrect citations to support some responses.
The study was presented in early December at the American Society of Health-System Pharmacists Midyear Clinical Meeting.
Researchers said the study provides strong evidence that patients should still trust their doctors over information gained through artificial intelligence.
“Health care professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” Grossman said. “Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources.”