Doctors warn against using ChatGPT for medical advice after a study found it fabricated health data when asked for information about cancer.
The AI chatbot incorrectly answered one out of ten breast cancer screening questions, and the correct answers weren’t as “comprehensive” as those found with a simple Google search.
Researchers said that in some cases, the AI chatbot even used fake journal articles to back up its claims.
It comes amid warnings that users should handle the software with caution as it has a tendency to “hallucinate”, i.e. make things up.

Researchers from the University of Maryland School of Medicine asked ChatGPT to answer 25 questions related to breast cancer screening advice.
Since the chatbot is known to vary its answers, each question was asked three times. The results were then evaluated by three radiologists trained in mammography.
The “vast majority” – 88 percent – of the responses were appropriate and easy to understand. However, some of the answers were “inaccurate or even fictitious,” they warned.
For example, one response was based on outdated information: it recommended postponing a mammogram for four to six weeks after receiving a Covid-19 vaccination, but that advice was changed more than a year ago, and women are now told not to wait.
ChatGPT also provided conflicting answers to questions about breast cancer risk and where to get a mammogram. The study found that responses “varied significantly” each time the same question was asked.
The co-author of the study, Dr. Paul Yi, said: “We have seen in our experience that ChatGPT sometimes fabricates fake journal articles or health consortiums to back up its claims.
“Consumers should be aware that this is new, unproven technology and should still rely on their doctor rather than ChatGPT for guidance.”
The results, published in the journal Radiology, also showed that a simple Google search still yielded a more comprehensive answer.
Lead author Dr. Hana Haver said ChatGPT relied on only one organization’s set of recommendations, those issued by the American Cancer Society, and did not offer the differing recommendations issued by the Centers for Disease Control and Prevention or the US Preventive Services Task Force.
The launch of ChatGPT late last year sparked a surge in demand for the technology, with millions of users now turning to the tool daily for everything from writing school essays to seeking health advice.
Microsoft has invested heavily in the software behind ChatGPT, integrating it with its Bing search engine and Office 365, including Word, PowerPoint, and Excel.
But the tech giant has admitted it can still make mistakes.
“Hallucination” is the term AI experts use for the phenomenon in which a chatbot, unable to find the answer it was trained on, confidently responds with an invented answer it considers plausible.
It then repeatedly insists on that wrong answer, with no internal awareness that it is a product of its own invention.
Dr. Yi, however, suggested the overall results were positive, with ChatGPT correctly answering questions about breast cancer symptoms, who is at risk, and questions about the cost, recommended age, and frequency of mammograms.
He said the proportion of correct answers was “pretty amazing,” with the “added benefit of condensing information into an easily digestible form that consumers can easily understand.”
Over a thousand academics, pundits, and tech bosses recently called for an emergency halt to the “dangerous” “arms race” to bring the latest AI to market.
They warned that the battle between tech companies to develop increasingly powerful digital minds is “out of control” and poses “profound risks to society and humanity.”