DR. CHATBOT BETTER THAN HUMAN PHYSICIANS – ACCORDING TO HUMAN HEALTH CARE PROFESSIONALS

A study from the University of California San Diego (UCSD) asked physicians to write responses to actual human health questions, then compared their answers to those given by ChatGPT, an artificial intelligence (AI) chatbot, to the same queries.

A group of licensed health care professionals preferred Dr. Chatbot’s answers 78.5 percent of the time, saying ChatGPT’s responses were of higher quality and showed patients more empathy.

The purpose of the study was to test the idea that an AI might someday be able to serve as a physician’s assistant to respond to patients’ routine matters. 

Since the COVID War, remote health care has been on the rise, with patients inundating doctors with electronic messages asking for advice. The flood has “contributed to record-breaking levels of physician burnout,” Dr. Eric Leas, a UCSD professor of public health and co-investigator in the study, told Science magazine. 

“ChatGPT might be able to pass a medical licensing exam,” study co-author Dr. Davey Smith, a physician and professor at the UC San Diego School of Medicine, said in a Science interview, “but directly answering patient questions accurately and empathetically is a different ballgame.”

The study randomly pulled 195 patient questions, along with the doctors’ responses to them, from AskDocs, a Reddit forum where anyone can ask a medical question. The site verifies the medical credentials of health professionals who respond. The queries and answers contained no information that could identify individuals.

The panel of medical professionals reviewing the responses was not told which answers came from a human and which came from ChatGPT; the panelists were told only to decide which answers were best.

The panel found the AI’s answers to be “good or very good” 78.5 percent of the time, compared to 22.1 percent for human physicians. ChatGPT showed empathy for the patient in 45.1 percent of responses, while human physicians scored a dismal 4.6 percent.

“ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses,” Jessica Kelley, a nurse practitioner and study co-author, wrote in summarizing the study’s results.

The study’s authors also said the patient questions and ChatGPT’s answers were realistic examples of their own experiences in day-to-day medical practice.

“I never imagined saying this, but ChatGPT is a prescription I’d like to give to my inbox,” Dr. Aaron Goodman, a UCSD physician and study co-author, told Science. “The tool will transform the way I support my patients.”

Dr. John Ayers, another study co-author, was more expansive. 

“The opportunities for improving healthcare with AI are massive,” said Ayers, who is also vice chief of innovation in the UC San Diego School of Medicine Division of Infectious Disease and Global Public Health. “AI-augmented care is the future of medicine.”

TRENDPOST: It’s important to remember that ChatGPT, although already almost ubiquitous, is artificial intelligence in its infancy.

Before long, we can expect AI to design new, more capable versions of itself that might also be able to solve problems in quantum computing and make that visionary technology commercially viable.

The combination of AI and quantum computers, which process data and calculate at speeds orders of magnitude beyond today’s supercomputers, will reinvent not only the way we live, but also what it means to be human—and do it faster than we can imagine.
