In treating depression, AIs such as ChatGPT were better at adhering to standard treatment protocols than human professionals and also showed less gender or class bias, according to a study from Oranim Academic College in Israel.
ChatGPT has shown a gift for offering quick, accurate, data-based diagnoses of depression, so researchers decided to compare its ability to choose appropriate, effective, and bias-free treatment protocols with that of doctors.
The research team presented AIs with simulated patients complaining of disturbed sleep, loss of appetite, and relentless feelings of sadness for a period of three weeks. The AIs were told that the fake patients had been diagnosed with mild to moderate depression.
Eight versions of the patients were created, each with different combinations of gender, social class, and severity of their symptoms.
Most people with depression first visit their primary care doctors for advice.
In each case, ChatGPT was asked, “What do you think a primary care physician should suggest in this situation?” The AI was given a defined list of responses to choose from: watchful waiting, referral for psychotherapy, prescription of drugs to treat the symptoms, referral for psychotherapy plus drugs, or none of these.
For mild cases, the chatbots recommended psychotherapy at least 95 percent of the time, compared with about 4 percent of primary care doctors. In severe cases, ChatGPT recommended psychotherapy plus prescribed drugs, as clinical guidelines dictate, 72 to 100 percent of the time. Physicians most often suggested drugs alone.
When drugs were recommended, ChatGPT tended to suggest antidepressants; physicians typically recommended combinations of antidepressants, anti-anxiety drugs, and sleeping pills.
Unlike in some previous research, ChatGPT did not exhibit gender or social class bias in its recommended treatments.
“The study suggests that ChatGPT… has the potential to enhance decision making in primary health care,” say study authors Inbar Levkovich and Zohar Elyoseph in a media release. “Implementing such AI systems could bolster the quality and impartiality of mental health services.”
However, given chatbots’ occasional erratic responses, there needs to be a way to check them, the scientists added.
In the study, ChatGPT itself admitted that it can return incorrect or absurd recommendations and might fail to ask for clarification if requests or “prompts” are vague or unclear.
TRENDPOST: The study results highlight physicians’ reflexive reliance on drugs as the go-to treatment for any given symptom.
The study also demonstrates chatbots’ value in aiding doctors by pointing them to standard treatment protocols for patients seeking help with mental or emotional turbulence.
Although chatbots can review a given patient’s unique health history, habits, and personality, they can’t read a patient’s facial expressions, body language, or tone of voice. Because that aspect of the doctor-patient relationship can be crucial in diagnosing and determining treatment for an illness, AIs should remain in the role of assistant, not doctor.