Algorithms are as good as human raters at identifying signs of mental health issues in texts

UW Medicine researchers have found that algorithms are as good as trained human raters at identifying red-flag language in text messages from people with severe mental illness. The finding opens a promising line of study that could help address gaps in psychiatric training and the scarcity of mental health care.

The results were published in late September in the journal Psychiatric Services.

Texting is increasingly becoming a part of mental health care and assessment, but these remote psychiatric interventions may lack the emotional reference points that therapists use to navigate in-person conversations with patients.

The research team, based in the Department of Psychiatry and Behavioral Sciences, used natural language processing for the first time to help detect and identify text messages that reflect “cognitive distortions” which might escape an undertrained or overworked clinician. The research could also eventually help more patients find care.

“When we meet people in person, we have all these different contexts. We have visual cues, we have auditory cues, things that don’t come out in a text message. These are things we are trained to rely on. The hope here is that technology can provide an additional tool for clinicians to expand the information they rely on to make clinical decisions.”

Justin Tauscher, senior author of the paper and acting assistant professor, University of Washington School of Medicine

The study looked at thousands of spontaneous, one-off text messages exchanged between 39 people with severe mental illness and a history of hospitalization and their mental health care providers. Trained human raters scored the texts for several cognitive distortions, as they typically would in the context of patient care, looking for subtle or overt language that suggests the patient is overgeneralizing, catastrophizing, or jumping to conclusions, all of which can be signs of trouble.

The researchers also programmed computers to perform the same text-scoring task and found that humans and AI scored similarly in most of the categories studied.
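The article does not detail the models used, but a minimal sketch of the general approach might look like the following: a supervised text classifier trained on messages that human raters have already labeled. The messages, labels, and the single distortion category here are invented for illustration, and the study's actual features and models may differ.

    # Minimal sketch (not the authors' implementation): a supervised text
    # classifier for one cognitive-distortion category, trained on
    # rater-labeled messages. All messages and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: 1 = rater flagged catastrophizing, 0 = did not.
    messages = [
        "I missed one appointment, so my whole recovery is ruined.",
        "If this medication doesn't work, nothing ever will.",
        "I had a rough morning, but the afternoon went better.",
        "Can we move my session to Thursday this week?",
    ]
    labels = [1, 1, 0, 0]

    # TF-IDF word features feeding a logistic-regression classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(messages, labels)

    # Score a new message; a separate classifier could be trained for each
    # category (overgeneralizing, jumping to conclusions, and so on).
    new_message = ["Everything always falls apart for me."]
    print(model.predict_proba(new_message)[0][1])  # probability of the flag

Agreement between such a model and the human raters could then be quantified with standard inter-rater statistics, such as Cohen's kappa, treating the model as if it were another rater.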

“I think being able to have systems that can help support clinical decision-making is extremely relevant and potentially impactful for people in the field who sometimes lack access to training, sometimes don’t have access to supervision or sometimes also are just tired, overworked and burnt out and have a hard time staying present in every interaction they have,” said Tauscher, who came to the research after a decade in a clinical setting.

Support from clinicians would be an immediate benefit, but the researchers also see future applications that work alongside a wearable fitness band or phone monitoring system. Dror Ben-Zeev, director of the UW Behavioral Research in Technology and Engineering Center and co-author of the paper, said technology could potentially provide real-time feedback that alerts a therapist to impending problems.

“The same way you get a blood oxygen level, heart rate, and other inputs,” Ben-Zeev said, “we might get a note that the patient is jumping to conclusions and catastrophizing. Just the ability to draw attention to a thought pattern is something we envision in the future. People will have these feedback loops with their technology where they get insight into themselves.”
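As a hypothetical illustration of that kind of feedback loop, a monitoring system could score each incoming message and notify the care team when a distortion score crosses a threshold. The scoring function, cue words, threshold, and alert mechanism below are invented stand-ins, not anything described in the study.

    # Hypothetical feedback loop: score incoming messages and flag those
    # that cross a threshold for clinician review. score_distortion is a
    # placeholder for a trained classifier like the one sketched above.
    def score_distortion(message: str) -> float:
        # Invented cue words standing in for a real model's output.
        cues = ("always", "never", "ruined", "nothing")
        return min(1.0, 0.3 * sum(cue in message.lower() for cue in cues))

    ALERT_THRESHOLD = 0.6

    def review_incoming(message: str) -> None:
        score = score_distortion(message)
        if score >= ALERT_THRESHOLD:
            # A deployed system might page a clinician or queue the
            # message for review; here we just print the flag.
            print(f"Flag for review (score={score:.2f}): {message}")

    review_incoming("Everything always falls apart, nothing ever helps.")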

This work was supported by the Garvey Institute for Brain Health Solutions at the University of Washington School of Medicine, the National Institute of Mental Health (R56-MH-109554), and the National Library of Medicine (T15-LM-007442).

Source: University of Washington School of Medicine

Journal reference:

Tauscher, J. S., et al. (2022). Automated detection of cognitive distortions in text exchanges between clinicians and people with severe mental illness. Psychiatric Services. https://doi.org/10.1176/appi.ps.202100692
