Medical professionals diagnose depression by interpreting individuals' responses to a variety of questions, probing lifestyle changes and ongoing thoughts. Like these professionals, an effective automated agent must understand that responses to queries have varying prognostic value. In this study we demonstrate an automated depression-detection algorithm that models the interaction between an individual and an agent, learning from sequences of questions and answers without the need to explicitly model the topic of the content. We used data from 142 individuals undergoing depression screening, and modeled the interactions with audio and text features in a Long Short-Term Memory (LSTM) neural network to detect depression. Our results were comparable to those of methods that explicitly model the topics of the questions and answers, which suggests that depression can be detected through sequential modeling of an interaction, with minimal information on the structure of the interview.
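The approach described above can be sketched as a recurrent model that consumes one feature vector per interview response and summarizes the whole interaction in its final hidden state. The sketch below is a minimal, self-contained illustration of that idea with an untrained LSTM cell in NumPy; all dimensions, weights, and the input data are hypothetical placeholders, not the paper's actual architecture or features.

```python
import numpy as np

# Hypothetical dimensions; the paper's real audio/text feature sizes differ.
AUDIO_DIM, TEXT_DIM, HIDDEN = 4, 3, 8
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single-layer LSTM cell run over a question/answer sequence."""

    def __init__(self, in_dim, hid):
        k = 1.0 / np.sqrt(hid)
        # One stacked weight matrix for the four gates (input, forget, cell, output).
        self.W = rng.uniform(-k, k, (4 * hid, in_dim + hid))
        self.b = np.zeros(4 * hid)
        self.hid = hid

    def forward(self, seq):
        h = np.zeros(self.hid)
        c = np.zeros(self.hid)
        for x in seq:  # one time step per interview response
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)   # update memory cell
            h = o * np.tanh(c)           # update hidden state
        return h  # final state summarizes the entire interaction

lstm = TinyLSTM(AUDIO_DIM + TEXT_DIM, HIDDEN)
w_out = rng.normal(0, 0.1, HIDDEN)

# Toy "interview": 5 responses, each a concatenated audio+text feature vector.
responses = rng.normal(size=(5, AUDIO_DIM + TEXT_DIM))
score = sigmoid(w_out @ lstm.forward(responses))
print(f"depression probability: {score:.3f}")
```

Because the sequence is read in order with no topic labels attached, the model only sees how features evolve across responses, which is the minimal-structure setting the abstract describes.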

Authors: Tuka Alhanai, Mohammad Ghassemi, and James Glass

Paper: http://groups.csail.mit.edu/sls/publications/2018/Alhanai_Interspeech-2018.pdf
