Volume 19 No 11 (2021)
A Deep Learning Approach to Detect Depression from Facial Expressions and Speech Patterns
Sneha Kumari, Shweta Tiwari
Abstract
Depression is a prevalent mental health disorder affecting millions of people worldwide. Early detection and intervention are crucial for effective treatment and improved patient outcomes. This study presents a novel deep learning approach to detecting depression by analyzing facial expressions and speech patterns. We developed a multimodal model that combines a convolutional neural network (CNN) with a long short-term memory (LSTM) network to fuse visual and audio features and classify individuals as depressed or non-depressed. The model was trained and evaluated on a dataset of 1,000 participants, achieving an accuracy of 89.5% and an F1-score of 0.88. Our findings demonstrate the potential of deep learning techniques for automated depression screening and highlight the importance of multimodal analysis in mental health assessment.
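The abstract describes a multimodal CNN–LSTM model that fuses visual and audio features, but the article page does not include implementation details. The PyTorch sketch below illustrates one plausible late-fusion architecture of that kind; the input shapes, layer sizes, MFCC-style audio features, and concatenation-based fusion are illustrative assumptions, not the authors' published model.

```python
# Hypothetical sketch of a multimodal CNN + LSTM depression classifier.
# Layer sizes, input shapes, and the late-fusion strategy are assumptions
# for illustration; the paper does not publish its architecture.
import torch
import torch.nn as nn


class MultimodalDepressionNet(nn.Module):
    def __init__(self, audio_feat_dim=40, hidden_dim=128):
        super().__init__()
        # Visual branch: small CNN over per-frame face crops (assumed 3x64x64).
        self.visual_cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (batch*frames, 32)
        )
        # Temporal model over the per-frame CNN features.
        self.visual_lstm = nn.LSTM(32, hidden_dim, batch_first=True)
        # Audio branch: LSTM over frame-level speech features (e.g. MFCCs, assumed).
        self.audio_lstm = nn.LSTM(audio_feat_dim, hidden_dim, batch_first=True)
        # Late fusion of the two modality embeddings, then a binary classifier.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 2),                                 # depressed vs. non-depressed
        )

    def forward(self, frames, audio):
        # frames: (batch, T_v, 3, 64, 64); audio: (batch, T_a, audio_feat_dim)
        b, t = frames.shape[:2]
        v = self.visual_cnn(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
        _, (v_h, _) = self.visual_lstm(v)                     # last hidden state per clip
        _, (a_h, _) = self.audio_lstm(audio)
        fused = torch.cat([v_h[-1], a_h[-1]], dim=1)          # concatenate modality embeddings
        return self.classifier(fused)                         # class logits


if __name__ == "__main__":
    model = MultimodalDepressionNet()
    frames = torch.randn(4, 30, 3, 64, 64)                    # 4 clips, 30 video frames each
    audio = torch.randn(4, 200, 40)                           # 4 clips, 200 audio frames of 40 features
    print(model(frames, audio).shape)                         # torch.Size([4, 2])
```

In this sketch the two modalities are encoded separately and concatenated before a shared classification head, one common way to combine visual and audio streams; other fusion points (early feature concatenation, attention-based fusion) would also be consistent with the abstract's description.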
Keywords
Depression detection, deep learning, facial expressions, speech patterns, convolutional neural networks, long short-term memory networks
Copyright
Copyright © Neuroquantology

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Articles published in Neuroquantology are available under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). Authors retain copyright in their work and grant the journal the right of first publication under CC BY-NC-ND 4.0. Users have the right to read, download, copy, distribute, print, search, or link to the full texts of articles in this journal, and to use them for any other lawful purpose.