grehce committed
Commit 830ca69 · verified · 1 parent: 0f1ccba

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -25,7 +25,7 @@ As widespread and prevalent as depression is, identifying and treating depressio
 ## Model Overview
 
 DAM is a clinical-grade, speech-based model designed to screen for signs of depression and anxiety using voice biomarkers.
-To the best of our knowledge, it is the first model developed explicitly for clinical-grade mental health assessment from speech without reliance on linguistic content or transcription. Earlier works have been peer-reviewed in the largest voice biomarker study by the Annals of Family Medicine, a leading U.S. Primary Care Journal [5].
+To the best of our knowledge, it is the first model developed explicitly for clinical-grade mental health assessment from speech without reliance on linguistic content or transcription. A predecessor model has been peer-reviewed in the largest voice biomarker study by the Annals of Family Medicine, a leading U.S. Primary Care Journal [5].
 The model operates exclusively on the acoustic properties of the speech signal, extracting depression- and anxiety-specific voice biomarkers rather than semantic or lexical information.
 Numerous studies [6–8] have demonstrated that paralinguistic features – such as spectral entropy, pitch variability, fundamental frequency, and related acoustic measures – exhibit strong correlations with depression and anxiety.
 Building on this body of evidence, DAM extends prior approaches by leveraging deep learning to learn fine-grained vocal biomarkers directly from the raw speech signal, yielding representations that demonstrate greater predictive power than hand-engineered paralinguistic features.
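
For context on the acoustic measures cited in the updated paragraph, the sketch below (not part of the commit above) shows one common way to compute fundamental frequency and spectral entropy from a raw waveform. It assumes `librosa` and `scipy` are installed and uses `"speech.wav"` as a placeholder path; it does not reflect DAM's internal, learned feature pipeline.

```python
# Minimal sketch of two paralinguistic measures mentioned in the README:
# fundamental frequency (pitch) and spectral entropy.
# Assumptions: librosa and scipy are available; "speech.wav" is a placeholder file.
import numpy as np
import librosa
from scipy.stats import entropy

# Load a mono speech recording at a typical speech sampling rate.
y, sr = librosa.load("speech.wav", sr=16000)

# Fundamental frequency (F0) track via probabilistic YIN; unvoiced frames are NaN,
# so nan-aware statistics are used below.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
f0_mean = np.nanmean(f0)   # average pitch over voiced frames
f0_std = np.nanstd(f0)     # pitch variability

# Spectral entropy per frame: entropy of the normalized magnitude spectrum.
S = np.abs(librosa.stft(y, n_fft=512, hop_length=256))
p = S / (S.sum(axis=0, keepdims=True) + 1e-12)  # normalize each frame to a distribution
spectral_entropy = entropy(p, base=2, axis=0)   # bits per frame

print(f"F0 mean: {f0_mean:.1f} Hz, F0 std: {f0_std:.1f} Hz")
print(f"Mean spectral entropy: {spectral_entropy.mean():.2f} bits")
```

Hand-engineered summaries like these are what the README contrasts with DAM's learned representations, which are extracted directly from the raw speech signal.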