Show simple item record

dc.contributor.author: Özkanca, Yasin Serdar
dc.contributor.author: Demiroğlu, Cenk
dc.contributor.author: Besirli, A.
dc.contributor.author: Çelik, S.
dc.date.accessioned: 2020-05-18T22:08:31Z
dc.date.available: 2020-05-18T22:08:31Z
dc.date.issued: 2018
dc.identifier.isbn: 978-1-5108-7221-9
dc.identifier.issn: 2308-457X
dc.identifier.uri: http://hdl.handle.net/10679/6575
dc.identifier.uri: https://www.isca-speech.org/archive/Interspeech_2018/abstracts/2169.html
dc.description.abstract: Depression is a common mental health problem around the world, with a large burden on the economies, well-being, and hence productivity of individuals. Its early diagnosis and treatment are critical to reducing costs and even saving lives. One key aspect of achieving that goal is to use voice technologies to monitor depression remotely and relatively inexpensively with automated agents. Although there have been efforts to automatically assess depression levels from audiovisual features, the use of transcriptions along with acoustic features has emerged as a more recent research direction. Moreover, the difficulty of data collection and the limited amounts of data available for research are challenges that hamper the success of the algorithms. One of the novel contributions of this paper is to exploit databases from multiple languages for feature selection. Since a large number of features can be extracted from speech, and given the small amounts of training data available, effective feature selection is critical for success. Our proposed multi-lingual method was effective at selecting better features and significantly improved depression assessment accuracy. We also use text-based features for assessment and propose a novel strategy to fuse the text- and speech-based classifiers, which further boosted performance.
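The abstract mentions fusing text- and speech-based classifiers. The paper's actual fusion strategy is not detailed in this record; the sketch below only illustrates the general idea of score-level (late) fusion, where each modality's classifier outputs a probability and the two scores are combined with a weight. All numbers and the weight `w` are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-utterance depression probabilities from each modality
# (illustrative values only; not from the paper).
p_speech = np.array([0.80, 0.35, 0.60, 0.10])
p_text = np.array([0.70, 0.45, 0.40, 0.20])

def fuse_scores(p_speech, p_text, w=0.6):
    """Weighted average of the two modality scores; w weights the speech side."""
    return w * p_speech + (1.0 - w) * p_text

fused = fuse_scores(p_speech, p_text)       # [0.76, 0.39, 0.52, 0.14]
labels = (fused >= 0.5).astype(int)         # [1, 0, 1, 0]
```

In practice the weight would be tuned on held-out data; the paper reports that its fusion strategy further boosted performance over either modality alone.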
dc.language.iso: eng
dc.publisher: International Speech Communication Association
dc.relation.ispartof: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
dc.rights: openAccess
dc.title: Multi-lingual depression-level assessment from conversational speech using acoustic and text features
dc.type: Conference paper
dc.description.version: Publisher version
dc.publicationstatus: Published
dc.contributor.department: Özyeğin University
dc.contributor.authorID: (ORCID 0000-0002-6160-3169 & YÖK ID 144947) Demiroğlu, Cenk
dc.contributor.ozuauthor: Demiroğlu, Cenk
dc.identifier.startpage: 3398
dc.identifier.endpage: 3402
dc.identifier.wos: WOS:000465363900709
dc.identifier.doi: 10.21437/Interspeech.2018-2169
dc.subject.keywords: Depression estimation
dc.subject.keywords: Acoustic features
dc.subject.keywords: Feature selection
dc.subject.keywords: Multi-lingual applications
dc.identifier.scopus: SCOPUS:2-s2.0-85055003235
dc.contributor.ozugradstudent: Özkanca, Yasin Serdar
dc.contributor.author: Male2
dc.relation.publicationcategory: Conference Paper - International - Institutional Academic Staff and Graduate Student

