Publication:
Hybrid nearest-neighbor/cluster adaptive training for rapid speaker adaptation in statistical speech synthesis systems

dc.contributor.author: Mohammadi, Amir
dc.contributor.author: Demiroğlu, Cenk
dc.contributor.department: Electrical & Electronics Engineering
dc.contributor.ozuauthor: DEMİROĞLU, Cenk
dc.contributor.ozugradstudent: Mohammadi, Amir
dc.date.accessioned: 2016-02-15T13:38:34Z
dc.date.available: 2016-02-15T13:38:34Z
dc.date.issued: 2013
dc.description: Due to copyright restrictions, access to the full text of this article is available only via subscription.
dc.description.abstract: The statistical speech synthesis (SSS) approach has become one of the most popular methods in the speech synthesis field. An advantage of the SSS approach is its ability to adapt to a target speaker with only a couple of minutes of adaptation data. However, many applications, especially in consumer electronics, require adaptation with only a few seconds of data, which can be done using eigenvoice adaptation techniques. Although such techniques work well in speech recognition, they are known to generate perceptual artifacts in statistical speech synthesis. Here, we propose two methods that both alleviate those quality problems and improve the speaker similarity obtained with the baseline eigenvoice adaptation algorithm. Our first method uses a Bayesian approach to constrain the eigenvoice adaptation algorithm to move in realistic directions in the speaker space, which reduces artifacts. Our second method finds a reference speaker that is close to the target speaker and uses that reference speaker as the seed model in a second eigenvoice adaptation step. Both techniques performed significantly better than the baseline eigenvoice method in the subjective quality and similarity tests.
dc.description.sponsorship: European Commission ; TÜBİTAK
dc.identifier.endpage: 1081
dc.identifier.isbn: 9781629934433
dc.identifier.scopus: 2-s2.0-84906278451
dc.identifier.startpage: 1077
dc.identifier.uri: http://hdl.handle.net/10679/2375
dc.identifier.wos: 000395050000228
dc.language.iso: eng
dc.peerreviewed: yes
dc.publicationstatus: published
dc.publisher: International Speech Communication Association
dc.relation: info:eu-repo/grantAgreement/TUBITAK/1001 - Araştırma
dc.relation: info:eu-repo/grantAgreement/EC/FP7
dc.relation.ispartof: Interspeech 2013
dc.relation.publicationcategory: International
dc.rights: restrictedAccess
dc.subject.keywords: Statistical speech synthesis
dc.subject.keywords: Speaker adaptation
dc.subject.keywords: Cluster adaptive training
dc.subject.keywords: Eigenvoice adaptation
dc.title: Hybrid nearest-neighbor/cluster adaptive training for rapid speaker adaptation in statistical speech synthesis systems
dc.type: conferenceObject
dspace.entity.type: Publication
relation.isOrgUnitOfPublication: 7b58c5c4-dccc-40a3-aaf2-9b209113b763
relation.isOrgUnitOfPublication.latestForDiscovery: 7b58c5c4-dccc-40a3-aaf2-9b209113b763