Publication:
More learning with less labeling for face recognition

dc.contributor.author: Büyüktaş, Barış
dc.contributor.author: Eroğlu Erdem, Ç.
dc.contributor.author: Erdem, Tanju
dc.contributor.department: Computer Science
dc.contributor.ozuauthor: ERDEM, Arif Tanju
dc.contributor.ozugradstudent: Büyüktaş, Barış
dc.date.accessioned: 2023-09-18T10:33:37Z
dc.date.available: 2023-09-18T10:33:37Z
dc.date.issued: 2023-05
dc.description.abstract [en_US]: In this paper, we propose an improved face recognition framework in which training starts with a small set of human-annotated face images, and new images are then incorporated into the training set with minimal human annotation effort. To minimize the annotation effort for new images, the proposed framework combines three strategies, namely self-paced learning (SPL), active learning (AL), and minimum sparse reconstruction (MSR). As in the recently proposed ASPL framework [1], SPL is used for automatic annotation of easy images, for which the classifiers are highly confident, and AL is used to request the help of an expert for annotating difficult, low-confidence images. In this work, we propose to use MSR to subsample the low-confidence images based on diversity, in order to further reduce the number of images that require human annotation. Thus, the proposed framework improves on ASPL [1] by employing MSR to eliminate “similar” images from the set selected by AL for human annotation. Experimental results on two large-scale datasets, namely CASIA-WebFace-Sub and CACD, show that the proposed method, called ASPL-MSR, achieves similar face recognition performance while using significantly less expert-annotated data than the state-of-the-art. In particular, ASPL-MSR requires manual annotation of only 36.10% and 54.10% of the data in the CACD and CASIA-WebFace-Sub datasets, respectively, to achieve the same face recognition performance as when the whole training data is used with ground-truth labels. The experimental results indicate that the number of manually annotated samples has been reduced by nearly 4% and 2% on the two datasets as compared to ASPL [1].
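The diversity-based subsampling of low-confidence images described in the abstract can be sketched roughly as follows. This is an illustrative greedy least-squares approximation, not the paper's exact MSR formulation; the function name `diverse_subsample` and the selection heuristic are assumptions for illustration only.

```python
import numpy as np

def diverse_subsample(features, k):
    """Greedy sketch of diversity-based subsampling: repeatedly pick the
    sample worst reconstructed from the current selection, so that
    near-duplicate ("similar") images are skipped and only diverse
    samples are sent to the human annotator.
    NOTE: hypothetical stand-in for the paper's MSR step."""
    # Normalize rows so reconstruction error reflects direction, not scale.
    X = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    selected = [0]  # start from an arbitrary sample
    while len(selected) < k:
        B = X[selected]  # current "dictionary" of chosen samples
        # Least-squares reconstruction of every sample from the selection.
        coef, *_ = np.linalg.lstsq(B.T, X.T, rcond=None)
        recon = (B.T @ coef).T
        err = np.linalg.norm(X - recon, axis=1)
        err[selected] = -np.inf  # never re-pick an already chosen sample
        selected.append(int(np.argmax(err)))
    return selected
```

With features from three well-separated clusters, the greedy rule picks one representative per cluster rather than three near-duplicates, which is the intended effect of the MSR step: fewer images forwarded for manual annotation at similar coverage.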
dc.description.sponsorship: TÜBİTAK
dc.identifier.doi [en_US]: 10.1016/j.dsp.2023.103915
dc.identifier.issn: 1051-2004
dc.identifier.scopus: 2-s2.0-85149439861
dc.identifier.uri: http://hdl.handle.net/10679/8858
dc.identifier.uri: https://doi.org/10.1016/j.dsp.2023.103915
dc.identifier.volume [en_US]: 136
dc.identifier.wos: 000971259200001
dc.language.iso [en_US]: eng
dc.peerreviewed [en_US]: yes
dc.publicationstatus [en_US]: Published
dc.publisher [en_US]: Elsevier
dc.relation: info:eu-repo/grantAgreement/TUBITAK/1001 - Araştırma/116E088
dc.relation.ispartof: Digital Signal Processing: A Review Journal
dc.relation.publicationcategory: International Refereed Journal
dc.rights: restrictedAccess
dc.subject.keywords [en_US]: Face recognition
dc.subject.keywords [en_US]: Active learning
dc.subject.keywords [en_US]: Self-paced learning
dc.subject.keywords [en_US]: Minimum sparse reconstruction
dc.subject.keywords [en_US]: Deep learning
dc.title [en_US]: More learning with less labeling for face recognition
dc.type [en_US]: article
dspace.entity.type: Publication
relation.isOrgUnitOfPublication: 85662e71-2a61-492a-b407-df4d38ab90d7
relation.isOrgUnitOfPublication.latestForDiscovery: 85662e71-2a61-492a-b407-df4d38ab90d7

Files

License bundle

Name: license.txt
Size: 1.45 KB
Format:
Description: Item-specific license agreed upon to submission