Publication:
More learning with less labeling for face recognition

Type

article

Access

restrictedAccess

Publication Status

Published

Abstract

In this paper, we propose an improved face recognition framework in which training starts with a small set of human-annotated face images and new images are then incorporated into the training set with minimal human annotation effort. To minimize the annotation effort for new images, the proposed framework combines three strategies: self-paced learning (SPL), active learning (AL), and minimum sparse reconstruction (MSR). As in the recently proposed ASPL framework [1], SPL is used to automatically annotate easy images, for which the classifiers are highly confident, and AL is used to request the help of an expert for annotating difficult, low-confidence images. In this work, we propose to use MSR to subsample the low-confidence images based on diversity, further reducing the number of images that require human annotation. The proposed framework thus improves over ASPL [1] by employing MSR to eliminate “similar” images from the set selected by AL for human annotation. Experimental results on two large-scale datasets, CASIA-WebFace-Sub and CACD, show that the proposed method, called ASPL-MSR, achieves similar face recognition performance while using significantly less expert-annotated data than the state of the art. In particular, ASPL-MSR requires manual annotation of only 36.10% and 54.10% of the data on the CACD and CASIA-WebFace-Sub datasets, respectively, to match the face recognition performance obtained when the whole training set is used with ground-truth labels. Compared with ASPL [1], the number of manually annotated samples is reduced by nearly 4% and 2% on the two datasets.
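
The pipeline described above can be sketched as one labeling round in Python. This is an illustrative sketch only: the names msr_select and aspl_msr_round, the confidence threshold, the annotation budget, and the SMRS-style self-reconstruction heuristic standing in for MSR are assumptions made here, not the paper's exact formulation.

import numpy as np
from sklearn.linear_model import Lasso

def msr_select(features, k, alpha=0.05):
    """Pick k diverse exemplars: samples that appear most often in sparse
    reconstructions of the other samples (a simplified stand-in for MSR)."""
    n = features.shape[0]
    if n <= k:
        return np.arange(n)
    usage = np.zeros(n)
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # Reconstruct sample i from the remaining samples with an L1 penalty;
        # columns of the design matrix are the other samples' feature vectors.
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(features[others].T, features[i])
        usage[others] += np.abs(lasso.coef_)
    # Samples reused most in reconstructing the rest serve as diverse exemplars.
    return np.argsort(usage)[-k:]

def aspl_msr_round(clf, X_lab, y_lab, X_pool, conf_thresh=0.95, budget=10):
    """One round: SPL pseudo-labels high-confidence pool images; the
    low-confidence rest is MSR-subsampled before being sent to the expert."""
    clf.fit(X_lab, y_lab)
    proba = clf.predict_proba(X_pool)
    conf = proba.max(axis=1)
    easy = np.where(conf >= conf_thresh)[0]   # SPL: trust the classifier here
    hard = np.where(conf < conf_thresh)[0]    # AL candidates
    pseudo_labels = clf.classes_[proba[easy].argmax(axis=1)]
    k = min(budget, len(hard))
    query = hard[msr_select(X_pool[hard], k)] if k > 0 else hard
    return easy, pseudo_labels, query         # query -> human annotator

In the full framework such rounds would repeat, with easy images added under their pseudo-labels and query images added with expert labels, until the unlabeled pool is exhausted.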

Date

2023-05

Publisher

Elsevier
