Browsing by Author "Ramos Cooper, Solange"
Item
Domain adaptation for unconstrained ear recognition with convolutional neural networks (2022)
Ramos Cooper, Solange; Cámara Chávez, Guillermo
Automatic recognition using ear images is an active area of interest within the biometrics community. Human ears are a stable and reliable source of information since they are not affected by facial expressions, do not change dramatically over time, are less prone to injuries, and remain fully visible in mask-wearing scenarios. In addition, ear images can be passively captured from a distance, which is convenient for surveillance and security applications. At the same time, deep learning-based methods have proven to be powerful techniques for unconstrained recognition. However, to truly benefit from deep learning techniques, it is necessary to have a large and varied set of samples to train and test networks. In this work, we built a new dataset using the VGGFace dataset, fine-tuned pre-trained deep models, analyzed their sensitivity to different covariates in the data, and explored score-level fusion to improve overall recognition performance. Open-set and closed-set experiments were performed using the proposed dataset and the challenging UERC dataset. Results show a significant improvement of around 9% when using a pre-trained face model over a general image recognition model; in addition, we achieve 4% better performance when fusing scores from both models.

Item
VGGFace-Ear: an extended dataset for unconstrained ear recognition (2022)
Ramos Cooper, Solange; Gómez Nieto, Erick Mauricio; Cámara Chávez, Guillermo
Recognition using ear images has been an active field of research in recent years. Besides faces and fingerprints, ears have a unique structure that can identify people and can be captured from a distance, contactlessly, and without the subject's cooperation. They therefore represent an appealing choice for building surveillance, forensic, and security applications.
However, many techniques used in those applications, such as convolutional neural networks (CNNs), usually demand large-scale datasets for training. This research work introduces a new dataset of ear images taken under uncontrolled conditions that presents high inter-class and intra-class variability. We built this dataset using an existing face dataset called VGGFace, which gathers more than 3.3 million images. In addition, we performed ear recognition using transfer learning with CNNs pretrained on image and face recognition. Finally, we performed two experiments on two unconstrained datasets and reported our results using rank-based metrics.
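Both abstracts rely on score-level fusion and rank-based evaluation, two standard components of a recognition pipeline. The following is a minimal sketch of how these pieces typically fit together; the function names, the min-max normalization choice, the equal fusion weight, and the toy score matrices are all illustrative assumptions, not details taken from the papers themselves.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale each probe's (row's) similarity scores to [0, 1]."""
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_scores(scores_a, scores_b, weight=0.5):
    """Weighted-sum score-level fusion of two models' score matrices."""
    return (weight * min_max_normalize(scores_a)
            + (1.0 - weight) * min_max_normalize(scores_b))

def rank_k_accuracy(scores, probe_labels, gallery_labels, k=1):
    """Fraction of probes whose true identity is among the top-k gallery matches."""
    # Sort gallery indices by descending similarity and keep the top k.
    order = np.argsort(-scores, axis=1)[:, :k]
    hits = [probe_labels[i] in gallery_labels[order[i]]
            for i in range(len(probe_labels))]
    return float(np.mean(hits))

# Toy example: 3 probes scored against a 4-identity gallery by two
# hypothetical models (e.g., a face-pretrained and a generic CNN).
gallery_labels = np.array([0, 1, 2, 3])
probe_labels = np.array([2, 0, 3])
scores_face = np.array([[0.1, 0.4, 0.9, 0.2],
                        [0.8, 0.3, 0.2, 0.1],
                        [0.2, 0.5, 0.3, 0.7]])
scores_generic = np.array([[0.2, 0.3, 0.7, 0.1],
                           [0.6, 0.4, 0.3, 0.2],
                           [0.1, 0.2, 0.4, 0.9]])

fused = fuse_scores(scores_face, scores_generic)
print(rank_k_accuracy(fused, probe_labels, gallery_labels, k=1))  # → 1.0
```

Rank-1 accuracy corresponds to a closed-set identification scenario where the probe identity is always present in the gallery; open-set protocols additionally require a threshold to reject probes whose best fused score is too low.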