He received the P.E. degree in Electronics Engineering from Universidad Mayor in December 2004, the M.S. degree in Electrical Engineering from Universidad de Chile in 2012, and the Ph.D. degree from the Department of Electrical Engineering, Universidad de Chile, in 2015. His main interests are pattern recognition and machine learning applied to face and iris recognition, soft biometrics, gender classification, feature fusion, and feature selection.


1.- C. Perez, J. Tapia, P. Estévez, and C. Held, “Gender classification from face images using mutual information and feature fusion,”

International Journal of Optomechatronics, vol. 6, no. 1, pp. 92–119, 2012.

In this article we report a new method for gender classification from frontal face images using feature selection based on mutual information and fusion of features extracted from intensity, shape, texture, and from three different spatial scales. We compare the results of three different mutual information measures: minimum redundancy and maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), and conditional mutual information feature selection (CMIFS). We also show that by fusing features extracted from six different methods we significantly improve the gender classification results relative to those previously published, yielding a gender classification rate of 99.13% on the FERET database.
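To illustrate the kind of mutual-information-based selection the paper builds on, here is a minimal numpy sketch of greedy mRMR over discrete features. All names and data below are hypothetical, and the published method differs in detail (e.g., feature discretization and the exact MI estimators used).

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (in nats) between two discrete arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def mrmr_select(X, y, k):
    """Greedy mRMR: pick features maximizing relevance I(f; y)
    minus mean redundancy I(f; f_selected)."""
    n_feat = X.shape[1]
    relevance = np.array([mutual_info(X[:, j], y) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]          # most relevant feature first
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# synthetic demo: feature 0 carries the label, the others are noise
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
feats = np.column_stack([labels,
                         rng.integers(0, 2, 200),
                         rng.integers(0, 2, 200)])
picked = mrmr_select(feats, labels, 2)
```

NMIFS and CMIFS follow the same greedy loop but normalize or condition the redundancy term differently.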

Link: Download Paper

2.- Tapia, J.E.; Perez, C.A., “Gender Classification Based on Fusion of Different Spatial Scale Features Selected by Mutual Information From Histogram of LBP, Intensity, and Shape,”

IEEE Transactions on Information Forensics and Security, vol. 8, no. 3, pp. 488–499, March 2013.

In this paper, we extend the use of feature selection based on mutual information and feature fusion to improve gender classification of face images. We compare the results of fusing three groups of features, three spatial scales, and four different mutual information measures for selecting features, and show improved results from fusing LBP features with different radii and spatial scales together with mutual-information-based feature selection. As measures of mutual information we use minimum redundancy and maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), conditional mutual information feature selection (CMIFS), and conditional mutual information maximization (CMIM). We tested the results on four databases: FERET and UND under controlled conditions, LFW under unconstrained scenarios, and AR for occlusions. Selection of features together with fusion of LBP features significantly improved gender classification accuracy compared to previously published results. We also show a significant reduction in processing time because of the feature selection, which makes real-time applications of gender classification feasible.
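The multi-scale LBP fusion described above can be sketched in a few lines of numpy. This is a simplification for illustration: the neighbours are taken on an axis-aligned square ring, whereas the published method samples a true circle with interpolation; image size and radii are arbitrary.

```python
import numpy as np

def uniform_lbp_hist(img, radius=1):
    """Normalized histogram of 8-neighbour uniform LBP codes (59 bins)."""
    offs = [(-radius, 0), (-radius, radius), (0, radius), (radius, radius),
            (radius, 0), (radius, -radius), (0, -radius), (-radius, -radius)]
    h, w = img.shape
    r = radius
    center = img[r:h - r, r:w - r]
    code = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[r + dy:h - r + dy, r + dx:w - r + dx]
        code |= (neigh >= center).astype(np.int32) << bit

    # 'uniform' mapping: patterns with at most two circular 0/1 transitions
    # each get their own bin (58 of them for P=8); all others share bin 58
    def transitions(p):
        bits = [(p >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    lut = np.full(256, 58, dtype=np.int32)
    nxt = 0
    for p in range(256):
        if transitions(p) <= 2:
            lut[p] = nxt
            nxt += 1
    hist = np.bincount(lut[code].ravel(), minlength=59).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(32, 32))
# spatial-scale fusion: concatenate histograms computed at two radii,
# then feed the fused vector to feature selection and a classifier
fused = np.concatenate([uniform_lbp_hist(face, r) for r in (1, 2)])
```

In the paper the fused histogram vector is what the mutual information measures then prune before classification.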

Link: Download Paper

3.- Tapia, J.E.; Perez, C.A., “Gender Classification using One Half Face and Feature Selection based on Mutual Information,” IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2013, pp. 3282–3287, 13-16 Oct. 2013.

In this paper, we explore using the symmetry of the face to represent and determine gender from only half of the face. We first divide each face into two halves, and then select the best features separately from the left and right sides. The method uses four different mutual information measures to select features: minimum redundancy and maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), conditional mutual information feature selection (CMIFS), and conditional mutual information maximization (CMIM).


Paper: Download Paper

Ppt: SMC-2013-JTF

4.- Juan E. Tapia; Perez, C.A.; Bowyer, K.W., “Gender Classification from Iris Images using Fusion of Uniform Local Binary Patterns,” Computer Vision – ECCV 2014 Workshops, Zurich, Switzerland, September 6-7 and 12, 2014, Proceedings, Part II.

This paper is concerned with analyzing iris texture in order to determine “soft biometrics”: attributes of a person rather than identity. In particular, this paper predicts the gender of a person based on features of the iris texture. Previous researchers have explored various approaches for predicting gender from iris texture. We explore different implementations of Local Binary Patterns on the iris image using the masked information. Uniform LBP significantly improves the accuracy of gender prediction relative to using the whole iris image. Using a subject-disjoint test set, we achieve over 91% correct gender prediction from the texture of the iris. To our knowledge, this is the highest accuracy yet achieved for predicting gender from iris texture. Analyzing the results, we found that uniform LBP with concatenated histograms best exploits the texture of the eye.


Paper: Download Paper.

Presentation: Download

5.- J. Tapia, C. Aravena, D. Schulz, C. Perez, “Gender Classification on Face Images under Challenging Conditions of Occlusions, Low Quality and Unconstrained Environments,” paper accepted at the ICCV workshop Forensic Application of Computer Vision (FACV), December 12, Santiago, Chile, 2015.

Interest in gender classification from faces has been growing in both academia and industry because of its potential use in various applications such as human-computer interaction, marketing, biometrics, and automated indexing of large face databases. Recently, NIST made public an empirical evaluation of facial gender classification algorithms, using datasets of facial images taken in controlled environments for visas and law enforcement mugshots. Gender classification was evaluated demographically and by the number of image samples per subject, showing a high variability of performance depending on the provider of the technology and the age and ethnicity of the subject. The study also confirms that the quality of “in the wild” images has a negative impact on algorithm certainty of gender classification. Therefore, there is a need for evaluation and improvement of face-based gender classification algorithms under more challenging conditions in order to apply these methods to real-world applications.
In this work, we present preliminary gender classification results obtained on several databases in order to assess performance over a wide range of operating conditions.


Download PPT: Download

6.- Juan E. Tapia; Perez, C.A.; Bowyer, K.W., “Gender Classification from the Same Iris Code Used for Recognition,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 8, pp. 1760-1770, Aug. 2016.

Previous researchers have explored various approaches for predicting the gender of a person based on features of the iris texture. This paper is the first to predict gender directly from the same binary iris code that could be used for recognition. We found that information for gender prediction is distributed across the iris, rather than localized in particular concentric bands. We also found that using selected features representing a subset of the iris region achieves better accuracy than using features representing the whole iris region.
We used measures of mutual information to guide the selection of bits from the iris code to use as features in gender prediction. Using this approach, with a person-disjoint training and testing evaluation, we were able to achieve 89% correct gender prediction using the fusion of the best features of iris code from the left and the right eyes.
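A toy sketch of this idea, reduced to ranking individual iris-code bits by mutual information with the gender label and fusing the selected bits from both eyes, could look like the following. The data here is synthetic and the planted informative bits are purely illustrative; the actual iris codes, bit counts, and MI measures in the paper differ.

```python
import numpy as np

def bit_mi(bits, y):
    """Mutual information (in nats) between one binary code bit and a binary label."""
    mi = 0.0
    for b in (0, 1):
        for c in (0, 1):
            p = np.mean((bits == b) & (y == c))
            if p > 0:
                mi += p * np.log(p / (np.mean(bits == b) * np.mean(y == c)))
    return mi

def select_bits(code, y, k):
    """Rank every bit position of the binary code by MI with the label,
    keep the k most informative positions."""
    scores = np.array([bit_mi(code[:, j], y) for j in range(code.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)                 # gender labels
left = rng.integers(0, 2, (300, 64))        # toy left-eye iris codes
left[:, 5] = y                              # plant one informative bit
right = rng.integers(0, 2, (300, 64))       # toy right-eye iris codes
right[:, 11] = 1 - y                        # plant one informative bit
# fuse the best bits from both eyes into one feature vector per subject
fused = np.hstack([left[:, select_bits(left, y, 8)],
                   right[:, select_bits(right, y, 8)]])
```

The fused bit vector would then be fed to an ordinary classifier, which is what makes the "same code for recognition and gender" result notable: no extra texture features are extracted.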

Link: Download

7.- Grotte, M.; Virani, S.; Holzinger, M.; Register, A.; Perez, C.; Tapia, J., “All-Sky Image Fusion for a Synoptic Survey Telescope in Arctic and Antarctic Domains,” Advanced Maui Optical and Space Surveillance Technologies Conference, 2016.

Near-Earth object (NEO) detection, transient astronomical event detection, and Space Situational Awareness (SSA) support are all provided by wide-field, high-cadence synoptic telescope surveys. Many such exquisite and monolithic synoptic surveys achieve impressive performance and are certainly changing these application areas. In the past 15 years, Raven-class telescopes have made a clear case for the utility of commercial-off-the-shelf systems in SSA. This paper documents the initial efforts and next steps for the Omnidirectional Space Situational Awareness (OmniSSA) array, a wide-field-of-view synoptic survey system that leverages the Raven-class telescope paradigm. The approach utilizes multiple overlapping wide-field-of-view sensors with post-processing super-resolution (SR) and image stacking techniques to generate synthetic images equivalent to larger wide-field-of-view systems. The synthetic array offers the potential to utilize a plurality of components that are individually low-cost and commercial off-the-shelf. A brief survey of synoptic survey systems is presented, followed by a description of the current hardware implementation of the OmniSSA array and preliminary out-of-the-box results for baseline OmniSSA camera SR and image stacking routines.

Link: Download

8.- Juan E. Tapia, “Gender Classification from Iris Images,” Chapter 8 in the IET book “Iris and Periocular Biometrics,” edited by Christian Rathgeb and Christoph Busch, Darmstadt University, Germany, and Gjøvik University College, Norway.

This book covers iris and periocular recognition, a prominent field in Biometrics Recognition and Identity Science in the areas of security, computing and communications research and technologies. Selected topics cover a wide spectrum of current research focusing on periocular recognition to augment the biometric performance of the iris in unconstrained environments, paving the way for multi-spectral biometric recognition on mobile devices. This text is divided into three parts to cover the most recent research and future directions as well as security related topics.

Link: Download

9.- Juan E. Tapia and Carlos Aravena, “Gender Classification from NIR Iris Images using Deep Learning,” chapter accepted for publication in Deep Learning for Biometrics, Springer, 2017, edited by Bir Bhanu and Ajay Kumar.

Link: Download

10.- Juan Tapia and Ignacio Viedma, “Gender Classification From Multispectral Periocular Images,” International Joint Conference on Biometrics, IJCB 2017, October 1-4, Denver, Colorado.

Gender classification from multispectral periocular and iris images is a new topic in soft-biometrics research. The features extracted from RGB images and near-infrared (NIR) images show complementary information, independent of the spectrum of the images. This paper shows that we can fuse this information to improve the accuracy of gender classification. Most gender classification methods reported in the literature have used images from face databases and all the features for classification purposes. Experimental results suggest that: (a) features extracted at different scales can perform better than a single feature at a single scale; (b) periocular images perform better than iris images in both VIS and NIR; (c) the fusion of features from different spectral images (NIR and VIS) improves accuracy; (d) feature selection applied to NIR and VIS selects the relevant features; and (e) our accuracy of 90% is competitive with the state of the art.

Link: PPT Presentation

Link: Paper

11.- Juan Tapia and Carlos Aravena, “Gender Classification from Periocular Images using Fusion of CNN Models,” paper accepted for publication at ISBA 2018: International Conference on Identity, Security and Behavior Analysis, January 10-12, 2018, Singapore.

Gender classification is a challenging topic. Previous work has trained separate classifiers on the normalized and encoded representations of the left and right eyes, or has fused the images rather than the trained models. It is therefore relevant to train a Convolutional Neural Network (CNN) for each eye and then concatenate the best models instead of the images. The network benefits from this shift and becomes more robust to occlusion and low-resolution degradation. This preliminary work shows that the fusion (merging) of CNN models outperforms a single CNN model trained on the left and right sets of images.
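Model-level fusion of the two eye branches can be illustrated schematically in numpy. Here each "per-eye CNN" is stood in for by a single random linear layer with ReLU, and the two embeddings are concatenated before a shared sigmoid classifier; all shapes and weights are hypothetical placeholders for the trained networks in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(img, W):
    """Stand-in for a per-eye CNN: one linear layer + ReLU on the flattened image."""
    return np.maximum(W @ img.ravel(), 0.0)

# hypothetical pretrained weights for the left- and right-eye branches
W_left = rng.standard_normal((16, 24 * 24))
W_right = rng.standard_normal((16, 24 * 24))

left_img = rng.standard_normal((24, 24))    # toy periocular images
right_img = rng.standard_normal((24, 24))

# model-level fusion: concatenate the two embeddings (not the raw images),
# then apply one shared classifier on the fused vector
fused = np.concatenate([embed(left_img, W_left), embed(right_img, W_right)])
w_cls = rng.standard_normal(32)
p_male = 1.0 / (1.0 + np.exp(-(w_cls @ fused)))   # sigmoid gender score
```

The design point is that fusion happens in feature space after each branch has specialized on its own eye, rather than stitching the two images together before a single network.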


