Paradigms of Facial Recognition Systems

No previous work has been reported on any aspect of similarity recognition in images of familiar faces. It is nevertheless worth reviewing the research on face recognition, since many of the difficulties in our problem also arise in related problems. Face recognition systems have been built according to two distinct paradigms. In the first paradigm, researchers first extract facial features such as the eyes and nose, and then apply clustering or classification algorithms for recognition. The second paradigm treats the complete face image as an input vector and bases the analysis and recognition on algebraic transformations of the input space. The current research adopts both paradigms for family-resemblance recognition.

Face recognition algorithms generally comprise three phases: feature extraction (reducing the dimensionality of the input images), learning (clustering or classification), and recognition. The main difference among the methods proposed over the last three decades lies in the feature extraction stage. Considerable effort has been devoted to feature extraction, and algorithms from the principal component analysis (PCA) family are the most popular choice for reducing the dimensionality of the problem space. Turk and Pentland [] were the first to use PCA for face recognition. The feature vectors for PCA are the vectorized face images. PCA rotates these vectors from a large, highly correlated space into a small subspace whose basis vectors correspond to the directions of maximum variance in the original image space. This subspace is called the eigenface space; information of little use for recognition, such as illumination variation and noise, is truncated ...... half of the paper ...... the first to use the wavelet transform with Haar filters to extract 16 sub-images from the original image. The mean and standard deviation of each sub-image form the feature vector.
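The eigenface construction described above can be sketched as follows. This is a minimal illustration, not the implementation of any cited work; the function names and the choice to eigendecompose the small Gram matrix (standard when the number of images is far smaller than the number of pixels) are ours.

```python
import numpy as np

def eigenfaces(images, k):
    """Top-k eigenfaces (PCA basis) from a stack of face images.

    images: array of shape (n, h, w); returns (mean, basis) where
    basis has shape (h*w, k) with unit-norm columns.
    """
    n = images.shape[0]
    X = images.reshape(n, -1).astype(float)  # vectorize each face
    mean = X.mean(axis=0)
    Xc = X - mean                            # center the data
    # With n << h*w, eigendecompose the small n x n Gram matrix
    # instead of the huge (h*w) x (h*w) covariance matrix.
    vals, vecs = np.linalg.eigh(Xc @ Xc.T)   # eigenvalues ascending
    top = np.argsort(vals)[::-1][:k]         # directions of max variance
    basis = Xc.T @ vecs[:, top]              # map back to image space
    basis /= np.linalg.norm(basis, axis=0)   # unit-norm eigenfaces
    return mean, basis

def project(image, mean, basis):
    """Feature vector: coordinates of a face in the eigenface subspace."""
    return basis.T @ (image.reshape(-1).astype(float) - mean)
```

Recognition then reduces to comparing the low-dimensional projections of a probe face against those of the gallery, rather than comparing raw pixel vectors.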
In the recognition stage, the Bhattacharyya distance is used to measure the distance between the feature vector of the input image and the feature vectors in the obtained subspace. Kinage and Bhirud [Kin09] extend this work by combining the two-dimensional wavelet transform with 2DPCA. First, a wavelet transform is applied to the image to obtain a small, illumination-insensitive representation; 2DPCA is then used to extract the feature space. In the recognition phase, the Euclidean distance between the input image and the training samples determines the class to which the input image belongs. Experiments on the AT&T face database show that the proposed method achieves a success rate of 94.4%.
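The Haar-filter feature extraction and Bhattacharyya matching described above might be sketched as follows. The cited work does not specify the exact filter bank or distance form, so two assumptions are labeled here: the 16 sub-images come from two full wavelet-packet splitting levels, and the distance treats each sub-image's (mean, std) pair as a 1-D Gaussian summary.

```python
import numpy as np

def haar_split(img):
    """One level of the 2D Haar transform: four half-size sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return [(a + b + c + d) / 4,   # LL: local averages
            (a - b + c - d) / 4,   # LH: horizontal detail
            (a + b - c - d) / 4,   # HL: vertical detail
            (a - b - c + d) / 4]   # HH: diagonal detail

def haar_features(img):
    """Assumed scheme: two full splitting levels -> 16 sub-images;
    the feature vector is the mean and std of each sub-image."""
    bands = [s for band in haar_split(img) for s in haar_split(band)]
    return np.array([v for s in bands for v in (s.mean(), s.std())])

def bhattacharyya(f1, f2, eps=1e-12):
    """Assumed form: sum of per-band Bhattacharyya distances between
    1-D Gaussians with the bands' means and variances."""
    m1, v1 = f1[0::2], f1[1::2]**2 + eps
    m2, v2 = f2[0::2], f2[1::2]**2 + eps
    return float(np.sum(
        0.25 * np.log(0.25 * (v1/v2 + v2/v1 + 2))
        + 0.25 * (m1 - m2)**2 / (v1 + v2)))
```

An input face is then assigned to the gallery identity whose feature vector minimizes this distance; identical images yield a distance of zero.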