23 Mar 2015

Abstract- The eigenface method is one of the most basic and efficient methods for face recognition. The choice of threshold value is a significant factor in the identification performance of the eigenface approach, and the dimensionality reduction of the face space depends on the number of eigenfaces retained. In this research paper, an improved face recognition solution is given by optimizing both the threshold value and the number of eigenfaces. Experimental results using MATLAB are presented to verify the viability of the proposed face recognition method. Only 15% of the eigenfaces, those with the largest eigenvalues, are adequate for recognizing a person. The best optimized solution is obtained when both features are combined: when 15% of the eigenfaces with the largest eigenvalues are chosen and the threshold is set to 0.8 times the maximum over all images of the minimum Euclidean distance from each image to all other images, recognition performance of the human face improves to 97%. The results also show that if the minimum Euclidean distance of the test image from the other images is zero, the test image exactly matches an existing image in the database. If the minimum Euclidean distance is non-zero but less than the threshold value, it is a known face with a different facial expression; otherwise it is an unknown face.

Index Terms- Face Recognition, Eigenvalues, Eigenimages, Eigenfaces, Principal Component Analysis (PCA), Olivetti Research Laboratory (ORL).

Face recognition can be applied to a wide range of problems such as film and image processing, criminal identification, and human-computer interaction. This has motivated researchers to develop computational models for recognizing faces that are simple and easy to implement. The model established in [1] is simple, fast, and accurate in constrained environments. Our aim is to implement the model for a particular face and differentiate it from a large number of stored faces, with a number of real-time variations as well.

The scheme is based on an information-theoretic method that decomposes face images into a small set of characteristic feature images called 'eigenfaces', which are in fact the principal components of the initial training set of face images. The eigenface method is one of the most efficient and simplest approaches to building a face recognition system. Recognition is performed by projecting a new image into the subspace spanned by the eigenfaces (the 'face space') and then classifying the face by comparing its position in the face space with the positions of known individuals [2]. In the eigenface method, after the dimensionality reduction of the face space, the distance between pairs of images is measured for recognition. If the distance is less than a certain threshold value, the image is considered a known face; otherwise it is an unknown face [5].

Recognition under commonly varying conditions such as a frontal view, a 45° view, a scaled frontal view, and subjects with spectacles is attempted, even though the training data set covers only a limited range of views. In addition, this algorithm can be extended to recognize the gender of a person or to classify the facial expression of a person. The algorithm can also model changing real-time lighting conditions, but this is beyond the scope of the current implementation.

The information-theoretic approach to encoding and decoding face images extracts the relevant information in a face image, encodes it as efficiently as possible, and compares it with a database of similarly encoded faces. Encoding is done using features that are possibly different from, or independent of, the distinctly apparent features such as hair, eyes, nose, ears, and lips.

Mathematically, the principal component analysis approach treats every image of the training set as a vector in a very high-dimensional space. The eigenvectors of the covariance matrix of these vectors capture the variation among the face images, and each image in the training set contributes to these eigenvectors (variations). Such an eigenvector can be displayed as an 'eigenface' signifying its contribution to the differences between the images. These eigenfaces look like ghostly images, and some of them are shown in figure 2. In each eigenface some type of facial variation can be seen, deviating from the original image.

The high-dimensional space spanned by all the eigenfaces is called the image space (feature space). Each image is in fact a linear combination of the eigenfaces. The amount of overall variation that a single eigenface accounts for is given by the eigenvalue associated with the corresponding eigenvector. If the eigenfaces with small eigenvalues are ignored, an image can still be approximated as a linear combination of the reduced number of eigenfaces. For instance, if there are M images in the training set, we obtain M eigenfaces. Out of these, only the M' eigenfaces associated with the largest eigenvalues are chosen. These span the M'-dimensional subspace, the 'face space', within the space of all possible images (the image space).

When the face image to be recognized (known or unknown) is projected onto this face space (figure 1), we obtain the weights associated with the eigenfaces; these weights linearly approximate the face and can be used to reconstruct it. The weights are then compared with the weights of the known face images so that the image can be recognized as a face used in the training set. In simpler terms, the Euclidean distance between the image's projection and the known projections is calculated, and the face image is classified as the face with the minimum Euclidean distance.

Recognizing similar faces is thus equivalent to identifying the closest point to the query in the newly defined face space [4]. If a person is represented in the database more than once, the problem is to decide to which group of images the query is most similar. Finally, if the input image is not a face at all, its projection into the face space will give inconsistent results, so this case can be recognized as well.


Figure 1: (a) The face space and three images projected onto it; here u1 and u2 are the eigenfaces. (b) A projected face from the training database

The overall algorithm for facial recognition using eigenfaces is illustrated in figure 2. First, the original images of the training set are transformed into a set of eigenfaces E. Then the weights are calculated for each image of the training set and stored in the set W.

Upon observing an unknown image X, the weights are calculated for that image and stored in the vector WX. Then WX is compared with the weights of images known for certain to be faces (the weights W of the training set). One way to do this is to regard each weight vector as a point in space and calculate an average distance D between the weight vectors in W and the weight vector WX of the unknown image. If this average distance exceeds some threshold value, then the weight vector WX lies too "far apart" from the weights of the faces, and the unknown image X is considered not a face. Otherwise (if X is in fact a face), its weight vector WX is stored for later classification. The optimal threshold value has to be determined empirically.
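The training phase described above can be sketched as follows. This is an illustrative Python/NumPy version rather than the MATLAB code used in this paper's experiments; the function and variable names (`train_eigenfaces`, `faces`, `n_components`) are our own.

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """Compute the mean face, the eigenfaces E, and the training weights W.

    faces: (M, N) array, one flattened face image per row.
    Returns (psi, E, W) with E of shape (n_components, N), one eigenface
    per row, and W of shape (M, n_components), one weight vector per face.
    """
    psi = faces.mean(axis=0)                      # mean face
    Phi = faces - psi                             # mean-subtracted faces
    L = Phi @ Phi.T                               # small M x M matrix instead of N x N
    vals, V = np.linalg.eigh(L)                   # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]  # keep the largest eigenvalues
    U = Phi.T @ V[:, order]                       # map back to image space
    U /= np.linalg.norm(U, axis=0)                # unit-length eigenfaces as columns
    W = Phi @ U                                   # training weights, one row per face
    return psi, U.T, W
```

An unknown image `x` is then projected the same way, as `E @ (x - psi)`, and the resulting weight vector is compared against the rows of W.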

Figure 2: High-level functioning principle of the eigenface-based facial recognition algorithm

In this section, the original scheme for determining the eigenfaces using principal component analysis (PCA) is presented. The algorithm described in the scope of this paper is a variation of the one outlined here.

Step I: Prepare the data

The faces constituting the training set (Γi) should be prepared for processing.

Step II: Mean subtraction

The average face (Ψ) has to be calculated and then subtracted from each original face (Γi), with the result stored in the variable Φi:

Ψ = (1/M) Σn=1..M Γn

Φi = Γi − Ψ    (1)
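As a toy numerical check of equation (1), with three made-up 4-pixel "faces" standing in for real images:

```python
import numpy as np

faces = np.array([[1., 3., 5., 7.],
                  [2., 4., 6., 8.],
                  [3., 5., 7., 9.]])   # M = 3 flattened images, one per row
psi = faces.mean(axis=0)               # average face Psi = (1/M) sum of Gamma_n
Phi = faces - psi                      # mean-subtracted faces Phi_i
# By construction, the Phi_i always sum to the zero vector.
```

Here `psi` comes out as [2, 4, 6, 8], and each row of `Phi` is the deviation of one face from that average.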

Step III: Calculation of the covariance matrix

In this step, the covariance matrix (C) is calculated according to

C = (1/M) Σn=1..M Φn ΦnT = (1/M) A AT    (2), where A = [Φ1 Φ2 … ΦM]

Now the eigenvectors ui and the corresponding eigenvalues λi of the matrix C should be calculated.

Step IV: Calculation of the eigenvectors and eigenvalues of the covariance matrix

The covariance matrix C from step III (equation 2) has dimensionality N² × N², so one would have N² eigenfaces and eigenvalues. For a 256 × 256 image, this means one must work with a 65,536 × 65,536 matrix and compute 65,536 eigenfaces. Computationally this is not very efficient, as most of those eigenfaces are not useful for our task. In general, PCA is used to describe a large-dimensional space with a relatively small set of vectors [4]. Since we have only M images, there are only M non-trivial eigenvectors. We can solve for these eigenvectors by taking the eigenvectors of the smaller M × M matrix:

L = ATA (3)

because of the following mathematical trick:

ATA vi = μi vi

A ATA vi = μi A vi    (4)

where vi is an eigenvector of L. From this simple derivation we can see that A vi is an eigenvector of C. The M eigenvectors of L are then used to form the M eigenvectors ul of C that form our eigenface basis:

ul = Σk=1..M vlk Φk

where the ul are the eigenfaces. In practice, we use only a subset of the M eigenfaces: the M' eigenfaces with the largest eigenvalues. Eigenfaces with small eigenvalues can be omitted, as they explain only a small part of the characteristic features of the faces.
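Equations (3) and (4) are easy to verify numerically. In this sketch, random data stands in for the matrix A of mean-subtracted face columns; the check confirms that each A vi is indeed an eigenvector of A AT with the same eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 8))   # N = 100 "pixels", M = 8 faces as columns
L = A.T @ A                     # small M x M matrix, equation (3)
mu, V = np.linalg.eigh(L)       # eigenpairs of L: L v_i = mu_i v_i
C = A @ A.T                     # large N x N matrix A A^T
for i in range(8):
    u = A @ V[:, i]             # equation (4): A v_i is an eigenvector of C
    assert np.allclose(C @ u, mu[i] * u, atol=1e-9)
```

The non-zero eigenvalues of the large matrix C are exactly the eigenvalues of the small matrix L, which is why working with L loses nothing.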

Step V: Recognizing the faces

The process of matching a new (unknown) face Γnew to one of the known faces proceeds in two steps. First, the new image is transformed into its eigenface components. The resulting weights form the weight vector ΩT:

wk = ukT (Γnew − Ψ)    (5)

where k = 1, 2, …, M'. The weights obtained above form the vector ΩT = [w1, w2, w3, …, wM'] that describes the contribution of each individual eigenface in representing the input face image. This vector can then be used in a standard pattern recognition algorithm to determine which of several predefined face classes, if any, best describes the face. A face class can be computed by averaging the weight vectors of the images of one individual. The face classes to be created depend on the desired categorization; for example, a face class can be created from all images in which the subject wears spectacles, and with this face class it can be determined whether or not a subject wears spectacles. The Euclidean distance of the new image's weight vector from each face class's weight vector can be computed as follows:

εk = ||Ω − Ωk||    (6)

where Ωk is the vector describing the k-th face class. The Euclidean distance formula can be found in [2]. The face is classified as belonging to class k when the distance εk is lower than some threshold value θε; otherwise the face is classified as unknown. It can also be determined whether or not the image is a face image at all by simply finding the squared distance between the mean-adjusted input image and its projection onto the face space:

ε2 = ||Φ − Φf||2    (7)

where Φf is the projection of Φ onto the face space and Φ = Γi − Ψ is the mean-adjusted input.

With this, we can categorize an image as a known face image, an unknown face image, or not a face image.
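The classification rules of equations (5), (6), and (7) can be sketched as follows. This is an illustrative Python/NumPy version: `classify` and `is_face` are our own names, and the thresholds passed in are hypothetical values, not the ones determined experimentally later in the paper.

```python
import numpy as np

def classify(gamma, psi, U, class_weights, theta_eps):
    """Match a flattened image gamma to the nearest face class.

    U: (N, M') matrix whose columns are the eigenfaces.
    class_weights: dict mapping class name -> averaged weight vector Omega_k.
    Returns (label, distance); label is 'unknown' above the threshold.
    """
    omega = U.T @ (gamma - psi)                  # equation (5)
    dists = {k: np.linalg.norm(omega - wk)       # equation (6)
             for k, wk in class_weights.items()}
    best = min(dists, key=dists.get)
    eps = dists[best]
    return (best if eps <= theta_eps else 'unknown'), eps

def is_face(gamma, psi, U, theta_face):
    """Equation (7): compare the mean-adjusted image with its
    projection onto the face space; faces have a small residual."""
    phi = gamma - psi
    phi_f = U @ (U.T @ phi)                      # projection onto face space
    return np.linalg.norm(phi - phi_f) <= theta_face
```

An image is first screened with `is_face`; only if it passes is `classify` used to assign it to a known face class or declare it unknown.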

Suppose for simplicity that we have only ten images in the training set and an image that is not in the training set arrives for the recognition task. A score will be computed between the incoming image and each of the ten images. Even if the image is not in the database, the system will still report it as matching the training image with which its score is lowest. Obviously this is a conflict we need to address, and it is for this purpose that the threshold is introduced. The threshold is determined heuristically.

In general, the threshold value is chosen arbitrarily; there is no formula for calculating it. Its value is either chosen arbitrarily or obtained as some factor of the maximum over all images of the minimum Euclidean distance from each image to the other images. In this paper we investigate what the value of the threshold should be.
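The factor-of-the-maximum-of-minimum-distances heuristic can be written down directly. This sketch (our own `heuristic_threshold`; the factor 0.8 used in this paper is the default) operates on the training weight vectors:

```python
import numpy as np

def heuristic_threshold(W, factor=0.8):
    """Threshold = factor * (max over images of the minimum Euclidean
    distance from each image's weight vector to all the others).

    W: (M, M') array of training weight vectors, one row per image.
    """
    # Pairwise Euclidean distances between all weight vectors.
    d = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)      # ignore each image's zero self-distance
    return factor * d.min(axis=1).max()
```

For example, with weight vectors at [0, 0], [1, 0], and [5, 0], the per-image minimum distances are 1, 1, and 4, so the threshold is 0.8 × 4 = 3.2.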

To assess the effect of changing the threshold value on the performance of human face recognition, we performed several experiments on the ORL database using MATLAB. The ORL database has images of 40 people, with 10 images per person, as shown in figure 3, so there are 400 images in total in our database. For testing, 100 images were taken into the test database. In the test database, some faces are from the training database but with different facial expressions, some are unknown faces that do not exist in the training database, and some images are non-faces.

In the PCA method, the eigenvectors with significant eigenvalues are the useful ones. Figure 4 shows a plot of the eigenvalues of all 400 images. From this figure it can be seen that only about 40 images have significant eigenvalues; the remaining eigenvalues are approximately zero. So there is no need to consider eigenvectors with zero or very low eigenvalues in the eigenface approach.
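This sharp drop-off in eigenvalues can be reproduced on synthetic data. The ORL images themselves are not reproduced here; the "faces" below are hypothetical vectors constructed to vary along only 40 underlying directions, mimicking the observed behaviour:

```python
import numpy as np

rng = np.random.default_rng(2)
basis = rng.normal(size=(40, 256))           # 40 underlying directions of variation
faces = rng.normal(size=(400, 40)) @ basis   # 400 synthetic "images"
Phi = faces - faces.mean(axis=0)             # mean-subtracted, as in equation (1)
eigvals = np.linalg.eigvalsh(Phi @ Phi.T)[::-1]   # eigenvalues of L, descending
significant = int((eigvals > 1e-6 * eigvals[0]).sum())
print(significant)   # 40 for this synthetic rank-40 data
```

Only the 40 true directions of variation carry non-zero eigenvalues; the remaining 360 are numerically zero and can be discarded, exactly as the ORL eigenvalue plot suggests.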

Figure 3: Examples of face images provided in the ORL database.

Figure 4: Plot of the eigenvalues of all 400 images in the database

Figure 5 plots the eigenvalues of only the first 100 images, which points out the significant eigenvalues more clearly; it is evident that only about 40 of the eigenvalues are non-zero.

Figure 5: Plot of the eigenvalues of the first 100 images

So in PCA, only the 40 eigenvectors with non-zero eigenvalues are sufficient as eigenfaces. For the recognition of any face from this database, it is not necessary to use more than 40 eigenfaces.

Figure 6 shows that using 40 eigenfaces provides the same performance as using 100 eigenfaces, while 100 eigenfaces increase the complexity and also the processing time.

Figure 6: Recognition performance versus the number of eigenfaces used

Therefore, only 15% of the eigenfaces, those with significant eigenvalues, are enough for the recognition of a person, as shown in figure 6. Next, the Euclidean distance of the test image from every image in the database is computed for face recognition. The test image matches the image having the minimum Euclidean distance from it. In figure 7, the Euclidean distance of the test image from all 400 images is shown.

Figure 7: Euclidean distance of the test image from all 400 images in the database

The Euclidean distance of the test image is zero with image number 52 in the database, as is evident from figure 7. This means that the test image completely matches image number 52 in our database, as shown in figure 8.

Figure 8: The test image and its exact match, image number 52 in the database

Another test was done for an image that was present in the database but with a different facial expression. Test image number 3 has a minimum Euclidean distance of 2.2186e+003, with image number 39 from the database, as shown in figure 9. This distance is less than the threshold value, so it is a known face.

Figure 9: Euclidean distance of test image number 3 from the images in the database

The test image matches image number 39 in the database, which has a different facial expression, as is evident from figure 10.

Figure 10: The test image matched with image number 39, which has a different facial expression

The minimum Euclidean distance of another test image was found to be 4104.7, from image number 4 in the database (figure 11), but this value is larger than the chosen threshold value. Hence it is an unknown face (figure 12).

Figure 11: Euclidean distance of the unknown test image from the images in the database

Figure 12: The test image classified as an unknown face

From these observations, it is clear that only 15% of the eigenfaces, those with the largest eigenvalues, are enough for the recognition of a person. It is also clear that if the minimum Euclidean distance of the test image from the other images is zero, the test image completely matches an existing image in the database. If the minimum Euclidean distance is non-zero but less than the threshold value, it is a known face with a different facial expression; otherwise it is an unknown face.

Face recognition has become an essential issue in many applications such as credit card verification, security systems, and criminal identification. For instance, the ability to model a specific face and differentiate it from a large set of stored face models would make it possible to vastly improve criminal identification. Even the capability to merely detect faces, as opposed to recognizing them, can be important.

Acknowledgment

I would like to thank Mr. Denial Ng (Seagate Technology) for his precious counsel at the beginning of this research paper, and Mr. Alvin Tan for his indispensable help and valued support, and am grateful to the rest of the teachers at the University of South Australia. Last but not least, special thanks to Dr Mark Ho and Prof Andrew Nafalski for their guidance, support, and useful discussions.
