In this post, we will learn how to reconstruct a face using EigenFaces. This post is written for beginners. If you don’t know about Principal Component Analysis (PCA) or EigenFaces, I recommend you go through the following posts in the series.

## What are EigenFaces?

In our previous post, we explained that EigenFaces are images that can be added to a mean (average) face to create new facial images. We can write this mathematically as

$$F = F_m + \sum_{i=1}^{n} w_i F_i$$

where,

$F$ is a new face,

$F_m$ is the mean or the average face,

$F_i$ is an EigenFace, and

$w_i$ are scalar multipliers we can choose to create new faces. These can be positive or negative.

In our previous post, we explained how to calculate the EigenFaces $F_i$, how to interpret them, and how to create new faces by changing the weights $w_i$.

Now suppose we are given a new facial photo as shown in Figure 1. How can we reconstruct the photo using EigenFaces? In other words, how do we find the weights $w_i$ that, when used in the above equation, will produce the facial image as an output? That is exactly the question this post covers, but before we attempt it, we need a little background in linear algebra.
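For concreteness, the weighted-sum equation can be sketched in a few lines of NumPy. The tiny 4-pixel "faces" and the weight values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical tiny example: 4-pixel "faces" and 2 EigenFaces.
mean_face = np.array([0.5, 0.5, 0.5, 0.5])
eigen_faces = np.array([[0.5,  0.5, -0.5, -0.5],
                        [0.5, -0.5,  0.5, -0.5]])
weights = np.array([0.2, -0.1])   # scalar multipliers, can be negative

# New face = mean face + weighted sum of EigenFaces
new_face = mean_face + weights @ eigen_faces
```

Changing the two weights produces different "faces"; reconstruction is the reverse problem of recovering the weights from a given face.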

## Change of coordinates

Consider a 3D coordinate system with axes $X$, $Y$, $Z$, shown in black in Figure 2. You can imagine another set of perpendicular axes $X'$, $Y'$, $Z'$ that is rotated and translated (shifted) by a vector $t$ with respect to the original $X$, $Y$, $Z$ frame. In Figure 2, the axes of this rotated and translated coordinate system are shown in blue. Let us consider a point $p$ (shown using the red dot) whose coordinates in the $XYZ$ frame are $(x, y, z)$.

How do we find the coordinates of the point $p$ in the $X'Y'Z'$ coordinate system? This can be done in two steps:

1. **Translate**: First, we remove the translation component by subtracting the origin $t$ of the new coordinate system from $p$. This gives a new vector $p - t$.
2. **Project**: Next, we project $p - t$ onto $X'$, $Y'$, and $Z'$, which is nothing but the dot product of $p - t$ with the directions $X'$, $Y'$, and $Z'$ respectively. The green line in Figure 2 shows the projection of the point onto one of the new axes.
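The two steps can be sketched in NumPy. The origin, axes, and point below are made-up values, with the new axes chosen as a 90-degree rotation of the original frame:

```python
import numpy as np

# Origin t of the new coordinate system and its (unit, mutually
# perpendicular) axes, all expressed in the original frame.
t = np.array([1.0, 2.0, 3.0])            # translation
axes = np.array([[ 0.0, 1.0, 0.0],       # new x-axis
                 [-1.0, 0.0, 0.0],       # new y-axis
                 [ 0.0, 0.0, 1.0]])      # new z-axis

p = np.array([2.0, 3.0, 3.0])            # point in original coordinates

# Step 1 (Translate): remove the translation component.
# Step 2 (Project): dot product with each new axis.
p_new = axes @ (p - t)
print(p_new)                             # [ 1. -1.  0.]
```

The matrix-vector product performs all three dot products at once, one per row of `axes`.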

Let’s see how this technique applies to reconstructing faces.

## Calculating PCA weights for a new facial image

As we had seen in the previous post, to calculate the principal components of facial data, we convert the facial images into long vectors. For example, if we have a collection of aligned facial images of size 100 x 100 x 3, each image can be thought of as a vector of length 100 x 100 x 3 = 30,000. Just like a tuple of three numbers represents a point in 3D, we can say that a vector of length 30,000 is a point in a 30,000-dimensional space. The axes of this high-dimensional space are perpendicular to each other, just like the axes $X$, $Y$, and $Z$ of 3D space are perpendicular to each other. And just like in 3D, the principal components (EigenVectors) form a new coordinate system in this high-dimensional space, with the new origin being the mean vector.

Given a new image, here is how we can find the weights:

1. **Vectorize image**: We first create a long vector from the image data. This is simply a rearrangement of the data and requires just a line or two of code.
2. **Subtract mean vector**: Subtract the mean face vector from the image vector.
3. **Project onto Principal Components**: Calculate the dot product of the mean-subtracted vector with each of the principal components. Each dot product is one weight.
4. **Assemble face vector**: Once the weights have been calculated, multiply each weight with its principal component (or EigenFace) and sum them all together. Finally, add the average face vector to this sum.
5. **Reshape vector into facial image**: The previous step yields a vector that is 30k long and can be reshaped into a 100 x 100 x 3 image. This is the final image.
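The five steps above can be sketched in NumPy with random stand-in data (sizes shrunk from 100 x 100 x 3 to 10 x 10 x 3, and an orthonormal basis fabricated with QR in place of real EigenFaces):

```python
import numpy as np

h, w, n_faces = 10, 10, 5
rng = np.random.default_rng(0)

mean_vec = rng.random(h * w * 3)                   # stand-in mean face vector
# Orthonormal rows standing in for EigenVectors (real PCA components
# are also orthonormal).
eigen_vecs = np.linalg.qr(rng.standard_normal((h * w * 3, n_faces)))[0].T

img = rng.random((h, w, 3))                        # a "new" image

im_vec = img.flatten()                             # 1. vectorize image
diff = im_vec - mean_vec                           # 2. subtract mean vector
weights = eigen_vecs @ diff                        # 3. project (dot products)
recon_vec = mean_vec + weights @ eigen_vecs        # 4. assemble face vector
recon_img = recon_vec.reshape(h, w, 3)             # 5. reshape into an image
```

With only 5 components the reconstruction is coarse; with enough components (and real, aligned face data) it approaches the input image.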

## PCA for dimensionality reduction

In our example, a 100 x 100 x 3 image has 30k dimensions. After doing PCA on 2000 images, we obtain a space that is only 2000-dimensional and yet is able to reconstruct a new face to a reasonable level of accuracy. What used to take 30k numbers to represent is now represented using only 2k numbers (the weights). In other words, we just used PCA to reduce the dimension of the space of faces.
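A toy NumPy illustration of this idea, scaled down to 20 samples in a 100-dimensional space (all data random, for illustration only):

```python
import numpy as np

# 20 "images" with 100 "pixels" each: at most min(20, 100) = 20
# principal components exist, so 20 weights fully describe each sample.
rng = np.random.default_rng(1)
data = rng.standard_normal((20, 100))

mean = data.mean(axis=0)
centered = data - mean
# SVD of the centered data gives the principal directions as rows of Vt.
_, s, components = np.linalg.svd(centered, full_matrices=False)

weights = centered @ components.T        # 20 weights per sample
recon = mean + weights @ components      # reconstruction from weights
print(np.allclose(recon, data))          # True
```

Here the reconstruction is exact because we kept all 20 components; keeping fewer trades accuracy for an even smaller representation.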

## Code for Face Reconstruction using EigenFaces (C++/Python)

Assuming you have downloaded the code, we will go over important parts of the code. First, the code for calculating the mean face and the EigenFaces is shared in files **createPCAModel.cpp** and **createPCAModel.py**. The method was explained in our previous post and so we will skip that explanation. Instead, we will go over **reconstructFace.cpp** and **reconstructFace.py**.


**C++**

```
// Reconstruct face using mean face and EigenFaces
void reconstructFace(int sliderVal, void*)
{
  // Start with the mean / average face
  Mat output = averageFace.clone();
  for (int i = 0; i < sliderVal; i++)
  {
    // The weight is the dot product of the mean subtracted
    // image vector with the EigenVector
    double weight = imVector.dot(eigenVectors.row(i));
    // Add weighted EigenFace to the output
    output = output + eigenFaces[i] * weight;
  }
  displayResult(im, output);
}

int main(int argc, char **argv)
{
  string modelFile("pcaParams.yml");
  cout << "Reading model file " << modelFile << " ... ";
  FileStorage file(modelFile, FileStorage::READ);

  // Extract mean vector
  meanVector = file["mean"].mat();
  // Extract EigenVectors
  eigenVectors = file["eigenVectors"].mat();
  // Extract size of the images used in training.
  Mat szMat = file["size"].mat();
  Size sz = Size(szMat.at<double>(1,0), szMat.at<double>(0,0));
  // Extract maximum number of EigenVectors.
  // This is min(numImagesUsedInTraining, w * h * 3)
  // where w = width, h = height of the training images.
  int numEigenFaces = eigenVectors.size().height;
  cout << "DONE" << endl;

  cout << "Extracting mean face and eigen faces ... ";
  // Reshape the mean vector to obtain the average face
  averageFace = meanVector.reshape(3, sz.height);
  // Reshape EigenVectors to obtain EigenFaces
  for (int i = 0; i < numEigenFaces; i++)
  {
    Mat row = eigenVectors.row(i);
    Mat eigenFace = row.reshape(3, sz.height);
    eigenFaces.push_back(eigenFace);
  }
  cout << "DONE" << endl;

  // Read new test image. This image was not used in training.
  string imageFilename("test/satya1.jpg");
  cout << "Read image " << imageFilename << " and vectorize ... ";
  im = imread(imageFilename);
  im.convertTo(im, CV_32FC3, 1/255.0);
  // Reshape image to one long vector and subtract the mean vector
  imVector = im.clone();
  imVector = imVector.reshape(1, 1) - meanVector;
  cout << "DONE" << endl;

  // Show mean face first
  output = averageFace.clone();

  cout << "Usage:" << endl
       << "\tChange the slider to change the number of EigenFaces" << endl
       << "\tHit ESC to terminate program." << endl;

  namedWindow("Result", CV_WINDOW_AUTOSIZE);
  int sliderValue;
  // Changing the slider value changes the number of EigenVectors
  // used in reconstructFace.
  createTrackbar("No. of EigenFaces", "Result", &sliderValue, numEigenFaces, reconstructFace);
  // Display original image and the reconstructed image side by side
  displayResult(im, output);

  waitKey(0);
  destroyAllWindows();
}
```

**Python**

```
import cv2
import numpy as np

# Reconstruct face using mean face and EigenFaces
def reconstructFace(*args):
    # Start with the mean / average face
    output = averageFace

    for i in range(0, args[0]):
        # The weight is the dot product of the mean subtracted
        # image vector with the EigenVector
        weight = np.dot(imVector, eigenVectors[i])
        output = output + eigenFaces[i] * weight

    displayResult(im, output)

if __name__ == '__main__':

    # Read model file
    modelFile = "pcaParams.yml"
    print("Reading model file " + modelFile, end=" ... ", flush=True)
    file = cv2.FileStorage(modelFile, cv2.FILE_STORAGE_READ)

    # Extract mean vector
    mean = file.getNode("mean").mat()
    # Extract EigenVectors
    eigenVectors = file.getNode("eigenVectors").mat()
    # Extract size of the images used in training.
    sz = file.getNode("size").mat()
    sz = (int(sz[0,0]), int(sz[1,0]), int(sz[2,0]))
    # Extract maximum number of EigenVectors.
    # This is min(numImagesUsedInTraining, w * h * 3)
    # where w = width, h = height of the training images.
    numEigenFaces = eigenVectors.shape[0]
    print("DONE")

    # Reshape the mean vector to obtain the average face
    averageFace = mean.reshape(sz)
    # Reshape EigenVectors to obtain EigenFaces
    eigenFaces = []
    for eigenVector in eigenVectors:
        eigenFace = eigenVector.reshape(sz)
        eigenFaces.append(eigenFace)

    # Read new test image. This image was not used in training.
    imageFilename = "test/satya2.jpg"
    print("Read image " + imageFilename + " and vectorize", end=" ... ")
    im = cv2.imread(imageFilename)
    im = np.float32(im)/255.0
    # Reshape image to one long vector and subtract the mean vector
    imVector = im.flatten() - mean
    print("DONE")

    # Show mean face first
    output = averageFace

    # Create window for displaying result
    cv2.namedWindow("Result", cv2.WINDOW_AUTOSIZE)
    # Changing the slider value changes the number of EigenVectors
    # used in reconstructFace.
    cv2.createTrackbar("No. of EigenFaces", "Result", 0, numEigenFaces, reconstructFace)
    # Display original image and the reconstructed image side by side
    displayResult(im, output)

    cv2.waitKey(0)
    cv2.destroyAllWindows()
```

You can create the model **pcaParams.yml** using **createPCAModel.cpp** or **createPCAModel.py**. The code uses the first 1000 images of the CelebA dataset and first scales them to half their original size, so the PCA model was trained on images of size 89 x 109. In addition to the 1000 images, the code also uses a vertically flipped version of each original image, and therefore we use 2000 images for training.

Back to the code shared above.

We first read the model file ( **lines 24-42** in C++ and **lines 23-43** in Python). It contains the mean vector of size 1 x 29,103, and a matrix of EigenVectors of size 2000 x 29,103. The model also includes the size of the images used in training.

Next, we reshape the mean vector to obtain the average face on **line 46** of the C++ and Python code. We also reshape the EigenVectors to obtain the EigenFaces in **lines 48-54** of both versions of the code.

Next, we read a new image that was not used in training. Note, the image is also of size 89×109 and the eyes were aligned with the images in the training set. This image is then vectorized (flattened) and the mean vector is subtracted from it. These operations are performed in **lines 57-66** in C++ and **lines 55-63** in Python.

The reconstruction is done in the function **reconstructFace** starting at **line 2** in both versions of the code. A slider is provided which controls the number of EigenVectors to use. Since the model was trained on 2000 images, we can have a maximum of 2000 EigenVectors.

We start with the average face. The weights are calculated by the dot product of the mean subtracted image vector and the EigenVectors. Finally, the weighted EigenFaces are added to the average face.

Behdad Payami says

Hi Satya,

thanks for this article. I downloaded the code (c++ and python), and while testing it, I ran into an issue:

I deposited 1,389 images into “images” folder. I created “pcaParams.yml” file, as instructed.

Finally, I ran “reconstructFace.py”, which generated an error. Following is the output generated by “reconstructFace.py”, including the error message:

———————————————-

Reading model file pcaParams.yml … DONE

Read image test/satya2.jpg and vectorize … Traceback (most recent call last):

File “reconstructFace.py”, line 78, in

imVector = im.flatten() - mean;

ValueError: operands could not be broadcast together with shapes (29103,) (1,116412)

———————————————-

Any ideas what could be causing this?

Thanks,

Behdad

Satya Mallick says

Resize the test image to 2x right after reading.

Something like

im = cv2.resize(im, (0,0), fx=2, fy=2)

Masque du Furet says

Well, if one wants to port this to an RPi (or a nanoPi: same RAM size, roughly 30% slower CPU):

an eigenvector (size: 29103 floats, i.e. 116412 bytes), when stored in ASCII form (json, yml, xml), eats at least 29103 * 16 bytes, i.e. more than 300 KB.

Such text files then have a huge size; storing and loading need conversion to and from ASCII, leading to rather uncomfortable programs.

A solution I found (thanks Satya and StackOverflow) was the following (sorry for my clumsy C++): create a header file, let us call it matwritelib.h, containing:

```
#ifndef WATWRITELIB_H
#define WATWRITELIB_H
// https://stackoverflow.com/questions/32332920/efficiently-load-a-large-mat-into-memory-in-opencv/32357875
#include <string>
#include <fstream>
#include <vector>
#include <opencv2/opencv.hpp>

void matwrite(const std::string& filename, const cv::Mat& mat);
cv::Mat matread(const std::string& filename);

// adapted from https://stackoverflow.com/questions/41201641/write-a-vector-of-cvmat-to-binary-file-in-c
void vecmatwrite(const std::string& filename,
                 const std::vector<cv::Mat>& matrices);
std::vector<cv::Mat> vecmatread(const std::string& filename);

#endif
```


and the corresponding cpp file, let us call it matwritelib.cpp:

```
// https://stackoverflow.com/questions/32332920/efficiently-load-a-large-mat-into-memory-in-opencv/32357875
#include "matwritelib.h"

void vecmatwrite(const std::string& filename,
                 const std::vector<cv::Mat>& matrices)
{
    std::ofstream fs(filename.c_str(), std::ios::binary);
    for (size_t i = 0; i < matrices.size(); ++i)
    {
        const cv::Mat& mat = matrices[i];
        // Header
        int type = mat.type();
        int channels = mat.channels();
        fs.write((char*)&mat.rows, sizeof(int));    // rows
        fs.write((char*)&mat.cols, sizeof(int));    // cols
        fs.write((char*)&type, sizeof(int));        // type
        fs.write((char*)&channels, sizeof(int));    // channels
        // Data
        if (mat.isContinuous())
        {
            fs.write((char*)mat.ptr(0), (mat.dataend - mat.datastart));
        }
        else
        {
            int rowsz = CV_ELEM_SIZE(type) * mat.cols;
            for (int r = 0; r < mat.rows; ++r)
                fs.write((char*)mat.ptr(r), rowsz);
        }
    }
}

std::vector<cv::Mat> vecmatread(const std::string& filename)
{
    std::vector<cv::Mat> matrices;
    std::ifstream fs(filename.c_str(), std::ios::binary);
    // Get length of file
    fs.seekg(0, fs.end);
    int length = fs.tellg();
    fs.seekg(0, fs.beg);
    while (fs.tellg() < length)
    {
        // Header
        int rows, cols, type, channels;
        fs.read((char*)&rows, sizeof(int));         // rows
        fs.read((char*)&cols, sizeof(int));         // cols
        fs.read((char*)&type, sizeof(int));         // type
        fs.read((char*)&channels, sizeof(int));     // channels
        // Data
        cv::Mat mat(rows, cols, type);
        fs.read((char*)mat.data, CV_ELEM_SIZE(type) * rows * cols);
        matrices.push_back(mat);
    }
    return matrices;
}
```

```
void matwrite(const std::string& filename, const cv::Mat& mat)
{
    std::ofstream fs(filename.c_str(), std::ios::binary);
    // Header
    int type = mat.type();
    int channels = mat.channels();
    fs.write((char*)&mat.rows, sizeof(int));        // rows
    fs.write((char*)&mat.cols, sizeof(int));        // cols
    fs.write((char*)&type, sizeof(int));            // type
    fs.write((char*)&channels, sizeof(int));        // channels
    // Data
    if (mat.isContinuous())
    {
        fs.write((char*)mat.ptr(0), (mat.dataend - mat.datastart));
    }
    else
    {
        int rowsz = CV_ELEM_SIZE(type) * mat.cols;
        for (int r = 0; r < mat.rows; ++r)
            fs.write((char*)mat.ptr(r), rowsz);
    }
}
```


```
cv::Mat matread(const std::string& filename)
{
    std::ifstream fs(filename.c_str(), std::ios::binary);
    // Header
    int rows, cols, type, channels;
    fs.read((char*)&rows, sizeof(int));             // rows
    fs.read((char*)&cols, sizeof(int));             // cols
    fs.read((char*)&type, sizeof(int));             // type
    fs.read((char*)&channels, sizeof(int));         // channels
    std::cout << "rows " << rows << " cols " << cols << " nc " << channels << std::endl;
    // Data
    cv::Mat mat(rows, cols, type);
    fs.read((char*)mat.data, CV_ELEM_SIZE(type) * rows * cols);
    return mat;
}
```

Then, creating PCAs is very simple: one just has to include matwritelib.h, and the storing part looks like this:

```
cv::FileStorage file = cv::FileStorage(filename, cv::FileStorage::WRITE);
// file << "mean" << meanVector;
matwrite("mean.bin", meanVector);
// file << "eigenVectors" << eigenVectors;
matwrite("eigens.bin", eigenVectors);
file << "size" << szMat;
```

One extra line; maybe a better way of naming the files can be found. The yml file gets very tiny, and the generated binaries are much smaller than their ASCII counterparts.
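On the Python side, the same binary-storage idea is available out of the box: `np.save`/`np.load` store arrays in the compact binary `.npy` format instead of ASCII yml. A minimal sketch (the array size matches the post's mean vector; the file name is made up):

```python
import numpy as np

# A 1 x 29103 float32 vector is exactly 116412 bytes of raw data;
# np.save writes that plus a small header, far smaller than ASCII yml.
mean_vector = np.random.rand(1, 29103).astype(np.float32)

np.save("mean.npy", mean_vector)     # compact binary .npy file
loaded = np.load("mean.npy")
```

The round trip is lossless, so the loaded array is bit-identical to the saved one.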

陆安良 says

/usr/local/bin/python3 /Users/luan/Desktop/learnOpencv/learnopencv-master/ReconstructFaceUsingEigenFaces/reconstructFace.py

Reading model file pcaParams.yml … ——–>

Traceback (most recent call last):

File “/Users/luan/Desktop/learnOpencv/learnopencv-master/ReconstructFaceUsingEigenFaces/reconstructFace.py”, line 56, in

sz = (int(sz[0, 0]), int(sz[1, 0]), int(sz[2, 0]))

TypeError: ‘NoneType’ object is not subscriptable

陆安良 says

I don't know what went wrong.
