SVD: Unlocking the Secrets of Eigenfaces

Welcome back to Techal! Today, we’re going to dive into the fascinating world of eigenfaces and how they can be computed using singular value decomposition (SVD).

Imagine having a vast library of human faces, each represented as a column vector in a large matrix. The goal is to decompose this library into a set of orthogonal eigenfaces that capture the essential features in a lower-dimensional space. That’s where SVD comes in.

First, we load the data matrix X from the Yale Faces Database and compute its SVD. The left singular vectors, the columns of U, have the same length as the flattened face images that form the columns of X, so each one can be reshaped back into an image. These reshaped vectors are the eigenfaces.
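Here is a minimal sketch of that step in Python, assuming the face images have already been flattened into the columns of a matrix X; the file name, image dimensions, and variable names are placeholders, not part of the original database loader.

```python
import numpy as np

# Assumed setup: X is an (n_pixels x n_images) array whose columns are
# flattened grayscale faces, e.g. 192 x 168 = 32,256 pixels each.
n_rows, n_cols = 192, 168            # assumed image dimensions
X = np.load("faces.npy")             # placeholder path for the face library

# Subtract the average face, then take the economy-size SVD.
avg_face = X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X - avg_face, full_matrices=False)

# Each column of U is an eigenface; reshape one to view it as an image.
first_eigenface = U[:, 0].reshape(n_rows, n_cols)
```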

Now, let me show you how we can use these eigenfaces to approximate a new human face. For example, let’s say we have a library of faces from the first 36 individuals, and we want to approximate the face of the 37th person. We take the face of the 37th person, flattened into a column vector x, and project it into the eigenface space.
To do this, we multiply x by the transpose of U_r, the matrix containing the first r eigenfaces. The result is a vector of coefficients, a kind of fingerprint, that tells us the mixture of eigenfaces needed to reconstruct that person’s face.
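In code, the projection and the reconstruction are each a single matrix product. This sketch continues from the one above; x_test is a placeholder for the 37th person’s flattened image.

```python
r = 400                              # number of eigenfaces to keep
U_r = U[:, :r]                       # first r eigenfaces, shape (n_pixels, r)

# Project the new face onto the eigenface basis to get its "fingerprint".
coeffs = U_r.T @ (x_test - avg_face.ravel())   # r coefficients

# Mix the eigenfaces back together to reconstruct the face.
x_approx = avg_face.ravel() + U_r @ coeffs
```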

By increasing the rank, that is, the number of eigenfaces used, we can improve the accuracy of the reconstruction. For example, at rank 25, the approximation looks like a generic average face. As we increase the rank to 200 or 400, the reconstructed face starts to resemble the original person, and by around rank 800 the reconstruction is clearly identifiable as that person’s face.
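A short loop makes the rank comparison concrete; the ranks are the ones mentioned above, and the plotting code is only illustrative.

```python
import matplotlib.pyplot as plt

# Reconstruct the same test face at increasing ranks and plot side by side.
for i, r in enumerate([25, 200, 400, 800]):
    U_r = U[:, :r]
    recon = avg_face.ravel() + U_r @ (U_r.T @ (x_test - avg_face.ravel()))
    plt.subplot(1, 4, i + 1)
    plt.imshow(recon.reshape(n_rows, n_cols), cmap="gray")
    plt.title(f"r = {r}")
    plt.axis("off")
plt.show()
```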


It’s important to note that we didn’t use any images of the 37th person during training; the library was built only from images of the first 36 individuals. This demonstrates that the structure of human faces is inherently low-dimensional: each image contains roughly 32,000 pixels, yet the eigenfaces let us compress that information down to just a few hundred coefficients.
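To put a number on that compression, here is the arithmetic for the assumed 192 x 168 images:

```python
n_pixels = 192 * 168     # 32,256 raw pixel values per image (assumed size)
r = 400                  # coefficients kept in the eigenface basis
print(f"{n_pixels} pixels -> {r} coefficients ({n_pixels / r:.0f}x smaller)")
```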

But the magic doesn’t stop there! What if we try to approximate something other than a human face? Let’s take an image of my dog Mort and see if we can represent him in the eigenface space. As we increase the rank, something remarkable happens. At first, the approximation looks like a weird person, but as we keep increasing the rank, Mort’s image starts to emerge. The representation is imperfect, since Mort’s image lies largely outside the subspace spanned by the face library, but it’s still quite remarkable.
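Projecting Mort works exactly like projecting a face, once the photo is converted to grayscale and resized to the library’s dimensions. The file name below is a placeholder, and the sketch continues from the ones above.

```python
from PIL import Image

# Load the dog photo, convert to grayscale, and match the library's size.
dog = Image.open("mort.jpg").convert("L").resize((n_cols, n_rows))
x_dog = np.asarray(dog, dtype=float).ravel()

# Project onto the first r eigenfaces and reconstruct, just as for a face.
r = 800
U_r = U[:, :r]
dog_approx = avg_face.ravel() + U_r @ (U_r.T @ (x_dog - avg_face.ravel()))
```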

This experiment shows that the orthonormal eigenface basis can approximate images well beyond human faces. We can even reconstruct images of objects like coffee cups or latte art to a certain extent.

The applications of eigenfaces go beyond mere approximation. Because each individual has a distinctive mixture of eigenface coefficients, this compact representation can be used for image classification and recognition, capturing each face with far less information than the raw pixels.
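As one sketch of how that classification might look, here is a nearest-neighbor classifier in eigenface-coefficient space; the labels array and the choice of r are illustrative assumptions, not a prescribed recipe.

```python
r = 100
U_r = U[:, :r]

# Fingerprint of every training face: one column of r coefficients per image.
train_coeffs = U_r.T @ (X - avg_face)          # shape (r, n_images)

def classify(x_new, labels):
    """Label a new face by its nearest training face in eigenface space."""
    c = U_r.T @ (x_new - avg_face.ravel())
    dists = np.linalg.norm(train_coeffs - c[:, None], axis=0)
    return labels[np.argmin(dists)]
```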

In conclusion, eigenfaces, computed using SVD, unveil the secrets hidden in the world of human faces and beyond. They allow us to compress, approximate, and even recognize images efficiently and effectively.


Stay tuned for more exciting tech explorations here at Techal!
