Linear Discriminant Analysis: The Original Formulation

Welcome to another episode of Pattern Recognition! Today, we will delve into the original formulation of Linear Discriminant Analysis (LDA), also known as the Fisher Transform. LDA is a technique that projects data onto a direction that maximizes the separation between classes while minimizing the spread within each class. In this article, we will explore the steps involved in computing LDA and its applications in pattern recognition.

The Original Idea

To start, let’s discuss the original idea behind LDA. The goal is to find a direction, denoted as “r,” onto which we can project the feature vectors. This projection should maximize the between-class scatter (the separation between the class means) while minimizing the within-class scatter (the spread of the samples within each class). Once this optimal direction is determined, we can assign a class to each data point by applying a threshold to its projected value.
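
Formally, this trade-off is captured by the Rayleigh coefficient (Fisher criterion), written here in standard notation with the scatter matrices Sw and Sb defined in the steps below:

```latex
J(r) = \frac{r^{\top} S_B \, r}{r^{\top} S_W \, r}
```

Maximizing J(r) over all directions r yields the optimal projection direction r*.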

Computing LDA

To compute LDA, we follow a series of steps:

Step 1: Compute the Mean and Scatter Matrix for Each Class

First, we compute the mean vector and the scatter matrix (an unnormalized covariance matrix) for each class.

Step 2: Compute the Within-Class Scatter Matrix

Next, we compute the within-class scatter matrix by summing the scatter matrices for each individual class.
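
A minimal NumPy sketch of Steps 1 and 2 might look as follows; the toy samples and variable names are purely illustrative and not part of the original lecture:

```python
import numpy as np

# Toy two-class data: each row is a feature vector (illustrative values).
X1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])  # samples of class 1
X2 = np.array([[6.0, 5.0], [7.0, 8.0], [8.0, 7.0]])  # samples of class 2

# Step 1: mean vector and scatter matrix for each class.
mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
S1 = (X1 - mu1).T @ (X1 - mu1)  # scatter (unnormalized covariance) of class 1
S2 = (X2 - mu2).T @ (X2 - mu2)  # scatter (unnormalized covariance) of class 2

# Step 2: within-class scatter = sum of the per-class scatter matrices.
Sw = S1 + S2
print(Sw)
```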

Step 3: Compute the Between-Class Scatter Matrix

We also need to calculate the between-class scatter matrix which, for two classes, is obtained by taking the difference of the two class means and forming its outer product with itself.
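
Continuing the sketch, Step 3 is a single outer product (the mean vectors below are illustrative, e.g. as computed in Step 1):

```python
import numpy as np

# Class means (illustrative values, e.g. from Step 1).
mu1 = np.array([2.0, 2.6667])
mu2 = np.array([7.0, 6.6667])

# Step 3: between-class scatter = outer product of the mean difference with itself.
diff = mu1 - mu2
Sb = np.outer(diff, diff)
print(Sb)
```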

Step 4: Determine the Optimal Direction, “r*”

To find the optimal direction, denoted as “r*”, we maximize the Rayleigh coefficient. This can be done by solving a generalized eigenvalue problem involving the within-class scatter matrix (Sw) and the between-class scatter matrix (Sb): r* is the eigenvector of Sw⁻¹Sb with the largest eigenvalue. In the two-class case, this reduces to a direction proportional to the inverse of Sw applied to the difference of the class means.
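
A minimal sketch of Step 4, assuming Sw and the class means from the previous steps (the values shown are illustrative):

```python
import numpy as np

# Within-class scatter and class means (illustrative values from earlier steps).
Sw = np.array([[4.0, 1.0], [1.0, 3.0]])
mu1 = np.array([2.0, 2.6667])
mu2 = np.array([7.0, 6.6667])

# Step 4, closed form for two classes: r* is proportional to Sw^{-1} (mu1 - mu2).
r_star = np.linalg.solve(Sw, mu1 - mu2)
r_star /= np.linalg.norm(r_star)  # normalize to unit length
print(r_star)

# Equivalently, r* is the leading eigenvector of Sw^{-1} Sb (same direction up to sign).
Sb = np.outer(mu1 - mu2, mu1 - mu2)
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
print(eigvecs[:, np.argmax(eigvals.real)].real)
```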

Step 5: Compute the Projection and Scatter Matrix

With the optimal direction, we can compute the projection of each feature vector onto “r,” resulting in scalar “x tilde” values. From these projected values, we can then calculate the projected mean (“mu tilde”) and the projected scatter (“S tilde”) of each class.
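
In code, Step 5 is just a dot product per sample; a sketch for one class, with assumed values for the direction and the samples:

```python
import numpy as np

# Optimal direction from Step 4 and the samples of one class (illustrative values).
r = np.array([0.8, 0.6])
X1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])

# Step 5: project each feature vector onto r -> scalar "x tilde" values.
x_tilde = X1 @ r

# Projected mean ("mu tilde") and projected scatter ("S tilde") of the class.
mu_tilde = x_tilde.mean()
S_tilde = np.sum((x_tilde - mu_tilde) ** 2)
print(x_tilde, mu_tilde, S_tilde)
```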

Step 6: Apply a Threshold and Classify

Lastly, we apply a threshold to the “x tilde” values to determine the class assignment for each data point.
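
Step 6 then reduces to a one-dimensional comparison; a minimal sketch, using the midpoint of the projected class means as the threshold (one common choice, assuming equal priors; all values are illustrative):

```python
# Projected class means and the projection of a new sample (illustrative values).
mu1_tilde, mu2_tilde = 2.8, 8.1
x_tilde_new = 4.0

# Step 6: threshold halfway between the projected means; class 1 is assumed
# to project onto the smaller mean in this toy example.
threshold = 0.5 * (mu1_tilde + mu2_tilde)
label = 1 if x_tilde_new < threshold else 2
print(f"assigned to class {label}")
```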

FAQs

Q: Can LDA be applied without class memberships?
A: No, LDA requires knowledge of class memberships to compute the class means and scatter matrices.

Q: Are there other methods for dimensionality reduction?
A: Yes, Principal Component Analysis (PCA), Sammon Transform, and Independent Component Analysis (ICA) are some other techniques available.

Q: What are the advantages of LDA?
A: LDA reduces dimensionality while preserving class information, making it suitable for classification tasks.

Conclusion

In this article, we explored the original formulation of Linear Discriminant Analysis (LDA), also known as the Fisher Transform. LDA is a powerful technique for reducing dimensionality while preserving class information. By finding the optimal projection direction, LDA enables effective classification in various pattern recognition applications.

Thank you for joining us, and we look forward to seeing you in the next episode! Feel free to check out Techal for more insightful articles on technology.
