Published By Dr. Mahsa Hassankashi | 16/09/2020

What is Principal Component Analysis - PCA


[Figure: data points before (left) and after (right) reduction; the blue arrows are the eigenvector directions, while the red and purple vectors change direction and are not eigenvectors]

There are fundamental problems when we encounter huge data sets, and we should reduce the data dimension with PCA (Principal Component Analysis). PCA shortens the time-consuming analysis process and enhances performance, and after dimensionality reduction the machine learning algorithms produce better results. Comparing the machine learning algorithms before and after applying PCA illustrates, in a practical way, the advantages of PCA, which plays a great role in achieving accurate and correct results.

PCA reduces dimensions by transforming correlated variables into linearly uncorrelated variables, so the information remains usable after the reduction, either in dimension or in the number of variables.

An increasing amount of input data leads to high dimensionality, which has far-reaching effects on the analysis process. It needs more time, so using dimensionality-reduction techniques such as PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) is the best way to solve it. In addition, keeping the results accurate while reducing dimensions and losing some data is a fundamental challenge, which can be addressed by boosting algorithms such as AdaBoost and BrownBoost.

Principal Component Analysis (PCA) is the best solution for reducing the dimension of big data, in order to optimize an analysis that involves a huge amount of data and has a high cost either in time or in memory resources. In PCA we simply eliminate the data with the lowest importance and convert a long, stretched vector into a shorter one. The new vector is short but still contains the useful samples, so we only remove repetitive points. There are many points in the left part of the figure above; in a simple way they can be reduced to the right part.

The blue arrows keep their direction under this transformation, so we call these directions eigenvectors (or characteristic vectors); they carry the substantial data. On the other hand, the red and purple vectors are not eigenvectors, because their directions change and they are not suitable for representing the data succinctly.

Computation with the data in the right picture of the figure above is easy and saves time when calculating and comparing the points among themselves. PCA keeps only the important data with heavy weight and throws out repetitive and unnecessary data that plays no significant role in our data-mining judgment.
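As a minimal sketch of this idea (the 2-D points below are made up for illustration, and NumPy is assumed), the direction with the largest eigenvalue plays the role of the blue arrow, and projecting onto it turns each 2-D point into a single number:

    import numpy as np

    # Made-up 2-D points that roughly lie along one direction
    points = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
                       [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])

    centered = points - points.mean(axis=0)      # subtract the mean of each dimension
    cov = np.cov(centered, rowvar=False)         # 2x2 covariance matrix
    eig_vals, eig_vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order

    principal = eig_vecs[:, -1]                  # "blue arrow": direction of the largest eigenvalue
    reduced = centered @ principal               # each 2-D point becomes one coordinate

    print(principal)                             # the eigenvector that keeps its direction
    print(reduced)                               # the shortened (1-D) representation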

Then PCA maps each data point (each pixel, for example) to an element of a matrix and centers the X and Y values, formed from the rows and columns, by subtracting their mean values; this explicit step is optional, since the covariance computation subtracts the means anyway, and it is mainly useful for huge data with a lot of noise.

Then we have to build the covariance matrix in order to find how much the dimensions X, Y and Z vary from their respective means; this step shows us whether or not X has any effect on the Y or Z dimensions. If the covariance equals zero, X is uncorrelated with Y and Z and has no linear influence on them; if the covariance is positive, they have a direct impact on each other; and a negative value indicates that they influence each other inversely. In fact, computing the covariance matrix reveals the correlation among features and measures the exact value that indicates the influence of each feature on the whole picture.
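A small sketch of these three cases (made-up data, NumPy assumed): two variables that move together give a positive covariance, two that move against each other give a negative one, and two unrelated variables give a covariance close to zero:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    direct = 2.0 * x + rng.normal(scale=0.1, size=1000)     # moves with x
    inverse = -2.0 * x + rng.normal(scale=0.1, size=1000)   # moves against x
    unrelated = rng.normal(size=1000)                       # no relation to x

    print(np.cov(x, direct)[0, 1])      # positive covariance: direct impact
    print(np.cov(x, inverse)[0, 1])     # negative covariance: inverse influence
    print(np.cov(x, unrelated)[0, 1])   # close to zero: (linearly) uncorrelated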


cov(X, Y) = Σ (Xi − Xmean)(Yi − Ymean) / (N − 1)

N is the number of data points in the training data set. Xi, Yi and Zi are the individual data values in the X, Y and Z dimensions, and Xmean, Ymean and Zmean are their mean values. The matrix below is for three features; each element is the covariance between two features.

    | cov(X,X)  cov(X,Y)  cov(X,Z) |
    | cov(Y,X)  cov(Y,Y)  cov(Y,Z) |
    | cov(Z,X)  cov(Z,Y)  cov(Z,Z) |

There are two tips for every squared (covariance) matrix. Firstly, the covariance of each feature with itself is equal to its variance, so the diagonal elements can be replaced by the variance values, because the two quantities are computed in the same way.


var(X) = Σ (Xi − Xmean)² / (N − 1)

cov(X, X) = Σ (Xi − Xmean)(Xi − Xmean) / (N − 1) = var(X)

So the diagonal elements of the matrix are changed according to this fact:


    | var(X)    cov(X,Y)  cov(X,Z) |
    | cov(Y,X)  var(Y)    cov(Y,Z) |
    | cov(Z,X)  cov(Z,Y)  var(Z)   |

The other tip is related to the major diagonal of the squared matrix: the elements that face each other across the major (original) diagonal are equal to each other. If the major diagonal divides the squared matrix into two symmetric sections, then every pair of elements mirrored across this line are equal, i.e. cov(X, Y) = cov(Y, X).


cov(X, Y) = cov(Y, X),  cov(X, Z) = cov(Z, X),  cov(Y, Z) = cov(Z, Y)
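These two tips can be checked numerically; in the sketch below (made-up data for three features, NumPy assumed) the diagonal of np.cov equals the per-feature variances and the matrix equals its own transpose:

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(size=(3, 50))     # three made-up features, 50 observations each

    cov = np.cov(data)                  # 3x3 covariance matrix (rows are features)
    print(np.allclose(cov, cov.T))                               # True: symmetric
    print(np.allclose(np.diag(cov), data.var(axis=1, ddof=1)))   # True: diagonal = variances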

In the equation below, I is the identity matrix, which is multiplied by λ and subtracted from the covariance matrix. Setting the determinant of the result to zero produces a polynomial, and by solving it we can extract its roots, the λ values (eigenvalues). The next step is to compute the eigenvectors (the feature vector).


det(A − λI) = 0
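A quick numerical illustration (the 3x3 matrix A below is made up, not the project's real covariance matrix): np.poly returns the coefficients of det(A − λI) and np.roots returns its roots, which match NumPy's direct eigenvalue routine:

    import numpy as np

    A = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])     # made-up symmetric covariance matrix

    coeffs = np.poly(A)                 # coefficients of the characteristic polynomial det(A - λI)
    print(np.roots(coeffs))             # its roots: the eigenvalues λ
    print(np.linalg.eigvals(A))         # the same values computed directly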

Substituting each eigenvalue λ back produces three equations with three unknowns, a system of linear equations whose solution gives us the eigenvector.


(a11 − λ)·v1 + a12·v2 + a13·v3 = 0

a21·v1 + (a22 − λ)·v2 + a23·v3 = 0

a31·v1 + a32·v2 + (a33 − λ)·v3 = 0

Here aij are the elements of the covariance matrix A and v = (v1, v2, v3) is the unknown eigenvector.

The eigenvector is extracted from the equations above by solving this system of linear equations.


[Image: the eigenvector obtained from the system above]
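For a numerical check (reusing the made-up matrix A from the previous sketch), np.linalg.eig returns every λ together with its eigenvector, and each pair satisfies A·v = λ·v:

    import numpy as np

    A = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])               # made-up covariance matrix

    eigenvalues, eigenvectors = np.linalg.eig(A)  # eigenvectors are the columns of the second result
    for i, lam in enumerate(eigenvalues):
        v = eigenvectors[:, i]
        print(lam, np.allclose(A @ v, lam * v))   # True: (A - λI)·v is the zero vector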

PCA builds the covariance matrix using the formula in the third and fourth boxes of the flow chart below. Then PCA calculates the eigenvectors and eventually extracts a feature vector with fewer dimensions; a small end-to-end sketch of these steps follows the list below.

Principal Component Analysis (Dimensional Reduction Method)
PCA-Steps:

1. Compute the variance of each feature.

2. Build the covariance matrix (the correlation between features); it is a 3×3 matrix because there are three features.

3. Calculate the eigenvalues with the aid of the determinant computation (det(A − λI) = 0).

4. The eigenvalue with the heaviest weight (the largest value) is the best one.

5. Solve three equations with three unknowns to find the eigenvectors.

6. Run the machine learning algorithms with the new features.

Mean Absolute Error rate (measurement tool for accuracy)

Time consumed (for each function)
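The sketch below walks through these steps on a tiny made-up data set with three features (NumPy assumed); it illustrates the list above rather than the exact code used for the project's measurements:

    import numpy as np
    import time

    rng = np.random.default_rng(42)
    x = rng.normal(size=200)
    # Tiny made-up data set: 200 samples, 3 features, the second correlated with the first
    data = np.column_stack([x,
                            2.0 * x + rng.normal(scale=0.2, size=200),
                            rng.normal(size=200)])

    start = time.perf_counter()

    centered = data - data.mean(axis=0)          # steps 1-2: center, then covariance (3x3)
    cov = np.cov(centered, rowvar=False)

    eig_vals, eig_vecs = np.linalg.eigh(cov)     # step 3: eigenvalues and eigenvectors
    order = np.argsort(eig_vals)[::-1]           # step 4: heaviest eigenvalue first
    components = eig_vecs[:, order[:2]]          # step 5: keep the two strongest eigenvectors

    reduced = centered @ components              # the new, lower-dimensional features

    elapsed = time.perf_counter() - start        # time consumed, as listed above
    print(reduced.shape)                         # (200, 2): ready for step 6 (the ML algorithms)
    print(elapsed)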


[Image: flow chart of the PCA steps listed above]

Three Equations, Three Unknowns

In the second phase of this project, I work on obtaining the eigenvectors in order to reduce the data dimension, which includes eliminating unnecessary vectors and keeping the useful ones. After calculating the covariance matrix (A), λI should be subtracted from it according to the equations below.


(A − λI)·v = 0

Written out row by row, this gives three equations in the three unknowns v1, v2 and v3.

The equations above form three equations with three unknowns; I have to solve this linear system of equations, and its solution gives us the eigenvector.
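Since (A − λI)·v = 0 is a homogeneous system, one way to sketch the solution numerically (again with the made-up matrix A from the earlier snippets) is to take the right singular vector of (A − λI) that belongs to its smallest singular value; that direction is sent (numerically) to zero and is therefore the eigenvector:

    import numpy as np

    A = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])        # made-up covariance matrix

    lam = np.linalg.eigvalsh(A)[-1]        # pick the largest (heaviest) eigenvalue
    M = A - lam * np.eye(3)                # the matrix of the homogeneous system

    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]                             # right singular vector of the smallest singular value
    print(v)                               # the eigenvector for λ (up to scale)
    print(np.allclose(A @ v, lam * v))     # True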

Gauss Jordan Elimination

Gauss-Jordan elimination is the best way to solve this system. First we have to build the augmented matrix, and then, by applying several row operations, the augmented matrix is transformed into reduced row-echelon form.

The augmented matrix is built from this linear system of equations by taking the coefficient of each unknown from every equation and placing it in the corresponding matrix element; the right-hand-side values form the last (rightmost) column of the matrix.

The augmented matrix is transformed into reduced row-echelon form by performing enough row operations. The row operations are: swapping two rows of the matrix, adding a multiple of one row to another, and multiplying one row by a constant (scalar) value, until the desired result is reached. The reduced row-echelon form looks like the matrix below.

Reduced row-echelon properties:

In each row, the first nonzero element (reading from the left) must be 1 (the leading one).

The leading 1 of each row must lie further to the right than the leading 1 of the row above it.

All entries above and below each leading 1 must be zero.


    | 1  0  0  b1 |
    | 0  1  0  b2 |
    | 0  0  1  b3 |

(each leading 1 lies to the right of the one in the row above, with zeros above and below it; the last column then holds the solution)
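A minimal Gauss-Jordan sketch in NumPy (the augmented matrix values are a made-up example system, not the project's actual one); it applies exactly the three row operations described above until the reduced row-echelon form appears:

    import numpy as np

    def rref(matrix, tol=1e-12):
        """Gauss-Jordan elimination: transform a matrix into reduced row-echelon form."""
        m = np.array(matrix, dtype=float)
        rows, cols = m.shape
        pivot_row = 0
        for col in range(cols):
            if pivot_row >= rows:
                break
            pivot = pivot_row + np.argmax(np.abs(m[pivot_row:, col]))
            if abs(m[pivot, col]) < tol:
                continue                                    # no usable pivot in this column
            m[[pivot_row, pivot]] = m[[pivot, pivot_row]]   # row operation: swap two rows
            m[pivot_row] /= m[pivot_row, col]               # row operation: scale to a leading 1
            for r in range(rows):
                if r != pivot_row:
                    m[r] -= m[r, col] * m[pivot_row]        # row operation: zero out above and below
            pivot_row += 1
        return m

    # Made-up augmented matrix [A | b] for the system 2x+y-z=8, -3x-y+2z=-11, -2x+y+2z=-3
    augmented = np.array([[ 2.0,  1.0, -1.0,   8.0],
                          [-3.0, -1.0,  2.0, -11.0],
                          [-2.0,  1.0,  2.0,  -3.0]])
    print(rref(augmented))   # last column gives the solution x = 2, y = 3, z = -1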

The formula below is a shortcut for computing the eigenvalues and eigenvectors.


[Image: shortcut formula for computing the eigenvalues and eigenvectors]

The one below is a second version, written more clearly, to give a better understanding of this approach.


[Image: the second, rewritten version of the shortcut formula]


    # -*- coding: utf-8 -*-
    """
    Created on Thu Sep 16 00:20:38 2020
    """
    import numpy as np
    from sklearn.decomposition import PCA

    # Six 2-D sample points arranged symmetrically around the origin
    X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])

    # Fit a PCA model that keeps both components
    pca = PCA(n_components=2)
    pca.fit(X)

    # Share of the total variance captured by each component
    print(pca.explained_variance_ratio_)   # [0.9924... 0.0075...]

    # Singular values of the centered data for each component
    print(pca.singular_values_)            # [6.30061... 0.54980...]
Reference: sklearn.decomposition.PCA

Conclusion

As I expected, with PCA the time consumed decreases significantly, but the error rate in the PCA stage increases slightly because part of the information is lost through dimensionality reduction. It is still better to use PCA, because the error grows only a little while the time is reduced enormously, so it is useful in delicate matters such as disease prediction, which needs to estimate the probability of a disease accurately and in a short time.

Feedback

Feel free to leave any feedback on this article; it is a pleasure to see your opinions and comments about this code. If you have any questions, please do not hesitate to ask me here.
