
Principal Component Analysis In Python

Real-world data is rarely as simple as predicting a person’s salary from his or her years of experience. In practice, many factors can affect an employee’s salary, and predicting a dependent variable from a large number of independent variables lowers the probability of getting an accurate prediction. That is why it is important to identify the strongest independent variables. Dimensionality reduction is a technique that helps us understand the independent variables and their variance, and thereby identify a minimal number of independent variables that capture the most variance relevant to the dependent variable.

In simple terms, dimensionality reduction helps us reduce the number of independent variables in a problem by constructing a new, smaller set of more effective ones.

Implementing Principal Component Analysis In Python

In this simple tutorial, we will learn how to implement a dimensionality reduction technique called Principal Component Analysis (PCA), which reduces the number of independent variables in a problem by identifying Principal Components. We will take a step-by-step approach to PCA.

Scaling The Data

Before jumping in to identify the strongest factors in a dataset, whatever it may be, we must make sure that all the data are on the same scale. If the data is not properly scaled, the prediction will be inaccurate because features with larger values will dominate.

from sklearn.preprocessing import StandardScaler
# Standardise each feature to zero mean and unit variance
sc = StandardScaler()
X_train = sc.fit_transform(X_train)   # fit the scaler on the training set only
X_test = sc.transform(X_test)         # apply the same scaling to the test set

We have used the StandardScaler class from the sklearn.preprocessing package to scale the dataset; a sketch of how these sets might be prepared in the first place follows the list below.

  • X_train: training set containing only the independent features
  • X_test: test set containing only the independent features
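
The scaling snippet above assumes that X_train and X_test already exist. Purely for illustration, here is one way such sets might be prepared; the breast-cancer dataset and the split parameters are assumptions made for this example, not part of the original article.

# Illustrative setup (assumption): build X_train/X_test from a sample dataset
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # 30 numeric features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)          # fit on the training set only
X_test = sc.transform(X_test)                # reuse the same scaling for the test set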

Applying PCA To Understand The Independent Factors

After the data is properly scaled, we can apply dimensionality reduction to identify a set of new, strong features, or Principal Components.

from sklearn.decomposition import PCA
pca = PCA(n_components=None)
pca.fit(X_train)
variance = pca.explained_variance_ratio_

  • n_components: number of principal components to identify

Here we have used the sklearn.decomposition module to import the PCA class. We then initialised PCA with n_components=None because we have no prior knowledge of the variance of the factors. The PCA object is then fitted on the independent variable set to calculate the variance. The explained_variance_ratio_ attribute of the PCA object returns a NumPy array containing the fraction of variance explained by each Principal Component, sorted in descending order. (The number of Principal Components will be the same as the number of factors in X_train.) Higher values in the array denote components that explain more of the variance.

From the obtained variances, choose the minimum number of principal components that together explain most of the variance.
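
One hedged way to make this choice is to look at the cumulative explained variance, as sketched below. The 95% threshold is an arbitrary choice used only for illustration.

import numpy as np

# Cumulative explained variance of the PCA object fitted above
cumulative = np.cumsum(pca.explained_variance_ratio_)
print(cumulative)

# Illustrative heuristic (assumption): keep the smallest number of components
# whose cumulative explained variance reaches at least 95%
n_keep = int(np.argmax(cumulative >= 0.95) + 1)
print("Components to keep:", n_keep)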

Applying PCA And Transforming The Datasets

After obtaining the minimum number of features (Principal Components with high variance), reinitialise PCA with n_components set to that number of Principal Components, then transform the training set and test set.

pca = PCA(n_components=2)   # replace 2 with the chosen number of Principal Components
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
explained_variance = pca.explained_variance_ratio_

The above code will transform X_train and X_test into arrays containing only the specified number of principal components.
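
As a quick, optional sanity check, one might confirm that the transformed arrays now have the expected number of columns:

# Sanity check: the number of columns should now equal n_components
print(X_train.shape)   # (number of training samples, n_components)
print(X_test.shape)    # (number of test samples, n_components)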

Using Principal Components For Prediction

X_train and X_test can now be fed to any predictive model, depending on the nature of the problem.

Example:

#Fitting Logistic Regression to the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
#Predicting the Test set results
y_pred = classifier.predict(X_test)

  • y_train: training set containing only the dependent factor
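
The snippet above omits model evaluation. Assuming a classification problem and that the true test labels y_test are available (an assumption not stated in the original code), one simple way to score the predictions is:

from sklearn.metrics import accuracy_score, confusion_matrix

# Assumes y_test (true labels for the test set) is available
print(confusion_matrix(y_test, y_pred))
print("Accuracy:", accuracy_score(y_test, y_pred))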

Important things to note:

  • PCA takes all of the original training set variables and decomposes them into a new set of variables with high explained variance.
  • Principal component analysis involves extracting linear composites of the observed variables.
  • PCA can be used to determine how much variability the independent variables can explain, but it cannot be used to see which independent variables are more important for prediction.
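
Putting the steps above together, the following is a minimal end-to-end sketch using scikit-learn’s Pipeline. The breast-cancer dataset and the choice of two components are assumptions made purely for illustration.

# Minimal end-to-end sketch (assumptions: sample dataset, 2 components)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),    # scale the features first
    ("pca", PCA(n_components=2)),   # then project onto 2 principal components
    ("clf", LogisticRegression()),  # finally fit the classifier
])
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))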
