Dimensionality reduction is an unsupervised learning technique.
Nevertheless, it can be used as a data transform pre-processing step for supervised learning algorithms on classification and regression predictive modeling datasets.
There are many dimensionality reduction algorithms to choose from and no single best algorithm for all cases. Instead, it is a good idea to explore a range of dimensionality reduction algorithms and different configurations for each algorithm.
In this tutorial, you will discover how to fit and evaluate top dimensionality reduction algorithms in Python.
After completing this tutorial, you will know:
- Dimensionality reduction seeks a lower-dimensional representation of numerical input data that preserves the salient relationships in the data.
- There are many different dimensionality reduction algorithms and no single best method for all datasets.
- How to implement, fit, and evaluate top dimensionality reduction algorithms in Python with the scikit-learn machine learning library.
Let’s get started.
Tutorial Overview
This tutorial is divided into three parts; they are:
- Dimensionality Reduction
- Dimensionality Reduction Algorithms
- Examples of Dimensionality Reduction
  - Scikit-Learn Library Installation
  - Classification Dataset
  - Principal Component Analysis
  - Singular Value Decomposition
  - Linear Discriminant Analysis
  - Isomap Embedding
  - Locally Linear Embedding
  - Modified Locally Linear Embedding
Dimensionality Reduction
Dimensionality reduction refers to techniques for reducing the number of input variables in training data.
When dealing with high dimensional data, it is often useful to reduce the dimensionality by projecting the data to a lower dimensional subspace which captures the “essence” of the data. This is called dimensionality reduction.
— Page 11, Machine Learning: A Probabilistic Perspective, 2012.
High dimensionality might mean hundreds, thousands, or even millions of input variables.
Fewer input dimensions often mean correspondingly fewer parameters or a simpler structure in the machine learning model, referred to as degrees of freedom. A model with too many degrees of freedom is likely to overfit the training dataset and may not perform well on new data.
It is desirable to have simple models that generalize well, and in turn, input data with few input variables. This is particularly true for linear models where the number of inputs and the degrees of freedom of the model are often closely related.
Dimensionality reduction is a data preparation technique performed on data prior to modeling. It might be performed after data cleaning and data scaling and before training a predictive model.
… dimensionality reduction yields a more compact, more easily interpretable representation of the target concept, focusing the user’s attention on the most relevant variables.
— Page 289, Data Mining: Practical Machine Learning Tools and Techniques, 4th edition, 2016.
As such, any dimensionality reduction performed on training data must also be performed on new data, such as a test dataset, validation dataset, and data when making a prediction with the final model.
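As a rough sketch, this usually means fitting the transform on the training data only and then reusing the fitted transform on any new data; PCA and the train/test split below are just for illustration:

# sketch: fit the transform on training data only, then reuse it on new data
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
# define an illustrative dataset and split it into train and test sets
X, y = make_classification(n_samples=1000, n_features=20, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# learn the projection from the training data only
pca = PCA(n_components=10)
X_train_reduced = pca.fit_transform(X_train)
# apply the same fitted projection to the test data
X_test_reduced = pca.transform(X_test)
print(X_train_reduced.shape, X_test_reduced.shape)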
Dimensionality Reduction Algorithms
There are many algorithms that can be used for dimensionality reduction.
Two main classes of methods are those drawn from linear algebra and those drawn from manifold learning.
Linear Algebra Methods
Matrix factorization methods drawn from the field of linear algebra can be used for dimensionality reduction.
For more on matrix factorization, see the tutorial:
- A Gentle Introduction to Matrix Factorization for Machine Learning
Some of the more popular methods include:
- Principal Components Analysis
- Singular Value Decomposition
- Non-Negative Matrix Factorization
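Of these, Non-Negative Matrix Factorization is not demonstrated in the worked examples later in this tutorial because it requires non-negative input data. A minimal sketch of how it might be used as a transform is shown below; the random non-negative data is purely illustrative:

# sketch: non-negative matrix factorization as a data transform
from numpy.random import rand
from sklearn.decomposition import NMF
# NMF requires non-negative inputs; random data is used here for illustration only
X = rand(100, 20)
# reduce 20 columns to 10
nmf = NMF(n_components=10, init='random', random_state=1, max_iter=500)
X_reduced = nmf.fit_transform(X)
print(X_reduced.shape)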
Manifold Learning Methods
Manifold learning methods seek a lower-dimensional projection of high dimensional input that captures the salient properties of the input data.
Some of the more popular methods include:
- Isomap Embedding
- Locally Linear Embedding
- Multidimensional Scaling
- Spectral Embedding
- t-distributed Stochastic Neighbor Embedding
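Of these, t-distributed Stochastic Neighbor Embedding (t-SNE) is most often used for visualization rather than as a modeling transform, because the scikit-learn TSNE class only offers fit_transform() and cannot project new, unseen data. A minimal sketch for visualizing a classification dataset might look like this:

# sketch: t-SNE for a two-dimensional visualization (not usable in a modeling pipeline)
from sklearn.datasets import make_classification
from sklearn.manifold import TSNE
from matplotlib import pyplot
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# embed the 20 input columns into 2 dimensions for plotting
X_embedded = TSNE(n_components=2, random_state=1).fit_transform(X)
pyplot.scatter(X_embedded[:, 0], X_embedded[:, 1], c=y)
pyplot.show()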
Each algorithm offers a different approach to the challenge of discovering natural relationships in data at lower dimensions.
There is no best dimensionality reduction algorithm, and no easy way to find the best algorithm for your data without using controlled experiments.
In this tutorial, we will review how to use a subset of these popular dimensionality reduction algorithms from the scikit-learn library.
The examples will provide the basis for you to copy-paste the examples and test the methods on your own data.
We will not dive into the theory behind how the algorithms work or compare them directly.
Let’s dive in.
Examples of Dimensionality Reduction
In this section, we will review how to use popular dimensionality reduction algorithms in scikit-learn.
This includes an example of using the dimensionality reduction technique as a data transform in a modeling pipeline and evaluating a model fit on the data.
The examples are designed for you to copy-paste into your own project and apply the methods to your own data. There are some algorithms available in the scikit-learn library that are not demonstrated because they cannot be used as a data transform directly given the nature of the algorithm.
As such, we will use a synthetic classification dataset in each example.
Scikit-Learn Library Installation
First, let’s install the library.
Don’t skip this step as you will need to ensure you have the latest version installed.
You can install the scikit-learn library using the pip Python installer, as follows:
sudo pip install scikit-learn
For additional installation instructions specific to your platform, see the scikit-learn installation documentation.
Next, let’s confirm that the library is installed and you are using a modern version.
Run the following script to print the library version number.
# check scikit-learn version
import sklearn
print(sklearn.__version__)
Running the example, you should see the following version number or higher.
0.23.0
Classification Dataset
We will use the make_classification() function to create a test binary classification dataset.
The dataset will have 1,000 examples with 20 input features, 10 of which are informative and 10 of which are redundant. This provides an opportunity for each technique to identify and remove redundant input features.
The fixed random seed for the pseudorandom number generator ensures we generate the same synthetic dataset each time the code runs.
An example of creating and summarizing the synthetic classification dataset is listed below.
# synthetic classification dataset
from sklearn.datasets import make_classification
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# summarize the dataset
print(X.shape, y.shape)
Running the example creates the dataset and reports the number of rows and columns matching our expectations.
(1000, 20) (1000,)
It is a binary classification task and we will evaluate a LogisticRegression model after each dimensionality reduction transform.
The model will be evaluated using the gold standard of repeated stratified 10-fold cross-validation. The mean and standard deviation classification accuracy across all folds and repeats will be reported.
The example below evaluates the model on the raw dataset as a point of comparison.
# evaluate logistic regression model on raw data
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# define the model
model = LogisticRegression()
# evaluate model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
Running the example evaluates the logistic regression on the raw dataset with all 20 columns, achieving a classification accuracy of about 82.4 percent.
A successful dimensionality reduction transform on this data should result in a model that has better accuracy than this baseline, although this may not be possible with all techniques.
Note: we are not trying to “solve” this dataset, just provide working examples that you can use as a starting point.
Accuracy: 0.824 (0.034)
Next, we can start looking at examples of dimensionality reduction algorithms applied to this dataset.
I have made some minimal attempts to tune each method to the dataset. Each dimensionality reduction method will be configured to reduce the 20 input columns to 10 where possible.
We will use a Pipeline to combine the data transform and model into an atomic unit that can be evaluated using the cross-validation procedure; for example:
...
# define the pipeline
steps = [('pca', PCA(n_components=10)), ('m', LogisticRegression())]
model = Pipeline(steps=steps)
Let’s get started.
Can you get a better result for one of the algorithms?
Let me know in the comments below.
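As a rough sketch, one way to look for a better result is to grid search the number of components for a transform inside the pipeline; the values searched below are just an example:

# sketch: grid search the number of PCA components within the pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# define the pipeline and the candidate numbers of components
model = Pipeline(steps=[('pca', PCA()), ('m', LogisticRegression())])
grid = {'pca__n_components': [2, 5, 10, 15, 19]}
# search using repeated stratified 10-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
search = GridSearchCV(model, grid, scoring='accuracy', cv=cv, n_jobs=-1)
search.fit(X, y)
print(search.best_score_, search.best_params_)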
Principal Component Analysis
Principal Component Analysis, or PCA, might be the most popular technique for dimensionality reduction with dense data (few zero values).
For more on how PCA works, see the tutorial:
- How to Calculate Principal Component Analysis (PCA) from Scratch in Python
The scikit-learn library provides the PCA class implementation of Principal Component Analysis that can be used as a dimensionality reduction data transform. The “n_components” argument can be set to configure the number of desired dimensions in the output of the transform.
The complete example of evaluating a model with PCA dimensionality reduction is listed below.
# evaluate pca with logistic regression algorithm for classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# define the pipeline
steps = [('pca', PCA(n_components=10)), ('m', LogisticRegression())]
model = Pipeline(steps=steps)
# evaluate model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
Running the example evaluates the modeling pipeline with dimensionality reduction and a logistic regression predictive model.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we don’t see any lift in model performance in using the PCA transform.
Accuracy: 0.824 (0.034)
Singular Value Decomposition
Singular Value Decomposition, or SVD, is one of the most popular techniques for dimensionality reduction for sparse data (data with many zero values).
For more on how SVD works, see the tutorial:
- How to Calculate the SVD from Scratch with Python
The scikit-learn library provides the TruncatedSVD class implementation of Singular Value Decomposition that can be used as a dimensionality reduction data transform. The “n_components” argument can be set to configure the number of desired dimensions in the output of the transform.
The complete example of evaluating a model with SVD dimensionality reduction is listed below.
# evaluate svd with logistic regression algorithm for classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# define the pipeline
steps = [('svd', TruncatedSVD(n_components=10)), ('m', LogisticRegression())]
model = Pipeline(steps=steps)
# evaluate model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
Running the example evaluates the modeling pipeline with dimensionality reduction and a logistic regression predictive model.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we don’t see any lift in model performance in using the SVD transform.
Accuracy: 0.824 (0.034)
Linear Discriminant Analysis
Linear Discriminant Analysis, or LDA, is a multi-class classification algorithm that can be used for dimensionality reduction.
The number of dimensions for the projection is limited to between 1 and C-1, where C is the number of classes. In this case, our dataset is a binary classification problem (two classes), limiting the number of dimensions to 1.
For more on LDA for dimensionality reduction, see the tutorial:
- Linear Discriminant Analysis for Dimensionality Reduction in Python
The scikit-learn library provides the LinearDiscriminantAnalysis class implementation of Linear Discriminant Analysis that can be used as a dimensionality reduction data transform. The “n_components” argument can be set to configure the number of desired dimensions in the output of the transform.
The complete example of evaluating a model with LDA dimensionality reduction is listed below.
# evaluate lda with logistic regression algorithm for classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# define the pipeline
steps = [('lda', LinearDiscriminantAnalysis(n_components=1)), ('m', LogisticRegression())]
model = Pipeline(steps=steps)
# evaluate model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
Running the example evaluates the modeling pipeline with dimensionality reduction and a logistic regression predictive model.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see a slight lift in performance as compared to the baseline fit on the raw data.
Accuracy: 0.825 (0.034)
Isomap Embedding
Isomap Embedding, or Isomap, creates an embedding of the dataset and attempts to preserve the geodesic distances (distances along the manifold) between points in the dataset.
The scikit-learn library provides the Isomap class implementation of Isomap Embedding that can be used as a dimensionality reduction data transform. The “n_components” argument can be set to configure the number of desired dimensions in the output of the transform.
The complete example of evaluating a model with Isomap dimensionality reduction is listed below.
# evaluate isomap with logistic regression algorithm for classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# define the pipeline
steps = [('iso', Isomap(n_components=10)), ('m', LogisticRegression())]
model = Pipeline(steps=steps)
# evaluate model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
Running the example evaluates the modeling pipeline with dimensionality reduction and a logistic regression predictive model.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see a lift in performance with the Isomap data transform as compared to the baseline fit on the raw data.
Accuracy: 0.888 (0.029)
Locally Linear Embedding
Locally Linear Embedding, or LLE, creates an embedding of the dataset and attempts to preserve the relationships between neighborhoods in the dataset.
The scikit-learn library provides the LocallyLinearEmbedding class implementation of Locally Linear Embedding that can be used as a dimensionality reduction data transform. The “n_components” argument can be set to configure the number of desired dimensions in the output of the transform.
The complete example of evaluating a model with LLE dimensionality reduction is listed below.
# evaluate lle and logistic regression for classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# define the pipeline
steps = [('lle', LocallyLinearEmbedding(n_components=10)), ('m', LogisticRegression())]
model = Pipeline(steps=steps)
# evaluate model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
Running the example evaluates the modeling pipeline with dimensionality reduction and a logistic regression predictive model.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see a lift in performance with the LLE data transform as compared to the baseline fit on the raw data.
Accuracy: 0.886 (0.028)
Modified Locally Linear Embedding
Modified Locally Linear Embedding, or Modified LLE, is an extension of Locally Linear Embedding that creates multiple weighting vectors for each neighborhood.
The scikit-learn library provides the LocallyLinearEmbedding class implementation of Modified Locally Linear Embedding that can be used as a dimensionality reduction data transform. The “method” argument must be set to ‘modified’ and the “n_components” argument can be set to configure the number of desired dimensions in the output of the transform, which must be less than the “n_neighbors” argument.
The complete example of evaluating a model with Modified LLE dimensionality reduction is listed below.
# evaluate modified lle and logistic regression for classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=10, random_state=7)
# define the pipeline
steps = [('lle', LocallyLinearEmbedding(n_components=5, method='modified', n_neighbors=10)), ('m', LogisticRegression())]
model = Pipeline(steps=steps)
# evaluate model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
Running the example evaluates the modeling pipeline with dimensionality reduction and a logistic regression predictive model.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see a lift in performance with the modified LLE data transform as compared to the baseline fit on the raw data.
Accuracy: 0.846 (0.036)
Summary
In this tutorial, you discovered how to fit and evaluate top dimensionality reduction algorithms in Python.
Specifically, you learned:
- Dimensionality reduction seeks a lower-dimensional representation of numerical input data that preserves the salient relationships in the data.
- There are many different dimensionality reduction algorithms and no single best method for all datasets.
- How to implement, fit, and evaluate top dimensionality reduction algorithms in Python with the scikit-learn machine learning library.