AutoML Libraries for Python


AutoML provides tools to automatically discover good machine learning model pipelines for a dataset with very little user intervention.

It is ideal for domain experts new to machine learning or machine learning practitioners looking to get good results quickly for a predictive modeling task.

Open-source libraries are available for using AutoML methods with popular machine learning libraries in Python, such as the scikit-learn machine learning library.

In this tutorial, you will discover how to use top open-source AutoML libraries for scikit-learn in Python.

After completing this tutorial, you will know:

  • AutoML refers to techniques for automatically and quickly discovering a well-performing machine learning model pipeline for a predictive modeling task.
  • The three most popular AutoML libraries for Scikit-Learn are Hyperopt-Sklearn, Auto-Sklearn, and TPOT.
  • How to use AutoML libraries to discover well-performing models for predictive modeling tasks in Python.

Let’s get started.

Tutorial Overview

This tutorial is divided into four parts; they are:

  1. Automated Machine Learning
  2. Auto-Sklearn
  3. Tree-based Pipeline Optimization Tool (TPOT)
  4. Hyperopt-Sklearn

Automated Machine Learning

Automated Machine Learning, or AutoML for short, involves the automatic selection of data preparation, machine learning model, and model hyperparameters for a predictive modeling task.

It refers to techniques that allow semi-sophisticated machine learning practitioners and non-experts to discover a good predictive model pipeline for their machine learning task quickly, with very little intervention other than providing a dataset.

… the user simply provides data, and the AutoML system automatically determines the approach that performs best for this particular application. Thereby, AutoML makes state-of-the-art machine learning approaches accessible to domain scientists who are interested in applying machine learning but do not have the resources to learn about the technologies behind it in detail.

— Page ix, Automated Machine Learning: Methods, Systems, Challenges, 2019.

Central to the approach is defining a large hierarchical optimization problem that involves identifying data transforms and the machine learning models themselves, in addition to the hyperparameters for the models.

Many companies now offer AutoML as a service, where a dataset is uploaded and a model pipeline can be downloaded or hosted and used via web service (i.e. MLaaS). Popular examples include service offerings from Google, Microsoft, and Amazon.

Additionally, open-source libraries are available that implement AutoML techniques. These libraries differ in the data transforms, models, and hyperparameters they include in the search space, and in the algorithms used to navigate or optimize that space, with versions of Bayesian Optimization being the most common.

There are many open-source AutoML libraries, although, in this tutorial, we will focus on the best-of-breed libraries that can be used in conjunction with the popular scikit-learn Python machine learning library.

They are: Hyperopt-Sklearn, Auto-Sklearn, and TPOT.


We will take a closer look at each, providing the basis for you to evaluate and consider which library might be appropriate for your project.

Auto-Sklearn

Auto-Sklearn is an open-source Python library for AutoML using machine learning models from the scikit-learn machine learning library.

It was developed by Matthias Feurer, et al. and described in their 2015 paper titled “Efficient and Robust Automated Machine Learning.”

… we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters).

— Efficient and Robust Automated Machine Learning, 2015.

The first step is to install the Auto-Sklearn library, which can be achieved using pip, as follows:

sudo pip install auto-sklearn

Once installed, we can import the library and print the version number to confirm it was installed successfully:

Running the example prints the version number. Your version number should be the same or higher.

Next, we can demonstrate using Auto-Sklearn on a synthetic classification task.

We can configure an AutoSklearnClassifier instance that controls the search, setting it to run for two minutes (120 seconds) and to discard any single model that takes longer than 30 seconds to evaluate. At the end of the run, we can report the statistics of the search and evaluate the best-performing model on a holdout dataset.

The complete example is listed below.

Running the example will take about two minutes, given the hard limit we imposed on the run.

At the end of the run, a summary is printed showing that 599 models were evaluated and the estimated performance of the final model was 95.6 percent.

We then evaluate the model on the holdout dataset and see that a classification accuracy of 97 percent was achieved, which is reasonably skillful.

For more on the Auto-Sklearn library, see:

Tree-based Pipeline Optimization Tool (TPOT)

Tree-based Pipeline Optimization Tool, or TPOT for short, is a Python library for automated machine learning.

TPOT uses a tree-based structure to represent a model pipeline for a predictive modeling problem, including the data preparation, modeling algorithms, and model hyperparameters.

… an evolutionary algorithm called the Tree-based Pipeline Optimization Tool (TPOT) that automatically designs and optimizes machine learning pipelines.

— Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

The first step is to install the TPOT library, which can be achieved using pip, as follows:

sudo pip install tpot

Once installed, we can import the library and print the version number to confirm it was installed successfully:

Running the example prints the version number. Your version number should be the same or higher.

Next, we can demonstrate using TPOT on a synthetic classification task.

This involves configuring a TPOTClassifier instance with the population size and number of generations for the evolutionary search, as well as the cross-validation procedure and metric used to evaluate models. The algorithm will then run the search procedure and save the best discovered model pipeline to file.

The complete example is listed below.

Running the example may take a few minutes, and you will see a progress bar on the command line.

The accuracy of top-performing models will be reported along the way.

Your specific results will vary given the stochastic nature of the search procedure.

Generation 1 - Current best internal CV score: 0.9166666666666666
Generation 2 - Current best internal CV score: 0.9166666666666666
Generation 3 - Current best internal CV score: 0.9266666666666666
Generation 4 - Current best internal CV score: 0.9266666666666666
Generation 5 - Current best internal CV score: 0.9266666666666666

Best pipeline: ExtraTreesClassifier(input_matrix, bootstrap=False, criterion=gini, max_features=0.35000000000000003, min_samples_leaf=2, min_samples_split=6, n_estimators=100)

In this case, we can see that the top-performing pipeline achieved a mean accuracy of about 92.6 percent.

The top-performing pipeline is then saved to a file named "tpot_best_model.py".

Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.

import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
            train_test_split(features, tpot_data['target'], random_state=1)

# Average CV score on the training set was: 0.9266666666666666
exported_pipeline = ExtraTreesClassifier(bootstrap=False, criterion="gini", max_features=0.35000000000000003, min_samples_leaf=2, min_samples_split=6, n_estimators=100)
# Fix random state in exported estimator
if hasattr(exported_pipeline, 'random_state'):
    setattr(exported_pipeline, 'random_state', 1)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)

You can then retrieve the code for creating the model pipeline and integrate it into your project.

For more on TPOT, see the following resources:

Hyperopt-Sklearn

HyperOpt is an open-source Python library for Bayesian optimization developed by James Bergstra.

It is designed for large-scale optimization for models with hundreds of parameters and allows the optimization procedure to be scaled across multiple cores and multiple machines.

HyperOpt-Sklearn wraps the HyperOpt library and allows for the automatic search of data preparation methods, machine learning algorithms, and model hyperparameters for classification and regression tasks.

… we introduce Hyperopt-Sklearn: a project that brings the benefits of automatic algorithm configuration to users of Python and scikit-learn. Hyperopt-Sklearn uses Hyperopt to describe a search space over possible configurations of Scikit-Learn components, including preprocessing and classification modules.

— Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn, 2014.

Now that we are familiar with HyperOpt and HyperOpt-Sklearn, let’s look at how to use HyperOpt-Sklearn.

The first step is to install the HyperOpt library.

This can be achieved using the pip package manager as follows:

sudo pip install hyperopt

Next, we must install the HyperOpt-Sklearn library.

This too can be installed using pip, although we must perform this operation manually by cloning the repository and running the installation from the local files, as follows:

git clone git@github.com:hyperopt/hyperopt-sklearn.git
cd hyperopt-sklearn
sudo pip install .
cd ..

We can confirm that the installation was successful by checking the version number with the following command:

sudo pip show hpsklearn

This will summarize the installed version of HyperOpt-Sklearn, confirming that a modern version is being used.

Next, we can demonstrate using Hyperopt-Sklearn on a synthetic classification task.

We can configure a HyperoptEstimator instance that runs the search, including the classifiers to consider in the search space, the pre-processing steps, and the search algorithm to use. In this case, we will use TPE, or Tree of Parzen Estimators, and perform 50 evaluations.

At the end of the search, the best performing model pipeline is evaluated and summarized.

The complete example is listed below.

Running the example may take a few minutes.

The progress of the search will be reported and you will see some warnings that you can safely ignore.

At the end of the run, the best-performing model is evaluated on the holdout dataset and the Pipeline discovered is printed for later use.

Your specific results may differ given the stochastic nature of the learning algorithm and search process. Try running the example a few times.

In this case, we can see that the chosen model achieved an accuracy of about 84.8 percent on the holdout test set. The Pipeline involves a SGDClassifier model with no pre-processing.

The printed model can then be used directly, e.g. the code copy-pasted into another project.

For more on Hyperopt-Sklearn, see:

Summary

In this tutorial, you discovered how to use top open-source AutoML libraries for scikit-learn in Python.

Specifically, you learned:

  • AutoML refers to techniques for automatically and quickly discovering a well-performing machine learning model pipeline for a predictive modeling task.
  • The three most popular AutoML libraries for Scikit-Learn are Hyperopt-Sklearn, Auto-Sklearn, and TPOT.
  • How to use AutoML libraries to discover well-performing models for predictive modeling tasks in Python.
