
Scikit-Optimize for Hyperparameter Tuning

 

Hyperparameter optimization refers to performing a search in order to discover the set of specific model configuration arguments that result in the best performance of the model on a specific dataset.

There are many ways to perform hyperparameter optimization, although modern methods, such as Bayesian Optimization, are fast and effective. The Scikit-Optimize library is an open-source Python library that provides an implementation of Bayesian Optimization that can be used to tune the hyperparameters of machine learning models from the scikit-learn Python library.

You can easily use the Scikit-Optimize library to tune the models on your next machine learning project.

In this tutorial, you will discover how to use the Scikit-Optimize library to use Bayesian Optimization for hyperparameter tuning.

After completing this tutorial, you will know:

  • Scikit-Optimize provides a general toolkit for Bayesian Optimization that can be used for hyperparameter tuning.
  • How to manually use the Scikit-Optimize library to tune the hyperparameters of a machine learning model.
  • How to use the built-in BayesSearchCV class to perform model hyperparameter tuning.

Let’s get started.

Tutorial Overview

This tutorial is divided into four parts; they are:

  1. Scikit-Optimize
  2. Machine Learning Dataset and Model
  3. Manually Tune Algorithm Hyperparameters
  4. Automatically Tune Algorithm Hyperparameters

Scikit-Optimize

Scikit-Optimize, or skopt for short, is an open-source Python library for performing optimization tasks.

It offers efficient optimization algorithms, such as Bayesian Optimization, and can be used to find the minimum or maximum of arbitrary cost functions.

Bayesian Optimization provides a principled technique based on Bayes Theorem to direct a search of a global optimization problem that is efficient and effective. It works by building a probabilistic model of the objective function, called the surrogate function, that is then searched efficiently with an acquisition function before candidate samples are chosen for evaluation on the real objective function.
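
As a minimal sketch of this idea, skopt's gp_minimize() function can be applied to an arbitrary objective function over a list of dimensions; the toy objective, bounds, and evaluation budget below are purely illustrative:

# minimize a simple one-dimensional function with gp_minimize
from skopt import gp_minimize
from skopt.space import Real

# toy objective with a known minimum at x = 2.0
def objective(params):
    x = params[0]
    return (x - 2.0) ** 2

# build a surrogate of the objective and search it with a budget of 20 evaluations
result = gp_minimize(objective, [Real(-5.0, 5.0)], n_calls=20, random_state=1)
print(result.x, result.fun)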

For more on the topic of Bayesian Optimization, see the tutorial:

  • How to Implement Bayesian Optimization From Scratch in Python

Importantly, the library provides support for tuning the hyperparameters of machine learning algorithms offered by the scikit-learn library, so-called hyperparameter optimization. As such, it offers a more efficient alternative to procedures such as grid search and random search.

The scikit-optimize library can be installed using pip, as follows:
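
pip install scikit-optimize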

Once installed, we can import the library and print the version number to confirm the library was installed successfully and can be accessed.

The complete example is listed below.

# report scikit-optimize version number
import skopt
print('skopt %s' % skopt.__version__)

Running the example reports the currently installed version number of scikit-optimize.

Your version number should be the same or higher.

skopt 0.7.2

For more installation instructions, see the documentation:

  • Scikit-Optimize Installation Instructions

Now that we are familiar with what Scikit-Optimize is and how to install it, let’s explore how we can use it to tune the hyperparameters of a machine learning model.

Machine Learning Dataset and Model

First, let’s select a standard dataset and a model to address it.

We will use the ionosphere machine learning dataset. This is a standard machine learning dataset comprising 351 rows of data with 34 numerical input variables and a target variable with two class values, i.e. a binary classification task.

Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 64 percent. A top performing model can achieve accuracy on this same test harness of about 94 percent. This provides the bounds of expected performance on this dataset.

The dataset involves predicting whether measurements of the ionosphere indicate a specific structure or not.

You can learn more about the dataset in the UCI Machine Learning Repository, where it is available as the Ionosphere Data Set.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads the dataset and summarizes its shape.

# summarize the ionosphere dataset
from pandas import read_csv
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)

Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 351 rows of data with 34 input variables.

We can evaluate a support vector machine (SVM) model on this dataset using repeated stratified 10-fold cross-validation.

We can report the mean model performance on the dataset averaged over all folds and repeats, which will provide a reference for model hyperparameter tuning performed in later sections.

The complete example is listed below.

# evaluate an svm for the ionosphere dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.svm import SVC
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
# define model
model = SVC()
# define test harness
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
m_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
print('Accuracy: %.3f (%.3f)' % (mean(m_scores), std(m_scores)))

Running the example first loads and prepares the dataset, then evaluates the SVM model on the dataset.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the SVM with default hyperparameters achieved a mean classification accuracy of about 93.7 percent, which is skillful and close to the top performance on the problem of 94 percent.

Manually Tune Algorithm Hyperparameters

The Scikit-Optimize library can be used to manually tune the hyperparameters of a machine learning model using its Bayesian Optimization capabilities.

First, we must define the search space. In this case, we will tune four hyperparameters of the SVM: C (the regularization parameter) and gamma (the kernel coefficient), each on a log-uniform scale between 1e-6 and 100; the kernel type as a categorical variable ('linear', 'poly', 'rbf', or 'sigmoid'); and the polynomial degree as an integer between 1 and 5. The search space is defined below as a list using the Real, Integer, and Categorical classes from the skopt.space module. Note the data type, the range, and the name of the hyperparameter specified for each.
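
# define the space of hyperparameters to search
search_space = list()
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='C'))
search_space.append(Categorical(['linear', 'poly', 'rbf', 'sigmoid'], name='kernel'))
search_space.append(Integer(1, 5, name='degree'))
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='gamma'))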

We can then define a function that will be called by the search procedure. This is a function expected by the optimization procedure later and takes a model and set of specific hyperparameters for the model, evaluates it, and returns a score for the set of hyperparameters.

In our case, we want to evaluate the model using repeated stratified 10-fold cross-validation on our ionosphere dataset. We want to maximize classification accuracy, i.e. find the set of model hyperparameters that gives the best accuracy. Because the optimization procedure minimizes the score returned from this function, we will return one minus the accuracy; perfect skill will be (1 – accuracy), or 0.0, and the worst skill will be 1.0.

The evaluate_model() function below implements this and takes a specific set of hyperparameters.
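
# define the function used to evaluate a given configuration
@use_named_args(search_space)
def evaluate_model(**params):
    # configure the model with specific hyperparameters
    model = SVC()
    model.set_params(**params)
    # define test harness
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    # evaluate with repeated stratified 10-fold cross-validation
    result = cross_val_score(model, X, y, cv=cv, n_jobs=-1, scoring='accuracy')
    # calculate the mean of the scores
    estimate = mean(result)
    # convert from a maximizing score to a minimizing score
    return 1.0 - estimate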

Next, we can execute the search by calling the gp_minimize() function and passing the name of the function to call to evaluate each model and the search space to optimize.

The procedure will run for a fixed budget of objective function evaluations (100 by default) and return a result.
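
...
# perform optimization
result = gp_minimize(evaluate_model, search_space)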

The result object contains lots of details, but importantly, we can access the score of the best-performing configuration and the hyperparameters used by the best-performing model.

...
# summarize the finding
print('Best Accuracy: %.3f' % (1.0 - result.fun))
print('Best Parameters: %s' % (result.x))

Tying this together, the complete example of manually tuning the hyperparameters of an SVM on the ionosphere dataset is listed below.

# manually tune svm model hyperparameters using skopt on the ionosphere dataset
from numpy import mean
from pandas import read_csv
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.svm import SVC
from skopt.space import Integer
from skopt.space import Real
from skopt.space import Categorical
from skopt.utils import use_named_args
from skopt import gp_minimize

# define the space of hyperparameters to search
search_space = list()
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='C'))
search_space.append(Categorical(['linear', 'poly', 'rbf', 'sigmoid'], name='kernel'))
search_space.append(Integer(1, 5, name='degree'))
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='gamma'))

# define the function used to evaluate a given configuration
@use_named_args(search_space)
def evaluate_model(**params):
    # configure the model with specific hyperparameters
    model = SVC()
    model.set_params(**params)
    # define test harness
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    # evaluate with repeated stratified 10-fold cross-validation
    result = cross_val_score(model, X, y, cv=cv, n_jobs=-1, scoring='accuracy')
    # calculate the mean of the scores
    estimate = mean(result)
    # convert from a maximizing score to a minimizing score
    return 1.0 - estimate

# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
# perform optimization
result = gp_minimize(evaluate_model, search_space)
# summarize the finding
print('Best Accuracy: %.3f' % (1.0 - result.fun))
print('Best Parameters: %s' % (result.x))

Running the example may take a few moments, depending on the speed of your machine.

You may see some warning messages that you can safely ignore, such as:

UserWarning: The objective has been evaluated at this point before.

At the end of the run, the best-performing configuration is reported.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the best configuration, reported in the order of the search space list, was a modest C value, an RBF kernel, a degree of 2 (ignored by the RBF kernel), and a modest gamma value.

Importantly, we can see that the accuracy of this model was approximately 94.8 percent, which makes it a top-performing model on this dataset.

(351, 34) (351,)
Best Accuracy: 0.948
Best Parameters: [1.2852670137769258, 'rbf', 2, 0.18178016885627174]

This is not the only way to use the Scikit-Optimize library for hyperparameter tuning. In the next section, we can see a more automated approach.

Automatically Tune Algorithm Hyperparameters

The Scikit-Learn machine learning library provides tools for tuning model hyperparameters.

Specifically, it provides the GridSearchCV and RandomizedSearchCV classes that take a model, a search space, and a cross-validation configuration.

The benefit of these classes is that the search procedure is performed automatically, requiring minimal configuration.

The Scikit-Optimize library provides a similar interface for performing Bayesian Optimization of model hyperparameters via the BayesSearchCV class.

This class can be used in the same way as the Scikit-Learn equivalents.

First, the search space must be defined as a dictionary with the hyperparameter name as the key and the range of values to search as the value.

...
# define search space
params = dict()
params['C'] = (1e-6, 100.0, 'log-uniform')
params['gamma'] = (1e-6, 100.0, 'log-uniform')
params['degree'] = (1,5)
params['kernel'] = ['linear', 'poly', 'rbf', 'sigmoid']

We can then define the BayesSearchCV configuration taking the model we wish to evaluate, the hyperparameter search space, and the cross-validation configuration.

We can then execute the search and report the best result and configuration at the end.
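
A minimal sketch of these two steps, reusing the params dictionary above, the SVC model, and the same repeated stratified k-fold harness as the earlier examples (the use of all cores via n_jobs=-1 is an assumption), looks as follows:

...
# define the search
search = BayesSearchCV(estimator=SVC(), search_spaces=params, n_jobs=-1, cv=cv)
# perform the search
search.fit(X, y)
# report the best result
print(search.best_score_)
print(search.best_params_)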

Tying this together, the complete example of automatically tuning SVM hyperparameters using the BayesSearchCV class on the ionosphere dataset is listed below.
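
The listing below is a sketch of that complete example; it reuses the dataset loading and cross-validation setup from the earlier examples and leaves the BayesSearchCV defaults (for example, the number of search iterations) in place.

# automatic svm hyperparameter tuning using skopt for the ionosphere dataset
from pandas import read_csv
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.svm import SVC
from skopt import BayesSearchCV
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
# define search space
params = dict()
params['C'] = (1e-6, 100.0, 'log-uniform')
params['gamma'] = (1e-6, 100.0, 'log-uniform')
params['degree'] = (1, 5)
params['kernel'] = ['linear', 'poly', 'rbf', 'sigmoid']
# define test harness
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the search
search = BayesSearchCV(estimator=SVC(), search_spaces=params, n_jobs=-1, cv=cv)
# perform the search
search.fit(X, y)
# report the best result
print(search.best_score_)
print(search.best_params_)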

Running the example may take a few moments, depending on the speed of your machine.

You may see some warning messages that you can safely ignore, such as:

UserWarning: The objective has been evaluated at this point before.

At the end of the run, the best-performing configuration is reported.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the search achieved a mean classification accuracy of about 95.2 percent, exceeding the top performance of about 94 percent reported for this test harness.

The search discovered a large C value, an RBF kernel, and a small gamma value.

(351, 34) (351,)
0.9525166191832859
OrderedDict([('C', 4.8722263953328735), ('degree', 4), ('gamma', 0.09805881007239009), ('kernel', 'rbf')])

This provides a template that you can use to tune the hyperparameters on your machine learning project.

Summary

In this tutorial, you discovered how to use the Scikit-Optimize library to use Bayesian Optimization for hyperparameter tuning.

Specifically, you learned:

  • Scikit-Optimize provides a general toolkit for Bayesian Optimization that can be used for hyperparameter tuning.
  • How to manually use the Scikit-Optimize library to tune the hyperparameters of a machine learning model.
  • How to use the built-in BayesSearchCV class to perform model hyperparameter tuning.
