
Understanding Multi-Label Classification with Deep Learning

Multi-label classification involves predicting zero or more class labels.

Unlike normal classification tasks where class labels are mutually exclusive, multi-label classification requires specialized machine learning algorithms that support predicting multiple mutually non-exclusive classes or “labels.”

Deep learning neural networks are an example of an algorithm that natively supports multi-label classification problems. Neural network models for multi-label classification tasks can be easily defined and evaluated using the Keras deep learning library.

In this tutorial, you will discover how to develop deep learning models for multi-label classification.

After completing this tutorial, you will know:

  • Multi-label classification is a predictive modeling task that involves predicting zero or more mutually non-exclusive class labels.
  • Neural network models can be configured for multi-label classification tasks.
  • How to evaluate a neural network for multi-label classification and make a prediction for new data.

Let’s get started.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Multi-Label Classification
  • Neural Networks for Multiple Labels
  • Neural Network for Multi-Label Classification

Multi-Label Classification

Classification is a predictive modeling problem that involves outputting a class label given some input.

It is different from regression tasks that involve predicting a numeric value.

Typically, a classification task involves predicting a single label. Alternately, it might involve predicting the likelihood across two or more class labels. In these cases, the classes are mutually exclusive, meaning the classification task assumes that the input belongs to one class only.

Some classification tasks require predicting more than one class label. This means that class labels or class membership are not mutually exclusive. For example, a photograph might be assigned both the label “beach” and the label “sunset.” These tasks are referred to as multiple label classification, or multi-label classification for short.

In multi-label classification, zero or more labels are required as output for each input sample, and the outputs are required simultaneously. The assumption is that the output labels are a function of the inputs.

We can create a synthetic multi-label classification dataset using the make_multilabel_classification() function in the scikit-learn library.

Our dataset will have 1,000 samples with 10 input features. The dataset will have three class label outputs for each sample, and each class label will take one of two values (0 or 1, e.g. present or not present).

The complete example of creating and summarizing the synthetic multi-label classification dataset is listed below.

# example of a multi-label classification task
from sklearn.datasets import make_multilabel_classification
# define dataset
X, y = make_multilabel_classification(n_samples=1000, n_features=10, n_classes=3, n_labels=2, random_state=1)
# summarize dataset shape
print(X.shape, y.shape)
# summarize first few examples
for i in range(10):
    print(X[i], y[i])

Running the example creates the dataset and summarizes the shape of the input and output elements.

We can see that, as expected, there are 1,000 samples, each with 10 input features and three output features.

The first 10 rows of inputs and outputs are summarized and we can see that all inputs for this dataset are numeric and that output class labels have 0 or 1 values for each of the three class labels.

(1000, 10) (1000, 3)

[ 3. 3. 6. 7. 8. 2. 11. 11. 1. 3.] [1 1 0]
[7. 6. 4. 4. 6. 8. 3. 4. 6. 4.] [0 0 0]
[ 5. 5. 13. 7. 6. 3. 6. 11. 4. 2.] [1 1 0]
[1. 1. 5. 5. 7. 3. 4. 6. 4. 4.] [1 1 1]
[ 4. 2. 3. 13. 7. 2. 4. 12. 1. 7.] [0 1 0]
[ 4. 3. 3. 2. 5. 2. 3. 7. 2. 10.] [0 0 0]
[ 3. 3. 3. 11. 6. 3. 4. 14. 1. 3.] [0 1 0]
[ 2. 1. 7. 8. 4. 5. 10. 4. 6. 6.] [1 1 1]
[ 5. 1. 9. 5. 3. 4. 11. 8. 1. 8.] [1 1 1]
[ 2. 11. 7. 6. 2. 2. 9. 11. 9. 3.] [1 1 1]

Next, let’s look at how we can develop neural network models for multi-label classification tasks.

Neural Networks for Multiple Labels

Some machine learning algorithms support multi-label classification natively.

Neural network models can be configured to support multi-label classification and can perform well, depending on the specifics of the classification task.

Multi-label classification can be supported directly by neural networks simply by specifying the number of target labels in the problem as the number of nodes in the output layer. For example, a task that has three output labels (classes) will require a neural network output layer with three nodes.

Each node in the output layer must use the sigmoid activation. This will predict a probability of class membership for the label, a value between 0 and 1. Finally, the model must be fit with the binary cross-entropy loss function.

In summary, to configure a neural network model for multi-label classification, the specifics are:

  • Number of nodes in the output layer matches the number of labels.
  • Sigmoid activation for each node in the output layer.
  • Binary cross-entropy loss function.

We can demonstrate this using the Keras deep learning library.

We will define a Multilayer Perceptron (MLP) model for the multi-label classification task defined in the previous section.

Each sample has 10 inputs and three outputs; therefore, the network requires an input layer that expects 10 inputs (specified via the “input_dim” argument on the first hidden layer) and an output layer with three nodes.

We will use the popular ReLU activation function in the hidden layer. The hidden layer has 20 nodes that were chosen after some trial and error. We will fit the model using binary cross-entropy loss and the Adam version of stochastic gradient descent.

The definition of the network for the multi-label classification task is listed below.
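
Here, n_inputs and n_outputs hold the number of input features and output labels (10 and three for our dataset); the same construction appears in the complete example later in the tutorial.

# define the model
model = Sequential()
model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')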

You may want to adapt this model for your own multi-label classification task; therefore, we can create a function to define and return the model where the number of input and output variables is provided as arguments.
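
A minimal version of such a function, matching the model definition above, is listed below; it is reused in the complete example that follows.

# get the model
def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(n_outputs, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model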

Now that we are familiar with how to define an MLP for multi-label classification, let’s explore how this model can be evaluated.

Neural Network for Multi-Label Classification

If the dataset is small, it is good practice to evaluate neural network models repeatedly on the same dataset and report the mean performance across the repeats.

This is because of the stochastic nature of the learning algorithm.

Additionally, it is good practice to use k-fold cross-validation instead of train/test splits of a dataset to get an unbiased estimate of model performance when making predictions on new data. Again, this is only practical if there is not so much data that the process cannot be completed in a reasonable time.

Taking this into account, we will evaluate the MLP model on the multi-label classification task using repeated k-fold cross-validation with 10 folds and three repeats.

The MLP model will predict the probability for each class label by default. This means it will predict three probabilities for each sample. These can be converted to crisp class labels by rounding the values to either 0 or 1. We can then calculate the classification accuracy for the crisp class labels.
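
Inside each evaluation fold, the prediction, rounding, and scoring steps look like this (these lines also appear in the complete example below):

# predict probabilities for the test set
yhat = model.predict(X_test)
# round probabilities to crisp 0/1 class labels
yhat = yhat.round()
# calculate accuracy on the crisp labels
acc = accuracy_score(y_test, yhat)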

The scores are collected and can be summarized by reporting the mean and standard deviation across all repeats and cross-validation folds.

The evaluate_model() function below takes the dataset, evaluates the model, and returns a list of evaluation scores, in this case, accuracy scores.

We can then load our dataset and evaluate the model and report the mean performance.

Tying this together, the complete example is listed below.

# mlp for multi-label classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import RepeatedKFold
from keras.models import Sequential
from keras.layers import Dense
from sklearn.metrics import accuracy_score

# get the dataset
def get_dataset():
    X, y = make_multilabel_classification(n_samples=1000, n_features=10, n_classes=3, n_labels=2, random_state=1)
    return X, y

# get the model
def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(n_outputs, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model

# evaluate a model using repeated k-fold cross-validation
def evaluate_model(X, y):
    results = list()
    n_inputs, n_outputs = X.shape[1], y.shape[1]
    # define evaluation procedure
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
    # enumerate folds
    for train_ix, test_ix in cv.split(X):
        # prepare data
        X_train, X_test = X[train_ix], X[test_ix]
        y_train, y_test = y[train_ix], y[test_ix]
        # define model
        model = get_model(n_inputs, n_outputs)
        # fit model
        model.fit(X_train, y_train, verbose=0, epochs=100)
        # make a prediction on the test set
        yhat = model.predict(X_test)
        # round probabilities to class labels
        yhat = yhat.round()
        # calculate accuracy
        acc = accuracy_score(y_test, yhat)
        # store result
        print('>%.3f' % acc)
        results.append(acc)
    return results

# load dataset
X, y = get_dataset()
# evaluate model
results = evaluate_model(X, y)
# summarize performance
print('Accuracy: %.3f (%.3f)' % (mean(results), std(results)))

Running the example reports the classification accuracy for each fold and each repeat, to give an idea of the evaluation progress.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

At the end, the mean and standard deviation of the accuracy are reported. In this case, the model is shown to achieve an accuracy of about 81.2 percent.

You can use this code as a template for evaluating MLP models on your own multi-label classification tasks. The number of nodes and layers in the model can easily be adapted and tailored to the complexity of your dataset.
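
For example, a deeper variant might look like the sketch below; the function name and layer sizes here are illustrative choices, not tuned values.

# a deeper variant of get_model(); layer sizes are illustrative, tune for your data
def get_deeper_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(32, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(n_outputs, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model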

Once a model configuration is chosen, we can use it to fit a final model on all available data and make a prediction for new data.

The example below demonstrates this by first fitting the MLP model on the entire multi-label classification dataset, then calling the predict() function on the fit model in order to make a prediction for a new row of data.

# use mlp for prediction on multi-label classification
from numpy import asarray
from sklearn.datasets import make_multilabel_classification
from keras.models import Sequential
from keras.layers import Dense

# get the dataset
def get_dataset():
    X, y = make_multilabel_classification(n_samples=1000, n_features=10, n_classes=3, n_labels=2, random_state=1)
    return X, y

# get the model
def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(n_outputs, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model

# load dataset
X, y = get_dataset()
n_inputs, n_outputs = X.shape[1], y.shape[1]
# get model
model = get_model(n_inputs, n_outputs)
# fit the model on all data
model.fit(X, y, verbose=0, epochs=100)
# make a prediction for new data
row = [3, 3, 6, 7, 8, 2, 11, 11, 1, 3]
newX = asarray([row])
yhat = model.predict(newX)
print('Predicted: %s' % yhat[0])

Running the example fits the model and makes a prediction for a new row. As expected, the prediction contains the three output variables required for the multi-label classification task: a probability for each class label.
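
If crisp class labels are needed instead of probabilities, the predicted row can be rounded, just as in the evaluation code above:

# round the predicted probabilities to crisp 0/1 class labels
labels = yhat[0].round()
print('Predicted labels: %s' % labels)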

Summary

In this tutorial, you discovered how to develop deep learning models for multi-label classification.

Specifically, you learned:

  • Multi-label classification is a predictive modeling task that involves predicting zero or more mutually non-exclusive class labels.
  • Neural network models can be configured for multi-label classification tasks.
  • How to evaluate a neural network for multi-label classification and make a prediction for new data.
