Guide To Catalyst

Catalyst

Catalyst is a PyTorch framework developed to advance research and development in deep learning. It enables code reuse, reproducibility and rapid experimentation so that users can conveniently build deep learning models and pipelines without writing yet another training loop.

The Catalyst framework is part of the PyTorch ecosystem – a collection of numerous tools and libraries for AI development. It is also part of the Catalyst Ecosystem – an MLOps ecosystem that expedites training, analysis and deployment of deep learning experiments through the Catalyst, Alchemy and Reaction frameworks respectively.

Key features of Catalyst

  • It enables creating deep learning pipelines with just a few lines of code.
  • It is compatible with Python 3.6+ and PyTorch 1.3+ versions.
  • It enables the creation of a universal train/inference loop with features such as evaluation metrics, model checkpointing and early-stopping included.
  • All the source code, as well as environment variables, remain saved thereby enabling code reproducibility.
  • It enables creating configuration files for storing the model’s hyperparameters.
  • It also has a provision for ‘callbacks’ – ways to reuse parts of a train/inference pipeline with the required customizations (see the sketch after this list).
  • It supports some of the best deep learning R&D practices, such as Stochastic Weight Averaging (SWA), the Ranger optimizer, one-cycle training, fp16 precision, distributed training and so on.
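To give a flavour of the callback mechanism, here is a minimal sketch using one of Catalyst’s built-in callbacks. It assumes the Catalyst 21.x dl.EarlyStoppingCallback signature (patience, loader_key, metric_key, minimize); such a callback is simply appended to the callbacks list passed to the runner’s train() call shown later:

 from catalyst import dl

 # stop training if the validation loss has not improved for 3 consecutive epochs
 early_stopping = dl.EarlyStoppingCallback(
     patience=3,
     loader_key="valid",
     metric_key="loss",
     minimize=True,
 )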

Practical implementation

Here’s a demonstration of handling an image classification task using Catalyst. We use the well-known MNIST dataset, which has 10 output classes (images of handwritten digits from 0 to 9). The code has been implemented in Google Colab with Python 3.7.10 and Catalyst 21.3. A step-wise explanation of the code follows:

  1. Install the Catalyst library using the pip command

!pip install -U catalyst

(the -U option upgrades the library to its latest version)
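To confirm which version got installed, a quick check (assuming the package exposes __version__, as most PyPI packages do):

 import catalyst
 print(catalyst.__version__)   # e.g. 21.3, the version used in this article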

  2. Import required libraries and modules
 import os
 from torch import nn, optim
 from torch.utils.data import DataLoader
 from catalyst import dl, utils
 from catalyst.data.transforms import ToTensor
 from catalyst.contrib.datasets import MNIST
  3. Define the neural network model, loss function and optimizer to be used

Here, we use a simple architecture that flattens each 28 × 28 input image into a 784-dimensional vector and feeds it to a single linear layer with 10 output neurons, one for each of the 10 output classes.

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
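To make the shapes concrete, here is a quick sanity check with a dummy batch (the batch size of 32 is an arbitrary choice for illustration):

 import torch

 dummy_batch = torch.zeros(32, 1, 28, 28)   # 32 fake single-channel 28x28 images
 print(net(dummy_batch).shape)              # torch.Size([32, 10]) -> one score per class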

We have used PyTorch’s cross-entropy loss function, which combines LogSoftmax and NLLLoss in one class.

loss = nn.CrossEntropyLoss()
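Since that equivalence is the key point, a small check with dummy data can confirm that CrossEntropyLoss matches LogSoftmax followed by NLLLoss:

 import torch

 logits = torch.randn(4, 10)            # 4 dummy samples, 10 classes
 labels = torch.randint(0, 10, (4,))    # 4 dummy target labels
 ce = loss(logits, labels)              # the nn.CrossEntropyLoss defined above
 nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), labels)
 print(torch.allclose(ce, nll))         # True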

We use the Adam optimization algorithm with a learning rate of 0.02.

opt = optim.Adam(net.parameters(), lr=0.02)  # 'lr' denotes the learning rate

  4. Load data for training and validation

Using PyTorch’s DataLoader class, we load the MNIST dataset from Catalyst’s contrib API. We create a dictionary holding both the training (‘train’) and validation (‘valid’) loaders at once by setting the ‘train’ parameter of MNIST() to True or False respectively.

 data_loader = {
     "train": DataLoader(
         MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()),
         batch_size=32,
     ),
     "valid": DataLoader(
         MNIST(os.getcwd(), train=False, download=True, transform=ToTensor()),
         batch_size=32,
     ),
 }

The ToTensor() transform from Catalyst’s data API converts a PIL image or NumPy array into a PyTorch tensor representation.
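As a rough illustration, and assuming Catalyst’s ToTensor mirrors torchvision’s behaviour (an HWC uint8 image converted to a CHW float tensor scaled to [0, 1]; check the transform’s docstring for your Catalyst version):

 import numpy as np

 fake_image = np.random.randint(0, 256, size=(28, 28, 1), dtype=np.uint8)  # HWC grayscale image
 tensor_image = ToTensor()(fake_image)
 print(tensor_image.shape, tensor_image.dtype)   # expected: torch.Size([1, 28, 28]) torch.float32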

  5. Define an experiment runner for handling experiments with a supervised model using the SupervisedRunner() class of the runners API.
exp_runner = dl.SupervisedRunner(
    input_key="features",   # batch key holding the model input
    output_key="logits",    # batch key under which the model output is stored
    target_key="targets",   # batch key holding the ground-truth labels
    loss_key="loss",        # key under which the computed loss is stored
)
  6. Train the model using the runner’s train() method
 exp_runner.train(
     model=net,            # model to be trained
     criterion=loss,       # loss function for training
     optimizer=opt,        # optimizer for training
     loaders=data_loader,  # dictionary with data loaders for training and validation
     num_epochs=1,
     callbacks=[           # list of Catalyst callbacks
         # callback for accuracy computation
         dl.AccuracyCallback(input_key="logits", target_key="targets", topk_args=(1, 3, 5)),
         # callback for plotting the confusion matrix to the loggers
         dl.ConfusionMatrixCallback(input_key="logits", target_key="targets", num_classes=10),
     ],
     logdir="./logs",             # path to the output directory
     valid_loader="valid",        # name of the loader used to compute metrics and save checkpoints
     valid_metric="loss",         # metric name by which to select the checkpoints
     minimize_valid_metric=True,  # whether to minimize the valid metric specified above
     verbose=True,                # display the status of model training in the console
     load_best_on_end=True,       # load the best checkpoint state as per the validation metric
 )

Sample output:

 Hparams (experiment): {}

 1/1 * Epoch (train): 100% 1875/1875 [00:20<00:00, 92.78it/s, accuracy=0.938, accuracy01=0.938, accuracy03=1.000, accuracy05=1.000, loss=0.098, lr=0.020, momentum=0.900]

 train (1/1) accuracy: 0.8802833557128906 | accuracy/std: 0.07081212792674185 | accuracy01: 0.8802833557128906 | accuracy01/std: 0.07081212792674185 | accuracy03: 0.9749666452407837 | accuracy03/std: 0.03580065525942475 | accuracy05: 0.9921166896820068 | accuracy05/std: 0.02176691186632642 | loss: 0.5139051675796509 | loss/std: 0.3664878010749817 | lr: 0.02 | momentum: 0.9
 
1/1 * Epoch (valid): 100% 313/313 [00:03<00:00, 83.65it/s, accuracy=0.875, accuracy01=0.875, accuracy03=1.000, accuracy05=1.000, loss=0.608, lr=0.020, momentum=0.900]
 
valid (1/1) accuracy: 0.8496999740600586 | accuracy/std: 0.08438007466043124 | accuracy01: 0.8496999740600586 | accuracy01/std: 0.08438007466043124 | accuracy03: 0.9573000073432922 | accuracy03/std: 0.04357493405029723 | accuracy05: 0.9886000156402588 | accuracy05/std: 0.019559607813700718 | loss: 0.8412032127380371 | loss/std: 0.5866137742996216 | lr: 0.02 | momentum: 0.9

Note: Since topk_args=(1, 3, 5) is passed to the AccuracyCallback, accuracy is computed over the top-1, top-3 and top-5 predictions. The output therefore displays the average accuracy as well as accuracy01, accuracy03 and accuracy05.
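The idea behind top-k accuracy can be sketched in plain PyTorch (a toy example with made-up logits, not Catalyst’s internal implementation):

 import torch

 logits = torch.tensor([[0.1, 0.7, 0.2],
                        [0.5, 0.2, 0.3]])           # 2 samples, 3 classes
 targets = torch.tensor([0, 0])

 top2 = logits.topk(k=2, dim=1).indices             # classes with the 2 highest scores per sample
 hits = (top2 == targets.unsqueeze(1)).any(dim=1)   # is the target among the top-2?
 print(hits.float().mean())                         # tensor(0.5000): only the second sample is a hit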

These accuracy results for the training and validation loaders are also written to per-loader CSV log files under the log directory (‘./logs’), which appear as follows:

Sample training CSV log


Sample validation CSV log
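Finally, since load_best_on_end=True restores the best checkpoint into the model, the trained network can be used for inference like any ordinary PyTorch model. A minimal sketch, reusing the validation loader defined above:

 import torch

 net.eval()
 images, labels = next(iter(data_loader["valid"]))
 with torch.no_grad():
     predicted_digits = net(images).argmax(dim=1)
 print(predicted_digits[:10])   # predicted classes for the first 10 validation images
 print(labels[:10])             # corresponding ground-truth labels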
