
Accelerating MLOps: using CI/CD with ML models


Continuous Integration and Continuous Deployment (CI/CD) are key components of any mature software development environment. During CI, newly added code is merged into the codebase, kicking off builds and automated testing. If all tests succeed, then the CD phase begins, deploying the changes automatically to production. In this way, developers can immediately release changes to production by simply committing to, or merging into, the proper branch in their version control system.

Developers have a great deal of flexibility in how they build this pipeline, thanks to the wide variety of open and interoperable platforms for version control and CI/CD. This is not, however, always true in the world of machine learning: it can be difficult to properly version, track, and productionize machine learning models.

Challenges of CI/CD in machine learning

Some existing services provide this functionality effectively, but lock data scientists into a black-box silo where their models must be trained, tracked, and deployed on a closed technology stack. Even when open systems are available, they do not always interoperate cleanly, forcing data scientists to climb a steep learning curve or bring in specialists to build novel deployment tools.

At Algorithmia, we believe ML teams should have the freedom to use any training platform, language, and framework they prefer, then easily and quickly deploy their models to production. To enable this, we provide CI/CD workflows for Jenkins and GitHub Actions that work out of the box and can be easily adapted to just about any other CI/CD tool. These workflows continuously deploy your ML models to our scalable model-hosting platform, whether it runs in our public marketplace or in your own private-cloud or on-prem cluster.

This is made possible by our Algorithm Management API, which provides a code-only solution for deploying your model as a scalable, versioned API endpoint. Let’s take a look at a typical CI/CD pipeline for model deployment:

The process kicks off when you train your initial model or modify the prediction code—or when an automatic process retrains the model as new data becomes available. The files are saved to network-available storage and/or a version control system such as Git. This triggers any tests that must be run, then kicks off the deployment process. In an ideal world, your new model will be live on your production servers within seconds or minutes as a readily available API endpoint that apps and services can call. In addition, your endpoint should support versioning, so dependent apps and services can access older versions of your model as easily as the latest one.
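To make the versioning point concrete, here is a minimal sketch of how a dependent application might call such an endpoint with the Algorithmia Python client. The API key, algorithm path, and input payload are placeholders, not values from our examples.

```python
import Algorithmia

# Placeholder API key; substitute your own.
client = Algorithmia.client("YOUR_API_KEY")

# Call the latest published version of the endpoint.
latest = client.algo("your_org/fraud_classifier")
print(latest.pipe({"transaction_amount": 42.50}).result)

# A dependent service can pin to an older, known-good version by
# appending the semantic version to the algorithm path.
pinned = client.algo("your_org/fraud_classifier/1.0.1")
print(pinned.pipe({"transaction_amount": 42.50}).result)
```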

Algorithmia’s CI/CD tools provide the latter stages of that workflow: detecting the change in your saved model or code, and deploying your new model to a scalable, versioned Algorithmia API endpoint (an “algorithm” in our terminology). These are drop-in configurations: the only file you need to change is a single Python script, which specifies the settings to use (e.g., endpoint name and execution language) and which files to deploy.
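As a rough illustration of what that script does, the sketch below uses the Algorithmia Python client's algorithm-management calls to create an endpoint, upload a model artifact, and publish a new version. The algorithm path, settings values, and file locations are assumptions made for the example, and the actual drop-in scripts in the Jenkins and GitHub Actions templates differ in their details (for instance, they also push the updated prediction code to the algorithm's Git repository), so treat this as a starting point rather than the exact implementation.

```python
import Algorithmia

# Management-capable API key (placeholder).
client = Algorithmia.client("YOUR_MANAGEMENT_API_KEY")

# The endpoint to deploy to; the path and settings are illustrative assumptions.
algo = client.algo("your_org/fraud_classifier")

# Create the endpoint (this errors if it already exists; real scripts check first).
algo.create(
    details={"label": "Fraud Classifier"},
    settings={
        "language": "python3-1",        # execution language
        "license": "apl",
        "source_visibility": "closed",
        "network_access": "full",
        "pipeline_enabled": True,
    },
)

# Upload the retrained model artifact to hosted data storage so the
# endpoint's prediction code can load it at runtime.
client.file("data://your_org/models/fraud_classifier.pkl") \
      .putFile("models/fraud_classifier.pkl")

# Publish a new semantic version of the endpoint. (Pushing the updated
# prediction code to the algorithm's Git repo is omitted in this sketch.)
algo.publish(
    version_info={
        "version_type": "minor",
        "release_notes": "Automated redeploy from CI",
    }
)
```

Each published version gets its own immutable endpoint path, which is what lets downstream callers pin to a specific release as shown earlier.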

If you’re using Jenkins or GitHub Actions, simply clone the appropriate configuration and adjust it for your project. If you prefer a simple, notebook-driven deploy, check out our Jupyter Notebook example. If you’re using another tool, it should be fairly simple to customize the examples, or you can contact us to request new ones!
