
Canary Deployment At A Glance

Just as the canary bird would alert miners to toxic fumes while they worked, a canary deployment is used by MLOps teams to detect issues in a new release before it reaches every user. This has become increasingly important as continuous integration means models change constantly, and each change can introduce new issues; the sooner they are caught, the better.

What is Canary Deployment?

Canary deployment, also known as “canary analysis,” is a deployment strategy or pattern that releases a new version of your model or application incrementally to a subset of users. Only those users are served the changes, giving you time to evaluate and make adjustments until you are confident enough to roll the new version out to all users in production. The basic steps for implementing canary deployment are:

  • Deploying the updated model alongside the current version
  • Splitting users into two groups (the canary group and the control group)
  • Evaluating the canary version and deciding whether to migrate the remaining users
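The steps above can be sketched in a few lines of Python. This is a minimal illustration, not any particular platform's API: the model functions, the user ID field, and the 10% rollout figure are all hypothetical placeholders. Hashing the user ID keeps group assignment sticky, so a given user sees the same version for the life of the rollout stage.

```python
import hashlib

# Hypothetical stand-ins for calls to your actual serving endpoints.
def control_model(request):
    return {"version": "v1", "score": 0.42}

def canary_model(request):
    return {"version": "v2", "score": 0.42}

CANARY_PERCENT = 10  # start small; raise as confidence grows

def route(request):
    """Deterministically assign each user to the canary or control group.

    Hashing the user ID (rather than picking at random per request)
    keeps assignment sticky: the same user always sees the same version
    while the rollout percentage is unchanged.
    """
    digest = hashlib.sha256(request["user_id"].encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0-99, roughly uniform
    if bucket < CANARY_PERCENT:
        return canary_model(request)
    return control_model(request)
```

If evaluation of the canary group goes well, raising `CANARY_PERCENT` migrates more users; setting it back to 0 is an instant rollback.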

When Should You Use Canary Deployment?

Canary deployment is a great strategy for ensuring zero downtime for the majority of your users, and it allows ops teams to detect problems before a full rollout, even in large-scale deployments. An online product company, for example, can’t afford even a moment of interruption, since downtime means lost revenue and possibly lost customers. Canary deployments reduce your risk of error and let you update your systems with confidence. But there are other deployment strategies, such as shadow deployment and blue-green, so why use canary?

For starters, with shadow deployment only the ML team sees the results. The new model is deployed in a “shadow” environment that receives a duplicate of production traffic, so the team can observe how it would perform in production, but its predictions are never served to users. Blue-green deployments let you test a new release in an actual production environment and switch users over to it, but maintaining two identical hosting environments can require a large budget. Businesses with limited resources that want to test models against their user base may therefore want to opt for canary deployment.
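The request-duplication idea behind shadow deployment can be sketched as follows. This is a simplified, framework-free illustration (the function names and the in-memory log are hypothetical); the key property is that the shadow model never affects the user-facing response.

```python
def shadow_serve(request, production_model, shadow_model, log):
    """Serve from production while duplicating the request to a shadow model.

    Only the production response is returned to the caller; the shadow
    model's prediction is recorded alongside it for offline comparison.
    """
    live = production_model(request)
    try:
        shadow = shadow_model(request)
        log.append({"request": request, "live": live, "shadow": shadow})
    except Exception:
        # A shadow failure must never break the user-facing path.
        pass
    return live
```

Comparing `live` and `shadow` entries in the log is how the ML team evaluates the new model without exposing it to users.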

Benefits of Canary Deployments

Canary deployment has multiple benefits that set it apart from other ML deployment strategies:

  1. Tests/Experiments in Production: With canary deployment, organizations can use experimentation pipelines on certain deployment platforms to run other kinds of experiments in production. In a key split experiment, for example, requests are distributed according to the value of a key (a query attribute). A credit card company might route gold card customers to one model and platinum cardholders to another. This kind of split would not suit an A/B test, but it can still be useful in other situations.
  2. Least Prone to Risk: A process that includes canary deployment means updates to your infrastructure can take place in small increments (e.g., 10%, 25%, 75%, then 100%). Compared with other ML deployment methods, this lets you roll back to a previous application version with no downtime if necessary.
  3. Cheaper Deployment with Direct Feedback: While other deployment strategies may work similarly, canary deployments are often much cheaper because they do not require an additional production environment. The small subset of canary users can also provide direct feedback on performance and issues before the release is finalized.
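The key split experiment from point 1 can be sketched as a simple lookup-based router. This is a generic illustration, not any platform's API; the `card_tier` attribute and the model functions are hypothetical.

```python
def key_split_route(request, routes, default_model):
    """Route by the value of a key attribute rather than at random.

    Unlike a percentage-based canary split, every request with the
    same key value goes to the same model, e.g. all gold cardholders
    to one model and all platinum cardholders to another.
    """
    model = routes.get(request.get("card_tier"), default_model)
    return model(request)

# Hypothetical usage: one model per card tier, with a fallback.
gold_model = lambda r: {"served_by": "gold-model"}
platinum_model = lambda r: {"served_by": "platinum-model"}
fallback_model = lambda r: {"served_by": "default-model"}

ROUTES = {"gold": gold_model, "platinum": platinum_model}
```

Because assignment is determined by a business attribute rather than random sampling, the two groups are not comparable populations, which is why this split is unsuitable for an A/B test.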

Experimentation, Deployment, and Wallaroo

When it comes to model testing for deployment, Wallaroo’s experimentation pipelines allow a variety of experiments to be run in production. Being able to test new models and see how they perform in production not only helps data scientists improve their ML decision-making but also saves enterprises from disrupting current workflows with untested revisions. Wallaroo can help you optimize the last mile of your MLOps. If you are interested in having Wallaroo improve your current ML deployment, please reach out to us at deployML@wallaroo.ai, and visit our documentation page for step-by-step instructions for common tasks, including setting up testing frameworks.
