How to Deploy Shadow ML Models with Wallaroo

September 8, 2022

Machine Learning Model Testing & Experimentation

Wallaroo offers several frameworks for evaluating models against other models or sets of models inside a pipeline, allowing you to identify which models would perform better in production. Being able to test existing models against a potential replacement helps you make better decisions from your data.

What is Shadow Deployment?

One such testing framework is parallel deployment, also referred to as “shadow deployment”. One or more new models, known as “challengers”, are tested against an existing control model known to generate accurate results, called the “champion”, to determine whether a challenger can generate faster or more precise outputs, depending on your criteria.

The key with shadow deployments is that you get to see how a challenger would perform in the production environment without actually putting it in production (and possibly giving bad results to the business).

How to Deploy Challenger ML Models

In this tutorial, we will illustrate how to deploy challenger models and compare their performance against the champion model using Wallaroo. We will walk step by step through uploading the champion and challenger models, creating a shadow deployment in a Wallaroo pipeline, performing an inference against the shadow deployment, examining the data and shadow_data results from the Inference Result Object, and viewing the pipeline logs and pipeline shadow logs.

Prerequisites for Shadow Deployment

For the purposes of this guide, we will be using the files referenced below, which are available via the Wallaroo tutorials repository.

Please Note: The following information has been validated only as of 9/8/2022 and the recorded procedures in this blog may not reflect current practices. If you have any issues executing these procedures with similar results, please visit our documentation site for our most recent code and suggestions.

Step 1: Importing Libraries

Start by importing the required libraries using the following command:

import wallaroo
from wallaroo.object import EntityNotFoundError

Step 2: Connecting to Wallaroo 

Next, type in the command below to connect to your Wallaroo instance and save the connection as the variable wl:

wl = wallaroo.Client()

Step 3: Setting Variables

Depending on your organization’s requirements and your Wallaroo instance, you will need to specify the following variables to create or use existing workspaces and pipelines, as well as to upload models. For example, in this guide the model files were placed in a folder named “models”, so the file locations are written as 'models/(file name)'. Begin with the following commands, making adjustments as necessary to meet your organization's needs:

workspace_name = 'ccfraud-comparison-demo'
pipeline_name = 'cc-shadow'
pipeline_name_multi = 'cc-shadow-multi'
champion_model_name = 'ccfraud-lstm'
champion_model_file = 'models/keras_ccfraud.onnx'
shadow_model_01_name = 'ccfraud-xgb'
shadow_model_01_file = 'models/xgboost_ccfraud.onnx'
shadow_model_02_name = 'ccfraud-rf'
shadow_model_02_file = 'models/modelA.onnx'
sample_data_file = './smoke_test.json'

Step 4: Creating or Connecting to a Workspace and a Pipeline

The Wallaroo instance does not require workspace and pipeline names to be unique. Therefore, use your organization’s standards to create or connect to a workspace and a pipeline by defining the workspace_name variable and the pipeline_name variable, respectively, in the commands below:

def get_workspace(name):
    # Return the workspace with the given name, creating it if it does not exist.
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    # Return the pipeline with the given name, building it if it does not exist.
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline
workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)
pipeline = get_pipeline(pipeline_name)
pipeline
[Image: shadow deployment pipeline detail]

Step 5: Loading the ML Models 

With the workspace created and set as current in the previous step, upload the models. They will be referenced as champion, model2, and model3, using the name and file variables assigned in Step 3. The commands should mirror the following:

champion = wl.upload_model(champion_model_name, champion_model_file).configure()
model2 = wl.upload_model(shadow_model_01_name, shadow_model_01_file).configure()
model3 = wl.upload_model(shadow_model_02_name, shadow_model_02_file).configure()

Step 6: Creating the Shadow Deployment

You can now create the shadow deployment using the add_shadow_deploy(champion, challengers) method. In this call, champion refers to the primary model used for inferences run through the pipeline, while challengers is the list of models that are run against the same inference data for comparison.

pipeline.add_shadow_deploy(champion, [model2, model3])
pipeline.deploy()

It will generate the following results: 

[Image: ML shadow deployment pipeline result]
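
Optionally, you can verify that the deployment is live before sending any inferences. This is a minimal check, assuming your version of the Wallaroo SDK exposes a status() method on the pipeline object:

# Assumption: pipeline.status() is available in your Wallaroo SDK version.
# It reports the deployment state so you can confirm it is running.
pipeline.status()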

Step 7: Running Test Inference

A test inference will be created using data from sample_data_file.

The inference results from the champion model will be returned through the Inference Result Object’s data element, while the results from the challenger models will be returned through its shadow_data element.

response = pipeline.infer_from_file(sample_data_file)
response

The response will look like this:

[Image: running a test inference on shadow deployment models]
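
For orientation, the sketch below shows one way the champion and challenger outputs might be pulled apart in code. The accessors data() and shadow_data() are assumptions based on the element names above and may differ by SDK version, so treat this as a hypothetical illustration and consult the Wallaroo documentation for the exact API:

# Hypothetical sketch: the accessor names below are assumptions, not confirmed API.
result = response[0]
champion_output = result.data()            # assumed: outputs from the champion model
challenger_outputs = result.shadow_data()  # assumed: challenger model names mapped to their outputs
print(champion_model_name, champion_output)
for model_name, outputs in challenger_outputs.items():
    print(model_name, outputs)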

Step 8: Viewing Pipeline Logs

Once the inferences have completed, use the pipeline logs() function to extract the log data from the pipeline:

pipeline.logs()

If the process is successful, it will appear as follows: 

[Image: ML shadow deployment pipeline log]
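
If the pipeline has already processed a large number of inferences, you may want to restrict how much log data is returned. Here is a minimal sketch, assuming your SDK version's logs() method accepts a limit argument (check the documentation if it does not):

# Assumption: logs() accepts a limit parameter in this SDK version.
recent_logs = pipeline.logs(limit=10)
recent_logs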

Step 9: Viewing Shadow Deploy Pipeline Logs

The pipeline logs_shadow_deploy() command allows you to view the inputs and results for the shadow deployed models.

logs = pipeline.logs_shadow_deploy()
logs

The logs from a shadow pipeline, grouped by their input, will appear as shown:

[Image: shadow deploy pipeline log]

Step 10: Undeploying the Pipeline

Lastly, once you’ve completed your shadow deployment and retrieved the pipeline logs, undeploy the pipeline to return its resources to the system. The command for this process is:

pipeline.undeploy()

It will generate the following results: 

[Image: undeploying the ML pipeline]

Evaluating your model’s performance before placing it in production is a great way to test new versions without impacting business operations. By following the steps above, you can successfully shadow deploy models and test their performance using Wallaroo. To speak to our experts about our shadow deployment solutions, reach out and contact us. For more information or assistance with this how-to, please visit the Wallaroo documentation site.