Keras Model Conversion with Wallaroo
November 2, 2022

Keras models combine a network architecture with trained weights behind a consistent, simple API, minimizing the work needed for common use cases. Yet despite Keras's wide commercial adoption, not every ML deployment platform supports an enterprise's model format of choice. Wallaroo's flexible platform is particularly useful for solving this problem, as it allows the continued use of a company's preferred ML frameworks and adapts to your existing digital ecosystem. This adaptability is a powerful addition to any MLOps practice, allowing teams to keep their current models instead of rebuilding them to fit a deployment platform. Wallaroo's model conversion process can transform a Keras model into an open format that will run on the Wallaroo framework at speed.
The benefits of model conversion go beyond mere compatibility, allowing enterprises to integrate all the advanced Wallaroo platform features into their MLOps for faster deployment, real-time predictions, experimentation, scalability, and analysis of larger data sets. In addition to utilizing these features, it allows companies to save on resources by keeping their model files in their current format so that any future changes can be made with ease. Wallaroo’s conversion processes allow for enhanced compatibility across machine learning frameworks, standardizing ML model management for your enterprise.
This guide will follow the steps for converting your Keras (or TensorFlow) model into ONNX, enabling Keras or TensorFlow ML models to deploy with Wallaroo.
Note: The following information has been validated only as of 11/2/2022. If you have any issues executing these procedures with similar results, please visit our documentation site for our most recent code and suggestions.
Prerequisites for Executing Wallaroo’s Model Conversion
To follow the instructions in this guide successfully, you will need a Wallaroo instance, and you must run this notebook from the Wallaroo JupyterHub service. Additionally, you will need the following parameters to run the Wallaroo auto-converter method convert_model(path, source_type, conversion_arguments) for Keras conversions:
- path (STRING): The path to the ML model file.
- source_type (ModelConversionSource): ML model type to be converted.
- conversion_arguments: The arguments for the conversion depending on the model type to be converted.
Keras Model Conversion
Step 1: Import Libraries
Start your Keras model conversion by importing all the required libraries with the following command:
import wallaroo
from wallaroo.ModelConversion import ConvertKerasArguments, ModelConversionSource, ModelConversionInputType
from wallaroo.object import EntityNotFoundError
Step 2: Configuration and Methods
Next, specify the workspace, pipeline, model name, model file name, and the sample data required to upload and convert the Keras model. The get_workspace(name) function either creates a new workspace or sets an existing workspace with the given name as the default. Similarly, the get_pipeline(name) function retrieves or creates a pipeline with the provided name in the current workspace.
workspace_name = 'keras-autoconvert-workspace'
pipeline_name = 'keras-autoconvert-pipeline'
model_name = 'simple-sentiment-model'
model_file_name = 'simple_sentiment_model.zip'
sample_data = 'simple_sentiment_testdata.json'
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline
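The helpers above follow a simple get-or-create pattern: search the existing entries first, and create a new one only when no match is found. A minimal sketch of the same pattern, independent of the Wallaroo SDK (the Registry class here is hypothetical, for illustration only):

```python
# Hypothetical registry illustrating the get-or-create pattern
# used by get_workspace and get_pipeline above.
class Registry:
    def __init__(self):
        self._items = {}

    def list_items(self):
        return list(self._items.values())

    def create(self, name):
        self._items[name] = {"name": name}
        return self._items[name]

def get_or_create(registry, name):
    # Search existing items first; create only if no match is found.
    for item in registry.list_items():
        if item["name"] == name:
            return item
    return registry.create(name)

registry = Registry()
first = get_or_create(registry, "keras-autoconvert-workspace")
second = get_or_create(registry, "keras-autoconvert-workspace")
assert first is second  # the second call reuses the existing entry
```

Because lookups and creation go through one code path, re-running the notebook is idempotent: existing workspaces and pipelines are reused rather than duplicated.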
Step 3: Connect to Wallaroo
For the following step, establish a connection to your Wallaroo instance, then save the connection information in the wl variable.
wl = wallaroo.Client()
Please log into the following URL in a web browser:
https://magical-bear-3782.keycloak.wallaroo.community/auth/realms/master/device?user_code=KHWJ-VMZW
Login successful!
Step 4: Set the Workspace and Pipeline
Enter the following command to create or set the workspace and pipeline in accordance with the previously configured names:
workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)
pipeline = get_pipeline(pipeline_name)
pipeline
The response will display the pipeline's details.
Step 5: Set the Model Auto-Convert Parameters
Proceed to specify the model shape and the other parameters for the conversion of the simple-sentiment-model using the following code:
model_columns = 100

model_conversion_args = ConvertKerasArguments(
    name=model_name,
    comment="simple keras model",
    input_type=ModelConversionInputType.Float32,
    dimensions=(None, model_columns)
)
model_conversion_type = ModelConversionSource.KERAS
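The dimensions=(None, model_columns) argument declares that the model accepts batches of any size, with each row a fixed-length vector of 100 values. A minimal sketch of padding or truncating a tokenized input to that width (the token values and helper are hypothetical, for illustration only):

```python
MODEL_COLUMNS = 100  # matches model_columns in the conversion arguments

def to_fixed_width(tokens, width=MODEL_COLUMNS, pad_value=0):
    # Truncate sequences longer than the model's input width,
    # and right-pad shorter ones so every row has exactly `width` values.
    clipped = tokens[:width]
    return clipped + [pad_value] * (width - len(clipped))

row = to_fixed_width([12, 7, 431, 88])  # hypothetical token IDs
print(len(row))  # 100
```

Every row fed to the converted model must match this fixed width; the None in the first dimension leaves the batch size free.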
Step 6: Upload and Convert the Model
The following command will upload and convert the model, consequently saving it as ‘{unique-file-id}-converted.onnx’:
# Converts and uploads the model.
model_wl = wl.convert_model('simple_sentiment_model.zip', model_conversion_type, model_conversion_args)
model_wl
The results should show as follows:
{'name': 'simple-sentiment-model', 'version': '081bd353-2a70-45c0-a226-91b62cd2d8fb', 'file_name': '7b73d334-7fa4-4c3d-8b95-abeb0f1e6616-converted.onnx', 'image_path': None, 'last_update_time': datetime.datetime(2022, 10, 18, 16, 55, 5, 149341, tzinfo=tzutc())}
Test Inference
Once the model upload and conversion are complete, the final step to ensure model functionality is to run a sample inference.
Step 7: Add and Deploy Pipeline
Before running the test inference to determine the model’s functionality, enter the following command to add and deploy a pipeline:
pipeline.add_model_step(model_wl).deploy()
The response will display the deployed pipeline's details.
Step 8: Run a Test Inference
You should now be able to run a sample inference from the ‘simple_sentiment_testdata.json’ file by entering the code below:
sample_data = 'simple_sentiment_testdata.json'
result = pipeline.infer_from_file(sample_data)
result[0].data()
The results should show as follows:
[array([[0.09469762],
        [0.99103099],
        [0.93407357],
        [0.56030995],
        [0.9964503 ]])]
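Each value in the result is a sentiment score between 0 and 1. A simple, hypothetical post-processing step might threshold the scores at 0.5 to turn them into labels:

```python
def label_scores(scores, threshold=0.5):
    # Map each raw model score to a sentiment label.
    return ["positive" if s >= threshold else "negative" for s in scores]

scores = [0.09469762, 0.99103099, 0.93407357, 0.56030995, 0.9964503]
print(label_scores(scores))
# ['negative', 'positive', 'positive', 'positive', 'positive']
```

The 0.5 cutoff is an assumption for illustration; an appropriate threshold depends on how the model was trained and the cost of false positives versus false negatives.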
Step 9: Undeploy the Pipeline
Lastly, undeploy the pipeline to restore the resources to the Wallaroo instance once the test is successful. You can do this with the following code:
pipeline.undeploy()
The response will display the undeployed pipeline's details.
Wallaroo’s flexible platform can be integrated with a company’s existing framework and ecosystem, allowing you to leverage Wallaroo’s cutting-edge infrastructure for your enterprise’s MLOps. By using this Keras model conversion process, you gain more control over your data and models while taking full advantage of Wallaroo’s high performance, processing power, and lightning-fast deployment.
To further explore the value that Keras Model Conversion brings to your enterprise, contact our team of experts or email us at deployML@wallaroo.ai.