Wallaroo.AI April 2024 Release Updates

Edge Model In-Line Updates

Deploying models to edge endpoints is challenging at the best of times. Once deployed, models must be monitored for data drift and performance accuracy in order to deliver business value. If and when data scientists detect model drift, they need to retrain the models and redeploy them to the edge endpoints.

Updating production models can have an adverse effect on business continuity, as the production environment will likely be stopped and restarted while redeployment takes place. This downtime drives up operating costs and delays time to value for the business.

Giving data scientists the capability to self-serve model deployments, and to update models in-line while keeping the production environment up and running, is crucial to lowering costs and achieving fast time to value.

This latest Wallaroo platform release addresses these situations by enabling in-line model updates for edge and multicloud deployments. Organizations can now replace model versions, entire models, or sets of model steps across multiple edge location deployments with ease.

Wallaroo pipeline publishes are containerized versions of the pipeline, its models and model steps, and the inference engine, published to an Open Container Initiative (OCI) registry. Once published, they are used to deploy models to edge locations and serve inference requests.
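As a rough sketch of how a publish is created from the Wallaroo SDK, this workflow might look roughly like the following; the model name, file path, framework, and pipeline name are placeholders, and exact method signatures may vary by SDK version, so treat it as a sketch rather than a definitive recipe.

```python
import wallaroo
from wallaroo.framework import Framework

# Connect to the Wallaroo Ops instance (assumes SDK credentials are already configured).
wl = wallaroo.Client()

# Placeholder model name, file path, and framework for illustration.
model = wl.upload_model(
    "edge-demo-model",
    "./models/model.onnx",
    framework=Framework.ONNX,
)

# Build a pipeline and attach the model as a model step.
pipeline = wl.build_pipeline("edge-demo-pipeline")
pipeline.add_model_step(model)

# Publish the pipeline: this containerizes the pipeline, its model steps, and the
# inference engine, and pushes the image to the OCI registry configured for the
# Wallaroo Ops instance. The returned publish object includes the image reference
# used for edge deployments.
pub = pipeline.publish()
print(pub)
```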

Edge locations added to a pipeline publish allow deployments on edge devices to connect with the Wallaroo Ops instance and transmit their inference results as part of the pipeline logs.
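A sketch of registering an edge location against that publish, continuing the SDK session above; the add_edge method name and the location name used here are assumptions based on the workflow described in this post and may differ in your SDK version.

```python
# Register a named edge location against the publish. The name is a placeholder.
# The returned object typically carries the configuration handed to the edge
# device so that its inference results flow back into the pipeline logs on the
# Wallaroo Ops instance.
edge_publish = pub.add_edge("factory-floor-01")
print(edge_publish)
```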

Pipeline publishes can be replaced by new pipeline publishes from the following sources (a short sketch of the first scenario follows below):

A new version of the same pipeline that creates a new pipeline publish. In these instances, the edge locations associated with the pipeline publish are preserved with the original pipeline along with their pipeline inference logs. Any input or output schema changes from the new models or model steps are reflected in the pipeline logs.

A separate pipeline with its own models and model steps. In this scenario, the edge locations associated with the original pipeline publish are assigned to the new pipeline. The inference logs are stored with the original pipeline, and new inference logs for the edge locations are stored as part of the new pipeline.
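As a sketch of the first scenario, continuing from the SDK session above (method names, model names, and file paths are placeholders and assumptions): swapping the pipeline's model step for a retrained version and publishing again produces a new publish, and, per the behavior described above, the edge locations tied to the earlier publish carry over with the original pipeline.

```python
# Upload a retrained model version (placeholder name and path).
new_model = wl.upload_model(
    "edge-demo-model",
    "./models/model_retrained.onnx",
    framework=Framework.ONNX,
)

# Swap the pipeline's model step for the new version and publish again.
pipeline.clear()
pipeline.add_model_step(new_model)
new_pub = pipeline.publish()

# Per the behavior described above, edge locations associated with the earlier
# publish of this pipeline are preserved and pick up the new version in-line;
# their inference logs continue to accumulate under the original pipeline.
print(new_pub)
```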

Model Observability for Low/No Connectivity Edge Deployments

In addition to in-line model updates, this Wallaroo release also provides data scientists with the advanced capability of model observability at the edge where there is little to no connectivity.

There are many use cases where edge devices have low or no connectivity, such as in the energy industry, where pipelines, wells, and wind farms sit in remote locations. Processing data at the edge while retaining centralized ops model management helps deliver continuity and lower operating costs: staff do not need to travel to these locations, expensive network transfers of raw data are avoided, and decisions are not made on stale, inaccurate data.

By giving data scientists the ability to observe model performance in these remote-connectivity scenarios, they gain early, proactive insight into model behavior and can take timely action to course correct, ensuring high reliability and model performance.
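As a minimal sketch of what that centralized observability can look like from the SDK, continuing from the pipeline above: pull the inference logs that edge deployments have transmitted back to Wallaroo Ops and group them by edge location. The metadata.partition column name used here to identify the originating edge location is an assumption and may differ by release.

```python
# Retrieve inference logs transmitted back from edge deployments to Wallaroo Ops.
# Returned as a pandas DataFrame; exact columns depend on the deployed models.
logs = pipeline.logs(limit=1000)

# Assumption: logs carry a metadata column identifying the originating edge
# location (the column name here is illustrative). A location that stops
# reporting, or whose outputs drift, is a candidate for retraining followed by
# an in-line update as shown earlier.
if "metadata.partition" in logs.columns:
    print(logs.groupby("metadata.partition").size())
```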

This in turn helps the business lower operating costs and achieve time to value quickly. Learn more about these new capabilities in our release documentation, and test drive them using the Free Community Edition linked below.

Links

April 2024 Release Updates

Wallaroo.AI Free Community Edition

Edge Model In-Line Updates Tutorial

Model Observability for Low/No Edge Connectivity Deployments

