Wallaroo.AI March 2022 Product Release Notes

Product overview

The Wallaroo platform enables companies to manage their ML models in a simple, secure, and scalable fashion to facilitate the last mile of their ML journey. The Wallaroo platform comprises three core components:

  • Self-service toolkit for ML model deployment:
    • Integrations: The Wallaroo platform can be installed in any type of environment (cloud, edge, hybrid, and on-prem). It supports ML pipelines built with different model training frameworks (TensorFlow, sklearn, PyTorch, XGBoost, etc.) and offers data connectors for processing a range of data modalities.
    • ML pipeline management: Data Scientists can use the Wallaroo platform’s self-service SDK, UI, and API to collaborate on, manage, and deploy their ML models and pipelines in a production environment (see the sketch after this list).
  • Blazingly fast compute engine: Wallaroo’s purpose-built compute engine runs models on vast amounts of data while optimizing computational resource utilization based on the size of the data and the complexity of the ML pipelines being run.
  • Advanced observability: Data Scientists can generate actionable insights at scale and identify new business trends by analyzing model performance in real time within the Wallaroo platform.
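
As a concrete illustration of the self-service deployment flow described above, the following minimal sketch uses the Wallaroo Python SDK to upload a model, deploy it behind a pipeline, and pull recent inference logs. The model file, model and pipeline names, and sample data are assumptions made for illustration, and exact method signatures may vary by SDK version; consult the SDK documentation for your release.

  import wallaroo

  # Connect to the Wallaroo instance (authentication is assumed to be
  # configured for your environment).
  wl = wallaroo.Client()

  # Upload a trained model artifact; the ONNX file here is a placeholder.
  model = wl.upload_model("ccfraud-model", "./models/ccfraud.onnx")

  # Build a pipeline, attach the model as a step, and deploy it.
  pipeline = wl.build_pipeline("ccfraud-pipeline")
  pipeline.add_model_step(model)
  pipeline.deploy()

  # Run a batch inference and inspect recent results for observability.
  results = pipeline.infer_from_file("./data/sample_batch.json")
  print(pipeline.logs())

  # Release the allocated compute resources when finished.
  pipeline.undeploy()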

March release overview

As we continue to iterate on our core capabilities, we are pleased to announce the following product improvements in our March 2022 release:

  • Compute Engine 
    • Autoscaling the Wallaroo compute engine: Data Scientists can use the engine’s auto-scaling feature to iterate quickly on experiments. Pipelines deployed in Wallaroo dynamically scale the computational resources they consume, without any user intervention, so that available capacity is used efficiently across all deployed pipelines (see the sketch after this list).
  • Self-Service toolkit for ML Model deployment
    • Workspaces management: Data Scientists can create and manage workspaces, invite team members to review shared models and artifacts, and collaborate on model deployment, management, and monitoring (also illustrated in the sketch after this list).
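
The following sketch shows how these two March additions might be exercised together from the Wallaroo Python SDK: a shared workspace is created for the team, and a pipeline is deployed with a deployment configuration that allows the engine to autoscale replicas. The workspace and pipeline names, replica bounds, resource sizes, and builder options are illustrative assumptions rather than a definitive excerpt of the API; check the SDK reference for your installed version.

  import wallaroo

  wl = wallaroo.Client()

  # Create a workspace for the team and make it the active context;
  # collaborators can then be invited to it to share models and pipelines.
  workspace = wl.create_workspace("fraud-detection-team")
  wl.set_current_workspace(workspace)

  # Define a deployment configuration that lets the engine scale the number
  # of replicas with load (bounds and resource sizes are illustrative).
  deployment_config = (
      wallaroo.DeploymentConfigBuilder()
      .replica_autoscale_min_max(minimum=1, maximum=4)
      .cpus(1)
      .memory("1Gi")
      .build()
  )

  # Deploy a pipeline under that configuration; the engine adjusts replicas
  # up and down without further user intervention.
  model = wl.upload_model("ccfraud-model", "./models/ccfraud.onnx")
  pipeline = wl.build_pipeline("ccfraud-autoscale")
  pipeline.add_model_step(model)
  pipeline.deploy(deployment_config=deployment_config)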

For more information about this release, please contact us at deployML@wallaroo.ai
