
Announcing Wallaroo Is Now Part Of The Open Grid Alliance


We are proud to announce that we are now part of the Open Grid Alliance (OGA) – a member-supported, collaborative organization focused on re-architecting the Internet to scale with the next generation of immersive applications that will transform how we work, live, and play. We’re especially excited about the OGA’s vision of a network that seamlessly leverages multi-cloud, distributed, and edge resources. This aligns perfectly with Wallaroo’s mission to make it easy to deploy, monitor, and manage any kind of machine learning model in any kind of deployment environment – cloud, on-prem, edge, or hybrid. Given our deep expertise in edge ML, Wallaroo will also participate in the OGA Grid Innovation Zone workstream, helping define technologies for edge machine learning.

“I’m pleased to welcome Wallaroo to the OGA. Wallaroo’s efficient inferencing at the edge coupled with its global model observability and management capability is a great strategic fit with OGA’s effort to scale and optimize distributed and edge ML operations across use cases and industries,” said Benoit Pelletier, OGA board member and Director of NextG-AI Research & Innovation Center and Strategic Alliances at VMware.

“I see a great opportunity in the OGA to share our thought leadership and experience in streamlining ML Operations as well as collaborating with the alliance members on identifying best practices and removing roadblocks to drive innovation and value creation in edge AI,” said Vid Jain, CEO & co-founder of Wallaroo.

One of the challenges OGA aims to solve for an open grid is the ability to “provide real-time telemetry for autonomous operations, predictive modeling and rapid decision-making.” But taking complex machine learning models trained in cloud or on-prem environments and deploying them at the edge is still relatively new. The development environment can be so different from the live production environment that it could take months of reengineering before a single model can be successfully deployed at the edge. And once the model is live, the data scientists who developed the model often have no view into the ongoing performance of their model until something goes wrong.

Overall we have seen two major hurdles for scaling the adoption of edge ML to more use cases:

1) Monitoring the ongoing accuracy of live models: Data science teams can focus so much on deploying and running models at the edge that they forget to think about the day after a model goes live. The network environment is continually changing, so models of network quality and demand trained on past data quickly degrade. An important question to ask is: do your edge ML operations have the ability to monitor performance, push updated models, run A/B tests across portions of your fleet, or do shadow testing?
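As a rough sketch of what monitoring a live model's health can look like, the snippet below computes a Population Stability Index (PSI) between a model's training-time score distribution and its live score distribution – a common way to detect drift without waiting for ground-truth labels. The function names, the simulated data, and the 0.2 alert threshold are illustrative assumptions, not part of any specific platform's API.

```python
import math
import random

def psi(baseline, live, bins=10):
    """Population Stability Index between two score samples.
    Bucket edges are taken from the baseline distribution's quantiles."""
    baseline = sorted(baseline)
    edges = [baseline[int(len(baseline) * i / bins)] for i in range(1, bins)]

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(1 for e in edges if x >= e)] += 1
        # small floor avoids log(0) when a bucket is empty
        return [max(c / len(sample), 1e-6) for c in counts]

    p = bucket_fractions(baseline)
    q = bucket_fractions(live)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Simulated scores: the live inputs have shifted relative to training.
random.seed(0)
train_scores = [random.gauss(0.0, 1.0) for _ in range(5000)]
live_scores = [random.gauss(1.0, 1.0) for _ in range(5000)]

drift = psi(train_scores, live_scores)
# A common rule of thumb: PSI above 0.2 signals significant drift
print(f"PSI = {drift:.3f}, alert = {drift > 0.2}")
```

In practice a check like this runs on a schedule against the inference logs, and a breach triggers retraining or a model swap rather than a silent accuracy decay.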

2) Edge environment compute constraints: Edge devices are frequently resource-constrained in terms of CPU power, memory, and/or network bandwidth. Having them offload inference to a remote data center is one solution, of course. But what happens if that introduces too much latency or requires more bandwidth than is available? The Open Grid Alliance’s vision of highly distributed, low-latency network resources provides additional options for running AI models where needed. However, whether the model runs on-device, on a local server, in a nearby micro-datacenter, or in a traditional data center, what’s needed is a specialized ML inference engine that runs efficiently and performs consistently across this wide range of environments while providing the monitoring capabilities data scientists depend on.
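One way to validate a candidate model in production without risking live traffic is the shadow testing mentioned above: every request is answered by the current model while the candidate's output and latency are logged for offline comparison. The sketch below is a minimal illustration using hypothetical stand-in models (simple threshold classifiers on a signal-strength feature); it is not any vendor's implementation.

```python
import time

def shadow_infer(primary, shadow, features, log):
    """Serve the primary model's prediction while recording the shadow
    model's output and latency for later comparison."""
    t0 = time.perf_counter()
    served = primary(features)
    primary_ms = (time.perf_counter() - t0) * 1000

    t0 = time.perf_counter()
    candidate = shadow(features)  # logged, never served to the caller
    shadow_ms = (time.perf_counter() - t0) * 1000

    log.append({
        "served": served,
        "candidate": candidate,
        "match": served == candidate,
        "primary_ms": round(primary_ms, 3),
        "shadow_ms": round(shadow_ms, 3),
    })
    return served

# Hypothetical stand-in models: threshold classifiers on signal strength.
current_model = lambda f: int(f["rssi"] > -70)
new_model = lambda f: int(f["rssi"] > -75)

records = []
for rssi in (-60, -72, -80):
    shadow_infer(current_model, new_model, {"rssi": rssi}, records)

agreement = sum(r["match"] for r in records) / len(records)
print(f"shadow agreement: {agreement:.0%}")
```

When agreement and latency look acceptable over enough traffic, the candidate can be promoted to primary or moved into an A/B split across part of the fleet.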

Network architects need to reconsider their approach to machine learning: not as a one-time deployment of a model, but as a cycle of deploying, managing, and monitoring models based on ongoing performance. Through our work as part of the OGA, we look forward to making AI/ML a ubiquitous tool to improve network performance as well as the business operations of enterprises and service providers who intend to leverage this new vision for the Internet.

About Open Grid Alliance
The Open Grid Alliance is a member-supported, collaborative organization that produces vendor-neutral strategies to re-architect the Internet with the grid topologies needed to scale globally. Founding Open Grid Alliance members include VMware, Vapor IO, Deutsche Telekom, and Dell Technologies. OGA now includes over 30 member organizations.
