Meet Wallaroo.AI: The Game-Changing Platform for Deploying Machine Learning 

April 20, 2021

In the dynamic landscape of data science, deploying machine learning models has often been a complex and daunting task. But what if there was a platform designed to simplify this critical phase, making it seamless and efficient for every enterprise? Enter Wallaroo.AI, the game-changing solution that’s transforming the way businesses deploy machine learning. Dive into our journey, understand the challenges of ML deployment, and discover how Wallaroo.AI is redefining it.

“Very few companies have the resources to fully leverage their data to build better products or optimize their operations. Wallaroo levels the playing field by making it easy, fast, and low cost for any enterprise to take their boldest data and ML ideas live to deliver results.”

— VID JAIN, FOUNDER & CEO, WALLAROO

The Wallaroo Story

We began in 2017 as a tight-knit group of engineers dedicated to solving the increasingly common problem of analyzing large amounts of data via computational algorithms efficiently and at scale. By applying our collective expertise in building distributed computing systems in industries such as high-frequency financial trading and adTech, we built a high-performance compute engine like nothing else on the market. While our customers could now efficiently analyze their data and use it to run machine learning (ML) models at scale, they soon pointed out their next biggest challenge: bringing those models online easily, and then understanding how the models were performing so they could quickly and sustainably generate business value.

Like most organizations, they were doing everything by the book: data scientists would build ML models to solve a business problem, and engineers would launch them using a patchwork of open-source software and containerized model approaches. What they found, however, was that getting each model to production was like pulling teeth. 

Models often had to be painstakingly re-engineered, the deployment software couldn’t process data fast enough — even when running on an alarming amount of computing resources — and it was unnecessarily difficult to see how models were performing to measure their ongoing accuracy.

We knew there had to be a better way. 

Machine Learning’s “last mile” problem

Data science is a modern-day superpower, and enterprises around the world know it. Over 90% of Fortune 1000 companies are investing in Big Data, analytics, and artificial intelligence (AI), with more than $700 billion poured into teams of data scientists and engineers to revolutionize the way they do business. 

Yet machine learning is hard, and the last mile of ML – getting the models into production to impact the bottom line – is especially hard. If businesses can’t do this easily or at scale, their AI initiatives will fail, wasting budget and manpower and breeding disillusionment. According to Gartner, less than half of AI prototypes make it to production, and in the end, only about 10% generate substantial ROI. 

Deployment solutions — whether containerization, cobbling together various existing technologies, or customizing an analytics workhorse like Apache Spark — are cumbersome, limited in scope, expensive at scale, prone to failure, and unable to run ML models against batch and streaming data. 

With investments in AI only trending upward, companies hoping to turn a profit from their data will never reach their full potential: long deployment lead times and the high cost of running and maintaining the necessary infrastructure often outweigh the benefits.

Introducing Wallaroo: The different, better way to deploy ML

Wallaroo is a breakthrough platform for the last mile of ML, providing a simple, secure, and scalable deployment capability that fits into your end-to-end workflow.

Figure 1: Wallaroo and the last mile of ML 

Wallaroo gets your ML to business results faster, easier, and with a far lower investment. By streamlining the deploy/run/observe parts of an ML lifecycle and giving data scientists the freedom to use the tools they already know, Wallaroo enables your team to:

  • Deploy models in seconds
  • Analyze data up to 12.5X faster
  • Reduce compute costs by 80%
  • Iterate quickly and scale easily
Figure 2: Before vs. after using Wallaroo for production AI.

1 easy-to-use platform. 3 key components

This is where we normally get the question: “okay, so I see what you do and the problem you solve, but what exactly are you?” The Wallaroo platform is composed of 3 key components:

  • A self-service toolkit for easy model deployment and management
  • A distributed compute engine allowing you to run inference faster using fewer servers
  • Observability, insights, and dashboards to monitor the ongoing performance of your models in production
Figure 3: Wallaroo’s 3 key components

Self-service Toolkit: Swift and simple ML deployments

This is the component that enables data scientists to deploy their ML models against live data in two clicks—whether it’s to a testing, staging, or production environment. With an intuitive SDK, UI, and API, plus support for common data workflows, Wallaroo takes care of the details so data teams can focus on the bigger picture. 

  • Easily deploy, test, and iterate ML models using the frameworks your team already knows (e.g., TensorFlow, PyTorch, Scikit-learn, and XGBoost).
  • Run batch or streaming jobs to capture valuable market insights as they happen. 
  • Refine and immediately redeploy new and improved models without complex re-engineering or operational headaches, supported by built-in model management and data scientist collaboration features.
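The deploy/iterate loop above can be sketched in plain Python. This is a toy conceptual stand-in, not the actual Wallaroo SDK; the `ModelRegistry` and `Pipeline` names and methods below are illustrative inventions.

```python
# Toy sketch of a self-service deploy/iterate loop.
# ModelRegistry, Pipeline, and deploy() are illustrative inventions,
# NOT the real Wallaroo SDK API.

class ModelRegistry:
    """Tracks versioned models so redeploys need no re-engineering."""
    def __init__(self):
        self._versions = {}

    def upload(self, name, version, predict_fn):
        self._versions[(name, version)] = predict_fn

    def get(self, name, version):
        return self._versions[(name, version)]


class Pipeline:
    """A live endpoint whose model can be swapped without rebuilding it."""
    def __init__(self, registry):
        self._registry = registry
        self._active = None

    def deploy(self, name, version):
        self._active = self._registry.get(name, version)

    def infer(self, features):
        return self._active(features)


registry = ModelRegistry()
registry.upload("fraud", "v1", lambda x: sum(x) > 1.0)  # first model version
pipeline = Pipeline(registry)
pipeline.deploy("fraud", "v1")
print(pipeline.infer([0.7, 0.6]))   # True

# Iterate: upload an improved model and redeploy with one call.
registry.upload("fraud", "v2", lambda x: sum(x) > 1.5)
pipeline.deploy("fraud", "v2")
print(pipeline.infer([0.7, 0.6]))   # False
```

The point of the sketch is the shape of the workflow: models are versioned artifacts, and swapping the live version is a one-line operation rather than an engineering project.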

Distributed computing engine: Lightning-speed computing at lower cost 

Here’s where the magic happens. This highly performant, easily scalable engine can analyze up to 100K events per second on a single server (beating the industry average of 5,000 events per second), making Wallaroo the fastest platform on the market for production ML. 

  • Run multiple models on a single server to drastically reduce computing costs and maintenance overhead.
  • Analyze data at record-breaking speed and react to market changes in real-time for a sharper competitive edge.
  • Leverage an ultrafast environment for production model scoring and pre/post-processing, with support for custom data operations. 
  • Scale down to run at the edge.

With customer workloads such as transformer models, computer vision, complex neural networks, and NLP models, we have typically seen 5X – 12.5X faster analysis using 80% less infrastructure compared to the customer’s previous deployments.
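As a back-of-the-envelope illustration of why per-server throughput drives compute cost, the sketch below sizes a hypothetical 250K events/sec workload using the throughput figures quoted above. Note that the raw throughput ratio implies even larger savings than the 80% typically observed, since real workloads hit other limits.

```python
import math

def servers_needed(events_per_sec, per_server_throughput):
    """How many servers a sustained event load requires."""
    return math.ceil(events_per_sec / per_server_throughput)

LOAD = 250_000      # hypothetical sustained events/sec (illustrative only)
BASELINE = 5_000    # industry-average events/sec per server (quoted above)
WALLAROO = 100_000  # Wallaroo's quoted events/sec per server

before = servers_needed(LOAD, BASELINE)   # 50 servers
after = servers_needed(LOAD, WALLAROO)    # 3 servers
savings = 1 - after / before
print(f"{before} -> {after} servers ({savings:.0%} fewer)")
# prints: 50 -> 3 servers (94% fewer)
```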

Observability and Model Insights: Real-time metrics to measure business impact

This is where it all ties together. A simple, easy-to-use interface allows anyone on your team to explore powerful metrics and detailed analytics so they can effectively track, measure, and help improve your ML’s performance.

  • Drill down into computing specifics like throughput, model latency, and benchmark performance for in-depth analysis.
  • Validate model inputs to guard against invalid or unexpected data. 
  • Monitor the behavior of models and their inputs over time to understand when changes in the environment might require a model refresh. 
  • Use A/B testing along with shadow and staged deployments to make sure you’re always using the highest-performing models. 
  • Simplify audits with detailed event logs and visibility into everything your compliance and risk management team needs to do their jobs more efficiently. 
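The shadow-deployment idea from the list above can be sketched generically. This is toy code, not Wallaroo’s actual API: the champion model serves every request, the challenger scores the same inputs silently, and disagreements are logged for review.

```python
class ShadowPipeline:
    """Champion serves traffic; challenger scores the same inputs silently."""
    def __init__(self, champion, challenger):
        self.champion = champion
        self.challenger = challenger
        self.disagreements = []

    def infer(self, features):
        served = self.champion(features)    # answer returned to the caller
        shadow = self.challenger(features)  # logged, never served
        if shadow != served:
            self.disagreements.append((features, served, shadow))
        return served


champion = lambda x: x > 0.5    # current production model (toy stand-in)
challenger = lambda x: x > 0.4  # candidate being evaluated

pipeline = ShadowPipeline(champion, challenger)
results = [pipeline.infer(x) for x in (0.30, 0.45, 0.60)]
print(results)                      # [False, False, True] -- champion's answers
print(len(pipeline.disagreements))  # 1 -- only 0.45 splits the two models
```

Because the challenger never affects served results, its behavior can be audited on real traffic before any promotion decision is made.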

Your data. Your tools. Your ecosystem.

Enterprises will often look to all-in-one MLOps platforms such as SageMaker, Databricks, or DataRobot to simplify deployment. However, these platforms force data teams to standardize on proprietary tools, processes, and formats. That standardization itself breeds complexity, since different business units within the same company often use different data platforms. One of our customers, for example, has standardized on a single cloud, but because of mergers & acquisitions, its data engineering teams are supporting different deployment processes for multiple clouds. 

In response, companies will spend countless resources building their platform in-house, cobbling together open-source technologies such as Spark and MLflow, which might work within the current ecosystem but at the expense of performance and model observability. 

Seamless integration with data systems

Wallaroo is designed to click into your ecosystem and seamlessly connect with everything around it. We provide a standardized process that ML engineering teams can use to deploy, run, and observe models across platforms, clouds, and environments (in the cloud, on-premises, or at the edge). 

Our Connector Framework neatly plugs our platform with your incoming and outgoing data points and takes care of the integration to get you up and running in no time.   

  • Quickly connect with popular data sources and sinks, like Apache Kafka and Amazon S3. 
  • Plug in custom integrations and even your own in-house solutions. 
  • Rely on rapid support if you need to integrate something that isn’t available out of the box.
Figure 4: Wallaroo connects to your data sources and feeds your downstream applications without requiring you to rip and replace your data ecosystem.
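In spirit, a connector framework decouples model code from data plumbing. The hypothetical `Source` and `Sink` interfaces below (not Wallaroo’s actual classes) show the shape of the idea: Kafka, S3, or an in-house system would each just be another implementation.

```python
from abc import ABC, abstractmethod

class Source(ABC):
    """Where inference inputs come from (e.g. a Kafka topic, an S3 bucket)."""
    @abstractmethod
    def read(self):
        ...

class Sink(ABC):
    """Where predictions go (e.g. a downstream app or data warehouse)."""
    @abstractmethod
    def write(self, record):
        ...

class ListSource(Source):
    """In-memory stand-in for a real data source."""
    def __init__(self, records):
        self.records = records
    def read(self):
        yield from self.records

class ListSink(Sink):
    """In-memory stand-in for a real destination."""
    def __init__(self):
        self.out = []
    def write(self, record):
        self.out.append(record)

def run_pipeline(source, model, sink):
    """The platform's job: move data from source, through the model, to sink."""
    for record in source.read():
        sink.write(model(record))

sink = ListSink()
run_pipeline(ListSource([1.0, 2.0, 3.0]), lambda x: x * 2, sink)
print(sink.out)   # [2.0, 4.0, 6.0]
```

Swapping the in-memory stand-ins for real connectors changes nothing about the model or the pipeline logic, which is exactly the decoupling a connector framework buys you.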

You can also rest assured that all your data will remain yours. Everything that goes in and out of Wallaroo is private, secure, and only visible to those with permission to see it.

Built from the ground up for speed, efficiency, and ease-of-use

Unlike Apache Spark or heavyweight containers, Wallaroo was engineered specifically for modern machine learning deployments. The core distributed computing engine is written in Rust rather than Java, so it runs at near-C speeds while remaining Python-friendly. Our SDK was designed with data scientists in mind and incorporates direct feedback from our customers.

Bring your boldest AI projects online with Wallaroo

Wallaroo is a platform that enables the future of AI and analytics we always wished we had: one where cutting-edge AI and ML can be deployed in seconds, and data teams can deliver a higher value at a lower cost. We built it so your team can spend less time making your data work with your software, and more time making your data work for your business. 

If you want to explore further with us, whether that means getting access to the full SDK, giving us specific feedback about functionality, semantics, or integration, or discussing how Wallaroo can help with your use case, email us at deployML@wallaroo.ai.

You can find more information about Wallaroo at https://wallaroo.ai/blog.