Databricks announced the release of Databricks Model Serving, which provides simplified, production-grade machine learning (ML) serving within the Databricks Lakehouse Platform.
Model Serving removes the complexity of building and maintaining a sophisticated infrastructure for intelligent applications. Organizations can now use the Databricks Lakehouse Platform to integrate real-time machine learning systems into their business, from personalized recommendations to customer service chatbots, without the need to configure and manage the underlying infrastructure.
Deep integration within the Lakehouse Platform offers lineage of data and models, plus management and monitoring across the entire ML lifecycle – from experimentation to training and production. Databricks Model Serving is now generally available on AWS and Azure.
“Databricks Model Serving accelerates data science teams’ path to production by simplifying deployments, reducing overhead and delivering a fully integrated experience directly within the Databricks Lakehouse,” said Patrick Wendell, Co-Founder and VP of Engineering at Databricks. “This offering will let customers deploy far more models, with lower time to production, while also lowering the total cost of ownership and the burden of managing complex infrastructure.”
With the opportunities surrounding generative artificial intelligence (AI) taking center stage, businesses feel the urgency to prioritize AI investments across the board. Leveraging AI/ML enables organizations to uncover insights from their data, make accurate, instant predictions that deliver business value, and drive new AI-led experiences for their customers.
For example, AI can enable a bank to quickly identify and combat fraudulent charges on a customer’s account, or give a retailer the ability to instantly suggest complementary accessories based on a customer’s clothing purchases. These experiences are typically delivered through real-time applications.
However, implementing these real-time ML systems has remained a challenge for many organizations because of the burden placed on ML experts to design and maintain infrastructure that can dynamically scale to meet demand.
Databricks Model Serving removes the complexity of building and operating these systems and offers native integrations across the lakehouse, including Databricks’ Unity Catalog, Feature Store and MLflow. It delivers a highly available, low latency service for model serving, giving businesses the ability to easily integrate ML predictions into their production workloads.
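To illustrate what “integrating ML predictions into production workloads” looks like in practice, here is a minimal sketch of calling a Model Serving endpoint over its REST interface. The workspace URL, endpoint name, and feature fields are hypothetical placeholders; the request shape assumes the standard `/serving-endpoints/<name>/invocations` route with a `dataframe_records` JSON payload.

```python
import json
import urllib.request

# Hypothetical values -- replace with your own workspace URL,
# endpoint name, and a Databricks personal access token.
WORKSPACE_URL = "https://my-workspace.cloud.databricks.com"
ENDPOINT_NAME = "recommender"
TOKEN = "dapi-REDACTED"

def build_invocation_request(records):
    """Build the HTTP request for querying a Model Serving endpoint.

    Serving endpoints accept JSON with a `dataframe_records` key
    holding a list of feature dictionaries, one per row to score.
    """
    url = f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations"
    body = json.dumps({"dataframe_records": records}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: score a single customer record (hypothetical features).
req = build_invocation_request([{"customer_id": 123, "basket_total": 54.2}])
# urllib.request.urlopen(req) would return the model's predictions as JSON.
```

Because the endpoint is fronted by a plain HTTPS API, any application tier – a web backend, a chatbot, a fraud-check service – can request predictions without touching the underlying ML infrastructure.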
Fully managed by Databricks, Model Serving quickly scales up from zero and back down as demand changes, reducing operational costs and ensuring customers pay only for the compute they use.
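The scale-from-zero behavior is opt-in per served model. As a sketch, the configuration payload sent to the serving-endpoints REST API might look like the following; the endpoint name, model name, and version here are hypothetical placeholders, and `workload_size` is assumed to control the provisioned concurrency tier.

```python
import json

def scale_to_zero_config(endpoint_name, model_name, model_version):
    """Build a serving-endpoint configuration with scale-to-zero enabled,
    so the endpoint scales down to zero instances when idle and back up
    as traffic arrives, and compute is billed only while it runs."""
    return {
        "name": endpoint_name,
        "config": {
            "served_models": [
                {
                    "model_name": model_name,       # hypothetical registered model
                    "model_version": model_version,
                    "workload_size": "Small",
                    "scale_to_zero_enabled": True,  # pay only for compute used
                }
            ]
        },
    }

# Example payload for creating a fraud-detection endpoint.
payload = scale_to_zero_config("fraud-endpoint", "fraud_detector", "2")
print(json.dumps(payload, indent=2))
```

A payload like this would typically be POSTed to the workspace's serving-endpoints API when creating or updating an endpoint.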