After Snowflake, Databricks also integrates with Tecton to accelerate enterprise ML projects


Databricks is getting support for Tecton’s feature store, as an increasing number of enterprises look to use the lakehouse platform for machine learning (ML) projects.

In a statement on Thursday, Tecton announced an integration that will make its feature store available on Databricks’ platform, giving joint customers a way to build and automate their ML feature pipelines, from prototype to production, in a matter of minutes.

“Building on Databricks’ powerful and massively scalable foundation for data and AI, Tecton extends the underlying data infrastructure to support ML-specific requirements. This partnership with Databricks enables organizations to embed machine learning into live, customer-facing applications and business processes, quickly, reliably and at scale,” Mike Del Balso, cofounder and CEO of Tecton, said.

How does Tecton’s feature store accelerate ML application deployment?

For any predictive application to work, the ML model underneath has to be trained on historical data. In most cases, this data can be visualized as a table, with rows representing individual records and columns providing attributes describing those records. Each individual attribute, or measurable property, is called a feature. Data scientists usually apply transformations to raw data to create features for ML models, but the process comes with unique engineering challenges and takes a lot of time, affecting training and deployment timelines.
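To make the idea concrete, here is a minimal sketch of one such transformation: turning raw transaction records into a single feature, a user’s total spend over the previous seven days. The schema and function name are illustrative, not taken from Tecton or Databricks.

```python
from datetime import datetime, timedelta

# Raw event rows: one dict per transaction (illustrative schema).
raw_transactions = [
    {"user_id": 1, "amount": 120.0, "ts": datetime(2022, 7, 1)},
    {"user_id": 1, "amount": 30.0,  "ts": datetime(2022, 7, 6)},
    {"user_id": 2, "amount": 500.0, "ts": datetime(2022, 6, 1)},
]

def seven_day_spend(user_id, as_of, rows):
    """Feature: total amount a user spent in the 7 days before `as_of`."""
    window_start = as_of - timedelta(days=7)
    return sum(
        r["amount"]
        for r in rows
        if r["user_id"] == user_id and window_start <= r["ts"] < as_of
    )

# Compute the feature value for each user as of July 7.
features = {u: seven_day_spend(u, datetime(2022, 7, 7), raw_transactions)
            for u in (1, 2)}
```

The engineering challenge the article alludes to is that this same windowed logic must be run consistently over years of history for training and over live data for inference, which is exactly the pipeline work a feature platform automates.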

A feature store provides data scientists with a dedicated place to save developed features for reuse at a later stage or by another team member within the same organization. Tecton does the same job, though its offering goes a step further and automates the entire lifecycle of ML features – from the transformation of raw data to serving for inference.
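The reuse aspect can be sketched with a toy in-memory registry. This is purely illustrative of the concept and is not Tecton’s API; the class and feature names are invented for the example.

```python
import math

class FeatureStore:
    """Toy in-memory feature store (illustrative only, not Tecton's API).

    A data scientist registers a named feature transformation once;
    teammates later retrieve it by name instead of re-implementing it.
    """

    def __init__(self):
        self._registry = {}

    def register(self, name, transform):
        self._registry[name] = transform

    def get(self, name):
        return self._registry[name]

# One team member registers the feature...
store = FeatureStore()
store.register("log_amount", lambda amount: math.log1p(amount))

# ...and another reuses it later, getting exactly the same logic.
log_amount = store.get("log_amount")
value = log_amount(99.0)
```

Sharing a single definition this way is what guarantees the same transformation is applied at training time and at serving time, avoiding the train/serve skew that plagues hand-built pipelines.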

This way, when Tecton is integrated with Databricks, teams can automate the building of ML features and operationalize ML applications in minutes rather than months – all without leaving the Databricks workspace.

“A Databricks user will be able to define features in Tecton and those features will be processed, orchestrated and stored using Databricks. They will be available in a Databricks notebook for users that are training models and are also made available for online inference, to power models running in production,” Del Balso told VentureBeat.

“Historical features are stored in Delta Lake, meaning that all of the features a user builds are natively available in the data lakehouse. Databricks users also have access to MLflow, where they can host the trained models and create serving endpoints to deliver real-time predictions. In a nutshell, through this integration, a Databricks user can define and manage features in Tecton, process feature values using Databricks compute, and serve predictions using MLflow,” he added.

Widespread adoption

Multiple Tecton and Databricks customers, including Fortune 500 companies, are already using this integration to power real-time predictive applications such as fraud detection, real-time underwriting, dynamic pricing, recommendations and personalization. However, Databricks is not the only company with this kind of integration.

A few months ago, Snowflake, too, partnered with Tecton to bring the feature store to its data cloud. That engagement also covered the integration of Tecton’s open-source feature store, Feast.

Originally appeared on: TheSpuzz
