Why data lakehouses are the key to growth and agility



As organizations ramp up their efforts to be truly data-driven, a growing number are investing in a new data lakehouse architecture.

As the name implies, a data lakehouse combines the structure and accessibility of a data warehouse with the massive storage of a data lake. The goal of this merged data strategy is to give every employee the ability to access and employ data and artificial intelligence in order to make better business decisions.

Many organizations clearly see lakehouse architecture as the key to upgrading their data stacks in a manner that provides greater data flexibility and agility.

Indeed, a recent survey by Databricks found that nearly two-thirds (66%) of respondents are using a data lakehouse, and 84% of those who aren’t currently using one are looking to do so.


“More businesses are implementing data lakehouses because they combine the best features of both warehouses and data lakes, giving data teams more agility and easier access to the most timely and relevant data,” says Hiral Jasani, senior partner marketing manager at Databricks.

Organizations adopt data lakehouse models for four primary reasons, Jasani says:

  • Improving data quality (cited by 50%)
  • Increasing productivity (cited by 37%)
  • Enabling better collaboration (cited by 36%)
  • Eliminating data silos (cited by 33%)

How a data lakehouse architecture improves data quality and integration

A modern data stack built on the lakehouse addresses data quality and data integration issues. It leverages open-source technologies, employs data governance tools and includes self-service tools to support business intelligence (BI), streaming, artificial intelligence (AI), and machine learning (ML) initiatives, Jasani explains.

“Delta Lake, which is an open, reliable, performant and secure data storage and management layer for the data lake, is the foundation and enabler of a cost-effective, highly scalable lakehouse architecture,” Jasani says.

Delta Lake supports both streaming and batch operations, Jasani notes. It eliminates data silos by providing a single home for structured, semi-structured, and unstructured data. This should make analytics simple and accessible across the organization. It allows data teams to incrementally improve the quality of their data in their lakehouse until it is ready for downstream consumption.
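To make that dual batch-and-streaming model concrete, here is a minimal PySpark sketch, not taken from the article, in which a batch load and a streaming feed both land in the same Delta table; the paths, schema, and session configuration are illustrative assumptions.

    from pyspark.sql import SparkSession

    # Spark session configured for Delta Lake (assumes the delta-spark package is installed)
    spark = (
        SparkSession.builder.appName("lakehouse-sketch")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Batch: append historical JSON files into a Delta table (hypothetical paths)
    batch_df = spark.read.json("/data/raw/orders")
    batch_df.write.format("delta").mode("append").save("/lakehouse/bronze/orders")

    # Streaming: continuously append new arrivals to the same Delta table
    stream_df = (
        spark.readStream.format("json")
        .schema(batch_df.schema)  # reuse the schema inferred from the batch load
        .load("/data/incoming/orders")
    )
    (
        stream_df.writeStream.format("delta")
        .outputMode("append")
        .option("checkpointLocation", "/lakehouse/_checkpoints/orders")
        .start("/lakehouse/bronze/orders")
    )

Because both paths write to one table, downstream consumers query a single, continuously updated source rather than reconciling separate batch and streaming copies.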

“Cloud also plays a large role in data stack modernization,” Jasani continues. “The majority of respondents (71%) reported that they have already adopted cloud across at least half their data infrastructure. And 36% of respondents cited support across multiple clouds as a top critical capability of a modern data technology stack.”

How siloed and legacy systems hold back advanced analytics

The many SaaS platforms that organizations rely on today generate large volumes of insightful data. This can provide a huge competitive advantage when managed properly, Jasani says. However, many organizations use siloed, legacy architectures, which can prevent them from optimizing their data.

“When business intelligence (BI), streaming data, artificial intelligence and machine learning are managed in separate data stacks, this adds further complexity and problems with data quality, scaling, and integration,” Jasani stresses.

Legacy tools cannot scale to manage the growing amount of data, and as a result, teams spend a significant amount of time preparing data for analysis rather than actually gleaning insights from it. The survey found that, on average, respondents dedicated 41% of their time on data analytics projects to data integration and preparation.

In addition, learning how to differentiate and integrate data science and machine learning capabilities into the IT stack can be challenging, Jasani says. The traditional approach of standing up a separate stack just for AI workloads doesn’t work anymore due to the increased complexity of managing data replication between different platforms, he explains.

Data quality issues affect nearly all organizations

Poor data quality and data integration issues can result in serious negative impacts on a business, Jasani confirms.

“Almost all survey respondents (96%) reported negative business effects as a result of data integration challenges. These include lessened productivity due to the increased manual work, incomplete data for decision making, cost or budget issues, trapped and inaccessible data, a lack of a consistent security or governance model, and a poor customer experience.”

Moreover, there are even greater long-term risks of business damage, including disengaged customers, missed opportunities, brand value erosion, and ultimately bad business decisions, Jasani says.

Related to this, data teams are looking to implement the modern data stack to improve collaboration (cited by 46%). The goal is a free flow of information that enables data literacy and trust across an organization.

“When teams can collaborate with data, they can share metrics and objectives to have an impact in their departments. The use of open source technologies also fosters collaboration as it allows data professionals to leverage the skills they already know and use tools they love,” Jasani says.

“Based on what we’re seeing in the market and hearing from customers, trust and transparency are cultural challenges facing almost every organization when it comes to managing and using data effectively,” Jasani continues. “When there are multiple copies of data living in different places across the organization, it’s difficult for employees to know what data is the latest or most accurate, resulting in a lack of trust in the information.”

If teams can’t trust or rely on the data presented to them, they can’t pull meaningful insights that they feel confident in, Jasani stresses. Data that is siloed across different business functions creates an environment where different business groups are utilizing separate data sets, when they all should be working from a single source of truth.

Data lakehouse models and advanced analytics tools

The organizations most often considering lakehouse technology are those that want to implement more advanced data analytics tools. They typically handle many different formats of raw data on inexpensive storage, which makes the lakehouse more cost-effective for ML/AI use cases, Jasani explains.

“A data lakehouse that is built on open standards provides the best of data warehouses and data lakes. It supports diverse data types and data workloads for analytics and artificial intelligence. And, a common data repository allows for greater visibility and control of their data environment so they can better compete in a digital-first world. These AI-driven investments can account for a significant increase in revenue and better customer and employee experiences,” Jasani says.

To achieve these capabilities and address data integration and data quality challenges, survey respondents reported that they plan to modernize their data stacks in several ways. These include implementing data quality tools (cited by 59%), open source technologies (cited by 38%), data governance tools (cited by 38%), and self-service tools (cited by 38%).

One of the important first steps to modernizing a data stack is to build or invest in infrastructure that ensures data teams can access data from a single system. In this way, everyone will be working off the same up-to-date information.

“To prevent data silos, a data lakehouse can be utilized as a single home for structured, semi-structured, and unstructured data, providing a foundation for a cost-effective and scalable modern data stack,” Jasani notes. “Enterprises can run AI/ML and BI/analytics workloads directly on their data lakehouse, which will also work with existing storage, data, and catalogs so organizations can build on current resources while having a future-proofed governance model.”
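As a rough illustration of running BI-style SQL and a machine learning workload directly against the same lakehouse table, the sketch below assumes the Spark session and the hypothetical /lakehouse/bronze/orders table from the earlier example; the column names are invented for the example.

    # BI / analytics: expose the Delta table to SQL and aggregate it
    spark.sql(
        "CREATE TABLE IF NOT EXISTS orders USING DELTA "
        "LOCATION '/lakehouse/bronze/orders'"
    )
    daily_revenue = spark.sql(
        "SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date"
    )
    daily_revenue.show()

    # ML: train a simple model on the same table (hypothetical feature columns)
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    features = VectorAssembler(
        inputCols=["quantity", "unit_price"], outputCol="features"
    ).transform(spark.table("orders"))
    model = LinearRegression(featuresCol="features", labelCol="amount").fit(features)
    print(model.coefficients)

The point is not the specific model but that analytics and ML read the same governed table, rather than separate copies maintained in different systems.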

There are also several considerations that IT leaders should factor into their strategy for modernizing their data stack, Jasani explains. These include whether they want a managed or self-managed service, product reliability to minimize downtime, high-quality connectors to ensure easy access to data and tables, timely customer service and support, and product performance capabilities to handle large volumes of data.

Additionally, leaders should consider the importance of open, extendable platforms that offer streamlined integrations with their data tools of choice and enable them to connect to data wherever it lives, Jasani recommends.

Finally, Jasani says “there is a need for a flexible and high-performance system that supports diverse data applications including SQL analytics, real-time streaming, data science, and machine learning. One of the most common missteps is to use multiple systems – a data lake, separate data warehouse(s), and other specialized systems for streaming, image analysis, etc. Having multiple systems adds complexity and prevents data teams from accessing the right data for their use cases.”

