These days, it's hard to go an hour without reading an article about generative AI. Though the phenomenon sometimes referred to as the “steam engine” of the Fourth Industrial Revolution is still in its early stages, there's no doubt that “GenAI” is poised to transform nearly every industry, including finance, healthcare, and law.
While the cool user-facing applications may get the most attention, the companies driving this revolution are currently benefiting the most: Just this month, chipmaker Nvidia briefly became the world's most valuable company, a $3.3 trillion behemoth essentially driven by demand for AI computing power.
But in addition to GPUs (graphics processing units), companies also need infrastructure to manage the flow of data, store it, process it, train on it, analyze it, and ultimately unlock the full potential of AI.
One company looking to capitalize on this is Onehouse, a three-year-old California startup founded by Vinoth Chandar, who created the open source Apache Hudi project while he was a data architect at Uber. Hudi brings the benefits of data warehouses to data lakes, creating what it calls a “data lakehouse,” adding support for capabilities like indexing and running real-time queries against large structured, unstructured, and semi-structured datasets.
For example, an e-commerce company that continuously collects customer data across orders, feedback, and related digital interactions needs a system that ingests all that data and keeps it up to date, which could help it recommend products based on user activity. Hudi allows data to be ingested from various sources with minimal latency and supports deletes, updates, and inserts (“upserts”), which is essential for such real-time use cases.
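To make that concrete, here is a minimal sketch of what a Hudi upsert looks like through Spark's datasource API. It assumes a Spark session launched with the Apache Hudi Spark bundle on the classpath; the table name, record key, and storage path are hypothetical stand-ins for the e-commerce scenario above, not anything from Onehouse.

```python
# Minimal sketch: upserting order events into a Hudi table with PySpark.
# Assumes the Apache Hudi Spark bundle is on the classpath; the table name,
# fields, and path below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-upsert-sketch").getOrCreate()

# A fresh batch of events; a row with an existing order_id updates the prior row.
updates = spark.createDataFrame(
    [("order-1", "cust-42", "shipped", "2024-06-25 10:00:00")],
    ["order_id", "customer_id", "status", "ts"],
)

hudi_options = {
    "hoodie.table.name": "customer_orders",                 # hypothetical table
    "hoodie.datasource.write.recordkey.field": "order_id",  # key used to dedupe
    "hoodie.datasource.write.precombine.field": "ts",       # newest timestamp wins
    "hoodie.datasource.write.operation": "upsert",          # insert-or-update
}

# "append" mode plus operation=upsert merges the batch into the existing table.
updates.write.format("hudi").options(**hudi_options).mode("append").save(
    "s3://example-bucket/lake/customer_orders"  # hypothetical path
)
```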
Onehouse is building a fully managed data lakehouse to help companies adopt Hudi, or, in Chandar's words, “accelerate ingestion and data standardization into open data formats” that can be used by nearly every major tool in the data science, AI, and machine learning ecosystem.
“Onehouse abstracts away low-level data infrastructure construction and helps AI companies focus on their models,” Chandar told TechCrunch.
Today, Onehouse is bringing two new products to market that improve Hudi performance and reduce cloud storage and processing costs, and announced it has raised $35 million in a Series B funding round.
At the (data) lakehouse
Onehouse advertisement on a billboard in London. Image courtesy of Onehouse
Chandar developed Hudi as an internal project at Uber in 2016, and since the ride-hailing company donated the project to the Apache Software Foundation in 2019, Hudi has been adopted by companies including Amazon, Disney, and Walmart.
Chandar left Uber in 2019 and founded Onehouse after a brief stint at Confluent. The startup emerged from stealth in 2022 with $8 million in seed funding and soon after raised a $25 million Series A round, both co-led by Greylock Partners and Addition.
The two venture capital firms teamed up again for the Series B, this time with David Sacks' Craft Ventures leading the round.
“Data lakehouses are quickly becoming the standard architecture for organizations that want to centralize their data to power new services like real-time analytics, predictive ML, and GenAI,” Michael Robinson, partner at Craft Ventures, said in a statement.
For context, data warehouses and data lakes are similar in that both act as central repositories for pooling data, but they do so in different ways: data warehouses are best suited for processing and querying structured, historical data, while data lakes have emerged as a more flexible alternative, storing large amounts of raw data in its original format to support multiple data types and high-performance queries.
This makes data lakes ideal for AI and machine learning workloads: it's cheaper to store raw, untransformed data, and keeping data in its original format leaves it available for more complex queries down the line.
But the trade-off is a whole new layer of data management complexity, and the risk of poor data quality due to the wide variety of data types and formats. This is part of what Hudi aims to solve by bringing key features of data warehouses to data lakes, such as ACID transactions to support data integrity and reliability, and by improving metadata management for more diverse datasets.
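As an illustration of that metadata layer at work, here is a hedged sketch of a Hudi incremental query, which uses the table's commit timeline to read only records written after a given instant, much like change tracking in a warehouse. The session setup, instant, and path are again hypothetical.

```python
# Sketch: an incremental query against a (hypothetical) Hudi table, reading only
# records committed after a given instant on Hudi's timeline.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-incremental-sketch").getOrCreate()

incremental = (
    spark.read.format("hudi")
    .option("hoodie.datasource.query.type", "incremental")
    # Hudi commit instants are timestamps (yyyyMMddHHmmssSSS); this one is made up.
    .option("hoodie.datasource.read.begin.instanttime", "20240625100000000")
    .load("s3://example-bucket/lake/customer_orders")  # hypothetical path
)
incremental.show()
```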
Configuring data pipelines in Onehouse. Image courtesy of Onehouse
Because it's an open source project, any company can adopt Hudi. A quick look at the logos on the Onehouse website reveals some impressive users, including AWS, Google, Tencent, Disney, Walmart, ByteDance, Uber, and Huawei. But the fact that it's mostly names this big leveraging Hudi in-house hints at the effort and resources required to build a data lakehouse around it in an on-premises setup.
“While Hudi offers rich capabilities for data ingestion, management and transformation, enterprises still need to integrate around six open source tools to achieve their goal of a production-quality data lakehouse,” Chandar said.
That's why Onehouse offers a fully managed, cloud-native platform that ingests, transforms, and optimizes a company's data in a fraction of the time a do-it-yourself build would take.
“Users can have an open data lakehouse up and running in under an hour, with broad interoperability with all major cloud-native services, warehouses and data lake engines,” Chandar said.
The company declines to disclose the names of its commercial customers, apart from a few that are cited in case studies, such as Indian unicorn Apna.
“Because we are a young company, we are not disclosing Onehouse's entire commercial client list at this time,” Chandar said.
With $35 million in the bank, Onehouse is now extending its platform with a free tool called Onehouse LakeView. The tool provides observability into lakehouse functionality, surfacing insights such as table statistics, trends, file sizes, and timeline history. It builds on the observability metrics already exposed by the core Hudi project, adding context about workloads.
“Without LakeView, our users would have to spend a lot of time interpreting metrics and deeply understanding the entire stack to get to the root cause of performance issues or inefficiencies in their pipeline configurations,” said Chandar. “LakeView automates this and provides email alerts on positive and negative trends, signaling the need for data management to improve query performance.”
Onehouse is also announcing a new product called Table Optimizer, a managed cloud service that optimizes existing tables to speed up data ingestion and transformation.
“Open and interoperable”
Other big players in the space can't be ignored: companies like Databricks and Snowflake are increasingly embracing the lakehouse paradigm. Earlier this month, Databricks reportedly spent $1 billion to acquire a company called Tabular, with an eye toward creating a common lakehouse standard.
Onehouse has certainly entered a hot field, but hopefully its focus on an “open, interoperable” system that makes it easier to avoid vendor lock-in will help it stand the test of time. Essentially, Onehouse promises to give you universal access to a single copy of your data from just about anywhere, whether that be Databricks, Snowflake, Cloudera, or AWS native services, without the need to build separate data silos for each one.
Similar to Nvidia in the GPU space, it's hard to ignore the opportunities that await every company in the data management space. Data is at the heart of AI development, and not having enough good data is the main reason many AI projects fail. But even with tons of data, companies need the infrastructure to ingest, transform, standardize, and leverage it. This bodes well for Onehouse and companies like it.
“From a data management and processing perspective, we believe that quality data provided by a solid data infrastructure foundation will play a key role in bringing these AI projects to real-world operational use cases and avoiding the issue of garbage-in/garbage-out data,” Chandar said. “Data lakehouse users are struggling to scale their data processing and query needs to build these new AI applications on enterprise-scale data, and we are starting to see that demand.”