1: Separating storage and compute tends to result in relatively high startup latency (Databricks, here's looking at you!) when you have to provision new compute. For a massive batch job, waiting 10 minutes for a cluster to come up is fine, but a lot of orgs don't realize the cost implications and end up with tiny developer/analyst clusters, or aggressively spin inactive clusters down, which results in long wait times. Analyst/developer ergonomics have not traditionally been a major concern in the big data space.
1.5: Dealing with multiple files instead of a data warehouse SQL interface requires work, and can introduce interesting performance issues. Obviously you can put a SQL interface in front of the data, but most RDBMS / Data Warehouses have a lot of functionality around maintaining your data that you may not get with the native file format, so you get soft-locked into the metadata format that comes from your data lake file format. It's all open, so this is only a half issue, but there are switching costs.
(I do some training for Databricks)
1. Yeah, cluster startup time can be not fun. Here are some solutions:
- pools (keep instances warm so you don't have to wait for the cloud to provision them)
- serverless SQL warehouses (viable if you're only doing SQL)
- one job with multiple tasks that share the same job cluster (see the sketch after this list); Delta Live Tables does a similar thing, but with streaming autoscaling
- streaming: the cluster never needs to go down, and you can run multiple streams on the same cluster so they load-balance each other
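On the shared-job-cluster item: a rough sketch of what that looks like as a Jobs API 2.1 payload, written here as a Python dict. The job name, notebook paths, and node type are made up; check your workspace's docs for the exact fields.

    # Hypothetical multi-task job: three tasks reuse one job cluster, so the
    # cluster is provisioned once instead of once per task.
    job_spec = {
        "name": "nightly-pipeline",
        "job_clusters": [
            {
                "job_cluster_key": "shared",
                "new_cluster": {
                    "spark_version": "13.3.x-scala2.12",
                    "node_type_id": "i3.xlarge",
                    "num_workers": 4,
                },
            }
        ],
        "tasks": [
            {"task_key": "ingest", "job_cluster_key": "shared",
             "notebook_task": {"notebook_path": "/pipelines/ingest"}},
            {"task_key": "transform", "job_cluster_key": "shared",
             "depends_on": [{"task_key": "ingest"}],
             "notebook_task": {"notebook_path": "/pipelines/transform"}},
            {"task_key": "publish", "job_cluster_key": "shared",
             "depends_on": [{"task_key": "transform"}],
             "notebook_task": {"notebook_path": "/pipelines/publish"}},
        ],
    }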
> The one and a half downsides not mentioned here:
Make that 2.5. If you are building new analytic products, the most significant issue is that 90% of the information in data lakes is already there and not in an open table format like Iceberg. Instead, it's heaps of CSV and Parquet files, maybe in a Hive-style directory layout, but maybe just named in some unique way. If your query engine can't read these, it's like being all dressed up with no place to go.
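As a concrete picture of what "maybe in a Hive-style layout" means in practice, a minimal sketch assuming pyarrow and a hypothetical partitioned directory of Parquet files:

    import pyarrow.dataset as ds

    # Discovers files under a Hive-style layout like .../dt=2024-01-01/part-0.parquet
    dataset = ds.dataset("s3://bucket/events/", format="parquet", partitioning="hive")

    # Partition filter + column projection: only matching files/columns are read.
    table = dataset.to_table(
        columns=["user_id", "amount"],
        filter=ds.field("dt") == "2024-01-01",
    )
    print(table.num_rows)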
I don't see any mention of sqlite. Is a sqlite file not the same thing they're talking about here? Pretty sure it has a spec and hasn't changed formats in many years so if you wanted to read it out in something that isn't sqlite, I imagine it wouldn't be too hard.
Here's one more problem not mentioned by others: SQLite has some functions that it declares but doesn't define. For example, regexp(). So if your table is defined with a constraint that uses regexp()... well, sucks to be you: whether that constraint can be applied at all depends on which regexp() implementation you load.
I know this because I needed this functionality in Ada code, which links statically with SQLite, and I didn't want to also link with the PCRE library just to get regexp(), especially since GNATCOLL already sort of has regexp support... except it lacks the "fancy" features, such as lookaheads / lookbehinds / Unicode support, etc.
So a table that uses regexp() in its constraint definition isn't portable. But if you only use the table data without the schema, you lose a lot of valuable information...
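For anyone who hasn't hit this, a minimal Python sketch of the same trap (the table and pattern are made up). The schema only behaves once you register your own regexp(), and another application opening the same file may register a different implementation, or none at all:

    import re
    import sqlite3

    con = sqlite3.connect(":memory:")

    # SQLite declares the REGEXP operator but ships no regexp() implementation;
    # each application registers its own, so the constraint below is only as
    # portable as that choice.
    con.create_function(
        "regexp", 2, lambda pattern, value: 1 if re.search(pattern, value) else 0
    )

    con.execute(
        "CREATE TABLE users (email TEXT CHECK (email REGEXP '^[^@]+@[^@]+$'))"
    )
    con.execute("INSERT INTO users VALUES ('someone@example.com')")  # constraint holds

    # Without the create_function() call, SQLite errors out with a
    # "no such function" complaint instead of enforcing the constraint.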
----
Also, come to think of it: unlike client-server databases (e.g. MySQL, PostgreSQL), SQLite doesn't have a wire-transfer format. Its interface returns values the way the C language understands them. The on-disk binary format isn't designed for transfer at all, because it's optimized for access efficiency. Among other things, this means an SQLite database file typically has tons of empty space, and it doesn't use efficient (compressed) value representations.
So trying to use the SQLite format for transferring data isn't going to be a good arrangement. It's going to be wasteful and slow.
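If you want to see the difference for yourself, a minimal sketch assuming pandas and pyarrow are installed (file names and data are made up, and the gap depends heavily on the data):

    import os
    import sqlite3

    import pandas as pd

    # A narrow numeric table, purely for illustration.
    df = pd.DataFrame({"id": range(1_000_000),
                       "value": [i * 0.5 for i in range(1_000_000)]})

    con = sqlite3.connect("sample.db")
    df.to_sql("sample", con, index=False)
    con.close()

    df.to_parquet("sample.parquet", compression="zstd")  # columnar + compressed, via pyarrow

    print("sqlite :", os.path.getsize("sample.db"), "bytes")
    print("parquet:", os.path.getsize("sample.parquet"), "bytes")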
The main disadvantage of sqlite compared to these other formats is that sqlite is designed for single machine processing. This can be problematic for huge datasets or complex queries.
These other formats easily support using clusters to process your data.
I agree that sqlite has a number of similar benefits - openness, table abstractions, and concurrent transactions. It's also a library, which is close to how Delta, Iceberg, and Hudi are implemented.
I'm glad it's had an uptick in interest recently, but I haven't seen it mentioned for analytics yet.
Yes, SQLite is row oriented. It's not a very space-efficient format, because it also doesn't support compression or compact binary representations of numbers.
But it doesn't rely on the JVM and a typical JVM ecosystem. That is a big benefit for some use cases, like dealing with numerical data on the edge.
These are metadata layers on top of file formats, expecting to process many files (where the schema may change), typically in object storage, with volumes in excess of a petabyte. There is nothing preventing you from using SQLite with Iceberg, for example - except implementing it (which may involve implementation in your execution engine too). Iceberg already supports Parquet, ORC, and Avro (the last of which is more row oriented than column oriented).
Iceberg and SQLite might be interesting if you wanted to colocate two tables in the same file, for example. A smart enough data access layer with an appropriate execution engine could possibly see the data for both.
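To make the "metadata layer" point concrete, a sketch using the pyiceberg package. The catalog name, URI, table name, and columns are all made up, and the catalog configuration depends entirely on your setup (REST, Glue, Hive metastore, ...):

    from pyiceberg.catalog import load_catalog

    # Hypothetical catalog and table.
    catalog = load_catalog("analytics", uri="http://localhost:8181")
    table = catalog.load_table("sales.orders")

    # The scan is planned from Iceberg metadata (snapshots, manifests); the data
    # files underneath may be Parquet, ORC, or Avro, and the reader doesn't care.
    arrow_table = table.scan(selected_fields=("order_id", "amount")).to_arrow()
    print(arrow_table.num_rows)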
Column orientation is extremely important for query and storage efficiency.
Furthermore, you need a distributed query engine (Athena, BigQuery, etc.), and they all support Parquet.
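A tiny illustration of why that matters on the read side, assuming pyarrow and a hypothetical wide Parquet file:

    import pyarrow.parquet as pq

    # Only the column chunks for these two columns are fetched and decoded;
    # the rest of the (possibly very wide) file is never touched.
    table = pq.read_table("events.parquet", columns=["user_id", "event_ts"])
    print(table.schema)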
I talk to customers basically all day about table formats. Only one customer has really brought up Hudi in a meaningful way. IMO, Hudi is basically out of contention for 95%+ of people looking at table formats.
No! Haven't you heard? The new SQL "killer" is simply tying LLMs into your data and giving it commands in the common business parlance of your choice! kinda. sorta.
> Separating storage and compute tends to result in relatively high startup latency (Databricks, here's looking at you!) when you have to provision new compute.
I see a lot of companies that get sold on Databricks and then are surprised by the cost.
> 90% of the information in data lakes is already there and not in an open format like Iceberg. Instead, it's heaps of CSV and Parquet files.
They're not good structured table formats, but they are open compared to the binary storage you get with Oracle or Snowflake.
There are many other competitive query engines that do not suffer from the startup latency.
You could store your data in a SQLite database, but that's not really interoperable in the way a bunch of Parquet files are. Source: tried it.
I assume it’s row rather than column oriented?
- https://datasette.io/examples
Here is a more technical deep dive comparing the three main open formats - https://www.onehouse.ai/blog/apache-hudi-vs-delta-lake-vs-ap...
I've been dabbling with implementing a database, and adopting a format would be much easier.
It'd be great to agree on some nomenclature. Neither is very descriptive (or simple) in my view.
A lakehouse would be a collection of tables.
There would be some SQL engine, such as Trino, Databricks, or ClickHouse, brokering access to these tables.
The lakehouse might be organised into layers of tables: raw, intermediate, and processed.
The concept of a lakehouse doesn’t really work if you don’t have transactions and updates, which these table formats enable.
I always say, Lakehouse is a stupid name but an amazing concept and I’m sure the data industry will go this way.
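For a concrete picture of the transactions-and-updates point, a sketch of an upsert into a "processed" table using Delta via the delta-spark Python API. The path, column names, and sample rows are made up, and it assumes a Spark session configured with the Delta extensions:

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical incoming batch of changed rows.
    updates_df = spark.createDataFrame(
        [(1, "alice@example.com"), (2, "bob@example.com")],
        ["customer_id", "email"],
    )

    target = DeltaTable.forPath(spark, "s3://lake/processed/customers")
    (
        target.alias("t")
        .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()      # update rows that already exist
        .whenNotMatchedInsertAll()   # insert rows that don't
        .execute()                   # a single atomic commit to the table's log
    )

Iceberg and Hudi expose equivalent MERGE/upsert operations through their own engine integrations.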
I ask because, if I didn't know either word, the one would mean, to me, "tiny storage next to a big body of data" and the other would mean "a big body of data".