crmd · 6 months ago
I am saying this as a lifelong supporter and user of open source software: issues like this are why governments and enterprises still run on Oracle and SQL Server.

The author was able to roll back his changes, but in some industries an unplanned enterprise-wide data unavailability event means the end of your career at that firm, if you don’t have a CYA email from the vendor confirming you were good to go. That CYA email, and the throat to choke, is why Oracle closes 7 and 8 figure licensing deals with enterprises for software that is inferior to the open source options.

It seems that Linux, through Linus’ leadership, has been able to solve this risk issue and fully displace commercial UNIX operating systems. I hope many other projects up and down the stack can have the same success.

atombender · 6 months ago
Sorry, I think you misunderstood this article.

When the author talks about rolling back his changes, he's not referring to a database but to a version of his library. If someone had tried to use his new version, I assume the only thing that would have gone wrong is that their code wouldn't work, because Pandas didn't support the format.

This article is about how a new version of the Parquet format hasn't been widely adopted, and so the Parquet community is now in a split state where different forces are pulling the format in two directions. This happens to be caused by two different areas of focus that don't need to be tightly coupled together.

I don't see how the problems the article discusses relate to the reliability of software.

kristianp · 6 months ago
I think the gp understood the article. They are talking about people's software breaking when the author switched his software to v2 of Parquet.
forinti · 6 months ago
People keep using Oracle because they have a ton of code and migration would be too costly.

Oracle is not immune to software issues. In fact, this year I lost two weekends because of a buggy upgrade on the cloud that left my production cluster in a failed state.

chrismustcode · 6 months ago
A lot of these have business logic literally in the database built up over years.

It’s a mammoth task for them to migrate

taneq · 6 months ago
It’s not about being immune to software issues. It’s about having a vendor to cop the blame if something goes wrong.
1a527dd5 · 6 months ago
Polite disagree; governments and enterprises remain on Oracle / SQL Server because migrating off is borderline Sisyphean. It can be done (we are doing it) but it requires a team working on it non-stop. It's horrible work.
rbanffy · 6 months ago
> The author was able to roll back his changes, but in some industries an unplanned enterprise-wide data unavailability event means the end of your career at that firm

If a (major) software update causes you an outage, you shouldn’t blame the software, but insufficient testing and validation. Large companies (I worked for many) are slow to adopt new technologies precisely because they are extremely cautious and want to make sure everything is properly tested before they roll it out. That’s also why they still use Oracle and SQL Server (and HP-UX, and IBM i): these products are working and have been working for generations of employees. The grass needs to be significantly greener for them to consider the move to the other side of their fence.

duncanfwalker · 6 months ago
At the start of your comment I thought the 'issues like this' were going to be the 4 year discussions about what is and isn't core.
crmd · 6 months ago
So did I :-) but I think the concepts are related: Linus’ ability to shift into autocratic leadership mode when necessary seems to prevent issues like the 4-year indecisiveness on v2/core from compromising product quality, and that is why Linux is trusted in a way that rivals commercial software.
moelf · 6 months ago
and that's why CERN is rocking their own file format, again, in 2025: https://cds.cern.ch/record/2923186
3eb7988a1663 · 6 months ago
To be fair, CERN's needs do seem fairly niche: petabyte numeric datasets with all sorts of access patterns from researchers, all of which they want to keep readable with compatible software forever.
willtemperley · 6 months ago
The reference implementation for Parquet is a gigantic Java library. I'm unconvinced this is a good idea.

Take the RLE encoding which switches between run-length encoding and bit-packing. The way bit-packing has been implemented is to generate 74,000 lines of Java to read/write every combination of bitwidth, endianness and value-length.

I just can't believe this is optimal except in maybe very specific CPU-only cases (e.g. Parquet-Java running on a giant cluster somewhere).

If it were just bit-packing I could easily offload a whole data page to a GPU and not care about having per-bitwidth optimised implementations, but having to switch encodings at random intervals just makes this a headache.
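
To make the annoyance concrete, here's a rough sketch (my own, written from the published encoding spec, not anything in parquet-java) of the run switching a hybrid-encoded column forces on a decoder:

    import java.nio.ByteBuffer;

    final class HybridRunSketch {
        // Decode out.length values from the RLE/bit-packed hybrid encoding.
        static void decode(ByteBuffer in, int bitWidth, int[] out) {
            int pos = 0;
            while (pos < out.length) {
                int header = readUnsignedVarInt(in);
                if ((header & 1) == 0) {
                    // RLE run: one value in ceil(bitWidth/8) little-endian bytes, repeated (header >>> 1) times.
                    int count = header >>> 1;
                    int value = 0;
                    for (int b = 0; b < (bitWidth + 7) / 8; b++) {
                        value |= (in.get() & 0xFF) << (8 * b);
                    }
                    for (int i = 0; i < count && pos < out.length; i++) out[pos++] = value;
                } else {
                    // Bit-packed run: (header >>> 1) groups of 8 values, LSB-first, bitWidth bits each.
                    // Runs start at arbitrary offsets, which is what makes handing a whole
                    // page to a GPU kernel in one go awkward.
                    int count = (header >>> 1) * 8;
                    byte[] packed = new byte[count * bitWidth / 8];
                    in.get(packed);
                    for (int i = 0; i < count && pos < out.length; i++) {
                        int v = 0;
                        for (int bit = 0; bit < bitWidth; bit++) {
                            int abs = i * bitWidth + bit;
                            v |= ((packed[abs >> 3] >> (abs & 7)) & 1) << bit;
                        }
                        out[pos++] = v;
                    }
                }
            }
        }

        static int readUnsignedVarInt(ByteBuffer in) {
            int v = 0, shift = 0, b;
            do {
                b = in.get() & 0xFF;
                v |= (b & 0x7F) << shift;
                shift += 7;
            } while ((b & 0x80) != 0);
            return v;
        }
    }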

It would be really nice if actual design documents existed that specified why this is a good idea, based on real-world data patterns.

ignoreusernames · 6 months ago
> The reference implementation for Parquet is a gigantic Java library. I'm unconvinced this is a good idea.

I haven't thought much about it, but I believe the ideal reference implementation would be a highly optimized, service-like process that you run alongside your engine, using Arrow to share zero-copy buffers between the engine and the Parquet service. Parquet predates Arrow by quite a few years and Java was (unfortunately) the standard for big data stuff back then, so they simply stuck with it.

> The way bit-packing has been implemented is to generate 74,000 lines of Java to read/write every combination of bitwidth, endianness and value-length

I think they did this to avoid dynamic dispatch in Java. In C++ or Rust something very similar would happen, but at the compiler level, which is a much saner way of doing this kind of thing.

willtemperley · 6 months ago
Actually looking at the DuckDB source I think they re-use a single uint64 and push bits onto this a byte at a time, until bitwidth is reached, then right-shift bitwidth bits back off when a single value has been created. Very neat and presumably quick.
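
Roughly what that looks like transplanted to Java (DuckDB itself is C++, so this is my own paraphrase of the technique, not its code):

    final class LsbBitUnpacker {
        static int[] unpack(byte[] in, int bitWidth, int valueCount) {
            int[] out = new int[valueCount];
            long buffer = 0;          // bits pushed on but not yet consumed, low bits first
            int bitsHeld = 0;
            int inPos = 0;
            long mask = (1L << bitWidth) - 1;
            for (int i = 0; i < valueCount; i++) {
                while (bitsHeld < bitWidth) {                     // push a byte at a time
                    buffer |= (long) (in[inPos++] & 0xFF) << bitsHeld;
                    bitsHeld += 8;
                }
                out[i] = (int) (buffer & mask);                   // low bitWidth bits = next value
                buffer >>>= bitWidth;                             // shift them back off
                bitsHeld -= bitWidth;
            }
            return out;
        }
    }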

I've just had so many issues with the total lack of clarity in this format. They tell you a total_compressed_size for a page, then it turns out the _uncompressed_ page header is included in this - but the documentation barely gives any clues to the layout [1].

The reality:

Each column chunk includes a list of pages written back-to-back, with an optional dictionary page first. Each of these, including the dictionary, is prepended with an uncompressed PageHeader in Thrift format.

It wasn't too hard to write a paragraph about it. It was quite hard looking for magic compression bytes in hex dumps.

Maybe there should be a "minimum workable reference implementation" or something that is slow but easy to understand.
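
Something in that spirit, as a sketch. PageHeaderLite and the stubbed helpers below are made up for illustration; a real reader would deserialize parquet-format's Thrift PageHeader with the compact protocol instead:

    import java.io.DataInputStream;
    import java.io.IOException;

    // Walk one column chunk with the layout described above.
    final class ColumnChunkWalkSketch {

        static final class PageHeaderLite {
            boolean dictionaryPage;     // at most one, and it comes first in the chunk
            int compressedPageSize;     // size of the page body only; the header itself is uncompressed
            int uncompressedPageSize;
            int numValues;              // for data pages
        }

        static void walkColumnChunk(DataInputStream in, long totalValues) throws IOException {
            long seen = 0;
            while (seen < totalValues) {
                PageHeaderLite header = readPageHeader(in);         // uncompressed Thrift header first
                byte[] body = new byte[header.compressedPageSize];  // then the (possibly compressed) page body
                in.readFully(body);
                byte[] page = decompress(body, header.uncompressedPageSize);
                if (!header.dictionaryPage) {
                    seen += header.numValues;                       // decode `page` here
                }                                                   // else: load the dictionary
            }
        }

        static PageHeaderLite readPageHeader(DataInputStream in) throws IOException {
            throw new UnsupportedOperationException("stand-in for a Thrift compact protocol read");
        }

        static byte[] decompress(byte[] body, int uncompressedSize) {
            throw new UnsupportedOperationException("stand-in for the chunk's declared codec");
        }
    }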

[1] https://parquet.apache.org/docs/file-format/data-pages/colum...

quotemstr · 6 months ago
If you're doing IPC to a sidecar for purely numeric computation that you could just as easily do in process, something has gone terribly wrong with your software engineering methodology.
willtemperley · 6 months ago
Addendum: if something is actually decoded by RunLengthBitPackingHybridDecoder but you call the encoding RLE, that's probably because it was a bad idea in the first place. Plus it makes it really hard to search for.
nerdponx · 6 months ago
I'd rather have this file format with an incomplete reference and confusing implementation, than not have this file format at all. Parquet was such a tremendous improvement in quality of life over the prior status quo for anyone that needs to move even moderate amounts of data between systems, or anyone who cares about correctness and bug prevention when working with even the tiniest data sets. Maybe HDF5 and ORC would have filled the niche if Parquet hadn't, but I think realistically we would just be stuck with fragile CSV/TSV.
quotemstr · 6 months ago
74 KLOC for a decoder? That's ridiculous. Use invokedynamic. Yes, people more typically associate invokedynamic with interpreter implementations or whatever, but it's actually perfect for this use case. Generate the right code on demand and let the JVM cache it so that subsequent invocations are just as fast as if you'd written it by hand.
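
For example, a bind-and-cache sketch with MethodHandles (my own names, nothing from parquet-java; a full invokedynamic setup would additionally pin the handle at a constant call site via a bootstrap method):

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    // One generic unpacker; each bit width gets one cached handle with the width bound in.
    final class BoundUnpackers {
        private static final MethodHandle[] CACHE = new MethodHandle[33]; // index = bit width

        static synchronized MethodHandle forBitWidth(int bitWidth) throws ReflectiveOperationException {
            if (CACHE[bitWidth] == null) {
                MethodHandle generic = MethodHandles.lookup().findStatic(
                        BoundUnpackers.class, "unpack",
                        MethodType.methodType(void.class, byte[].class, int[].class, int.class));
                // Bind the width; the resulting handle has type (byte[], int[]) -> void.
                CACHE[bitWidth] = MethodHandles.insertArguments(generic, 2, bitWidth);
            }
            return CACHE[bitWidth];
        }

        // Same generic LSB-first loop sketched upthread; bitWidth is a bound constant per handle.
        static void unpack(byte[] in, int[] out, int bitWidth) {
            long buf = 0; int held = 0, pos = 0; long mask = (1L << bitWidth) - 1;
            for (int i = 0; i < out.length; i++) {
                while (held < bitWidth) { buf |= (long) (in[pos++] & 0xFF) << held; held += 8; }
                out[i] = (int) (buf & mask); buf >>>= bitWidth; held -= bitWidth;
            }
        }
    }

A call site then does forBitWidth(w).invokeExact(bytes, values) (inside a try/catch for Throwable, since that's what MethodHandle.invokeExact declares), and the JIT can, in principle, fold the bound width into the loop.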

Jesus Christ, this isn't 2005 anymore and people need to learn to use the real power of the JVM. It's stuff like this that sets it apart.

viccis · 6 months ago
Yeah, I had to wait years to really use Parquet effectively in Python code back in the 2010s, because there were two main implementations (Pyarrow and Fastparquet), and they were neither compatible with each other nor with Spark. Parquet support is much like JavaScript support in browsers: you only get to use the more advanced features when they are supported compatibly on every platform you expect them to be used on.
quotemstr · 6 months ago
> Although this post might seem like a critique of Parquet, that is not my intention. I am simply documenting what I have learned and explaining the challenges maintainers of an open format face when evolving it. All the benefits and utilities that a format like Parquet has far outweigh these inconveniences.

Yes, it is a critique of Parquet (or at least of its user community). It's a critique that's 100% justified too.

Have we all been so conditioned by corporate training that we've lost the ability to say "hey, this sucks" when it _does_ in fact suck?

We all lose when people communicate unclearly. Here, the people holding back evolution of the format do need to be critiqued, and named, and shamed, and the author shouldn't have been so shy about doing it.

adrian17 · 6 months ago
I was quite confused when I learned that the spec technically supports metadata about whether the data is already pre-sorted by some column(s); in my eyes it seemed like it would allow some no-brainer optimizations. And yet, last I checked, it looked like pretty much nothing actually uses it, and some libraries don't even read this field at all.
sighansen · 6 months ago
As long as Iceberg and Delta Lake don't support v2, adoption will be really hard. I'm working a lot with Parquet and wasn't even aware that there is a version 2.0.
lolive · 6 months ago
Why wouldn't they adopt the v2.0?
mr_toad · 6 months ago
Version 1 took about ten years before it became de rigueur. Version 2 is hot off the press.
1a527dd5 · 6 months ago
https://www.jeronimo.dev/the-two-versions-of-parquet/#perfor...

First paragraph under that heading has a markdown error:

    which I hadn’t considered in [my previous post on compression algorithms]](/compression-algorithms-parquet/).

atbpaca · 6 months ago
It's similar with Apache Spark and Scala versions. Spark ran on Scala 2.12 for a long time before eventually supporting 2.13. To this day, there are no plans to support Scala 3.x. Databricks started supporting 2.13 only in May this year...