tmoertel · 4 months ago
This optimization should provide dramatic speed-ups when taking random samples from massive data sets, especially when the wanted columns can contain large values. That's because the basic SQL recipe relies on a LIMIT clause to determine which rows are in the sample (see query below), and this new optimization promises to defer reading the big columns until the LIMIT clause has filtered the data set down to a tiny number of lucky rows.

    SELECT *
    FROM Population
    WHERE weight > 0
    ORDER BY -LN(1.0 - RANDOM()) / weight
    LIMIT 100  -- Sample size.
Can anyone from ClickHouse verify that the lazy-materialization optimization speeds up queries like this one? (I want to make sure the randomization in the ORDER BY clause doesn't prevent the optimization.)
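For anyone unfamiliar with the recipe, here is a minimal Python sketch of the same exponential-key trick (each row gets the key -ln(1 - U)/weight, and the k smallest keys form the sample). This illustrates the math behind the query, not ClickHouse internals:

```python
import heapq
import math
import random

def weighted_sample(population, weights, k):
    """Weighted sampling without replacement: each row draws the key
    -ln(1 - U) / weight (an Exp(weight) variate), and the k smallest
    keys win -- the same ordering the SQL query's ORDER BY produces."""
    keyed = [
        (-math.log(1.0 - random.random()) / w, item)
        for item, w in zip(population, weights)
        if w > 0  # mirrors WHERE weight > 0
    ]
    return [item for _, item in heapq.nsmallest(k, keyed)]

rows = ["a", "b", "c", "d", "e"]
weights = [10, 1, 1, 1, 1]
sample = weighted_sample(rows, weights, 2)
```

Rows with higher weight draw stochastically smaller keys, so they are proportionally more likely to land in the sample.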

zX41ZdbW · 4 months ago
tmoertel · 4 months ago
Thanks! That's a nice 5x improvement. Pretty good for a query that offers only modest opportunity, given that the few columns it asks for are fairly small (`title` being the largest, which isn't that large).
tschreiber · 4 months ago
Verified:

  EXPLAIN plan actions = 1
  SELECT *
  FROM amazon.amazon_reviews
  WHERE helpful_votes > 0
  ORDER BY -log(1 - (rand() / 4294967296.0)) / helpful_votes
  LIMIT 3
Lazily read columns: review_body, review_headline, verified_purchase, vine, total_votes, marketplace, star_rating, product_category, customer_id, product_title, product_id, product_parent, review_date, review_id

Note that there is a setting query_plan_max_limit_for_lazy_materialization (default value 10) that controls the maximum n for which lazy materialization kicks in for LIMIT n.

geysersam · 4 months ago
Sorry if this question exposes my naivety, but why such a low default limit? What drawback does lazy materialization have that makes such a low limit a good default?

Do you know any example query where lazy materialization is detrimental to performance?

tmoertel · 4 months ago
Awesome! Thanks for checking :-)
ethan_smith · 4 months ago
The optimization should work well for your sampling query since the ORDER BY and LIMIT operations would happen before materializing the large columns, but the randomization function might force early evaluation - worth benchmarking both approaches.
jurgenkesker · 4 months ago
I really like Clickhouse. Discovered it recently, and man, it's such a breath of fresh air compared to suboptimal solutions I used for analytics. It's so fast and the CLI is also a joy to work with.
lukaslalinsky · 4 months ago
I always dismissed ClickHouse because it's all super low level. Building a reliable system out of it requires a lot of internal knowledge. This is the only DB I know of where you may have to deal with actual files on disk in case of problems.

However, I managed to look past that, and oh-my-god it is so fast. It's like the tool is optimized for raw speed, and whatever you do with it is up to you.

nasretdinov · 4 months ago
Yeah, ClickHouse does feel like adult LEGO to me too: it lets you design your data structures and data storage layout, but doesn't force you to implement everything else. If you work at a large enough scale, that's usually exactly what you want from a system.
EvanAnderson · 4 months ago
Same here. I come from a strong Postgres and Microsoft SQL Server background and I was able to get up to speed with it, ingesting real data from text files, in an afternoon. I was really impressed with the docs as well as the performance of the software.
osigurdson · 4 months ago
Having a SQL like syntax where everything feels like a normal DB helps a lot I think. Of course, it works very differently behind the scenes but not having to learn a bunch of new things just to use a new data model is a good approach.

I get why some create new dialects and languages as that way there is less ambiguity and therefore harder to use incorrectly but I think ClickHouse made the right tradeoffs here.

pests · 4 months ago
I remember a few years ago when the view on ClickHouse was that it was "legacy", "bulky", and used by "the big guys", and there wasn't much discussion or opinion of it in spaces like this. Seems like it's come a long way.
ksec · 4 months ago
Lots of Google analytics competitors appeared between 2017 and 2023 due to privacy reasons. And a lot of them started with normal Postgres or MySQL then switched to Clickhouse or simply started with Clickhouse knowing they could scale far better.

At least in terms of capability and reputation, it was already well known by 2021, and certainly not legacy or bulky. At least on HN, ClickHouse is very often submitted and reaches the front page. By comparison, when I submitted MySQL multiple times, no one was interested.

Edit: On another note, Umami is finally supporting ClickHouse! [1] Not sure how they're implementing it, because it still requires Postgres. But it should hopefully be a lot more scalable.

[1] https://github.com/umami-software/umami/issues/3227

teej · 4 months ago
Clickhouse earned that reputation. However, it was spun out of Yandex in 2021. That kickstarted a new wave of development and it’s gotten much better.
theLiminator · 4 months ago
How does it compare to duckdb and/or polars?
thenaturalist · 4 months ago
This is very much an active space, so the half-life of in depth analyses is limited, but one of the best write ups from about 1.5 years ago is this one: https://bicortex.com/duckdb-vs-clickhouse-performance-compar...
nasretdinov · 4 months ago
In my understanding, DuckDB doesn't have its own optimised storage that can accept writes (in the sense that ClickHouse does, where its native storage format gives you the best performance), and instead relies on e.g. reading data from Parquet and other formats. That makes sense for an embedded analytics engine on top of existing files, but might be a problem if you wanted to use DuckDB for, e.g., real-time analytics, where inserted data needs to be available for querying within a few seconds of being inserted. ClickHouse was designed for the latter use case, but at the cost of being a full-fledged standalone service by design. There are embedded versions of ClickHouse, but they are much bulkier and generally less ergonomic to use (although that's a personal preference).
spatular · 4 months ago
ClickHouse is a network server; DuckDB and Polars are in-process databases. It's like Postgres vs SQLite.
shmerl · 4 months ago
What's the story with the Debian package for it? It was removed as unmaintained.
simonw · 4 months ago
Unrelated to the new materialization option, this caught my eye:

"this query sorts all 150 million values in the helpful_votes column (which isn’t part of the table’s sort key) and returns the top 3, in just 70 milliseconds cold (with the OS filesystem cache cleared beforehand) and a processing throughput of 2.15 billion rows/s"

I clearly need to update my mental model of what might be a slow query against modern hardware and software. Looks like that's so fast because in a columnar database it only has to load that 150 million value column. I guess sorting 150 million integers in 70ms shouldn't be surprising.

(Also "Peak memory usage: 3.59 MiB" for that? Nice.)

This is a really great article - very clearly explained, good diagrams, I learned a bunch from it.

amluto · 4 months ago
> I guess sorting 150 million integers in 70ms shouldn't be surprising.

I find sorting 150M integers at all to be surprising. The query asks for finding the top 3 elements and returning those elements, sorted. This can be done trivially by keeping the best three found so far and scanning the list. This should operate at nearly the speed of memory and use effectively zero additional storage. I don’t know whether Clickhouse does this optimization, but I didn’t see it mentioned.

Generically, one can find the kth best of n elements in time O(n):

https://en.m.wikipedia.org/wiki/Selection_algorithm

And one can scan again to find the top k, plus some extra if the kth best wasn’t unique, but that issue is manageable and, I think, adds at most a factor of 2 overhead if one is careful (collect up to k elements that compare equal to the kth best and collect up to k that are better than it). Total complexity is O(n) if you don’t need the result sorted or O(n + k log k) if you do.

If you’re not allowed to mutate the input (which probably applies to Clickhouse-style massive streaming reads), you can collect the top k in a separate data structure, and straightforward implementations are O(n log k). I wouldn’t be surprised if using a fancy heap or taking advantage of the data being integers with smallish numbers of bits does better, but I haven’t tried to find a solution or disprove the existence of one.
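For illustration, the straightforward one-pass heap approach over a read-only stream (O(n log k) time, O(k) extra memory) looks like this in Python. This is a sketch of the generic technique, not ClickHouse's actual implementation:

```python
import heapq
import random

def top_k(stream, k):
    """Single pass, O(n log k) time, O(k) space: keep a min-heap of the
    best k seen so far; each new element either beats the heap's
    current minimum (and replaces it) or is discarded."""
    heap = []
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heapreplace(heap, x)
    # Final sort of k elements adds O(k log k).
    return sorted(heap, reverse=True)

values = [random.randrange(1_000_000) for _ in range(150_000)]
top3 = top_k(values, 3)
```

The heap never holds more than k elements, so memory stays tiny regardless of n, which matches the single-digit-MiB peak memory reported in the article.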

danlark1 · 4 months ago
I am the author of the optimization of partial sorting and selection in Clickhouse. It uses Floyd-Rivest algorithm and we tried a lot of different things back at the time, read [1]

Overall clickhouse reads blocks of fixed sizes (64k) and finds top elements and then does top of the top until it converges.

[1] https://danlark.org/2020/11/11/miniselect-practical-and-gene...
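A toy sketch of that block-at-a-time scheme (using `heapq.nlargest` per block for brevity; the real implementation uses Floyd-Rivest selection as described in the linked post):

```python
import heapq

def blockwise_top_k(values, k, block_size=65536):
    # Take the top k of each fixed-size block, then the top k of the
    # surviving candidates. Working memory stays proportional to k
    # plus one block, never the full data set.
    candidates = []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        candidates.extend(heapq.nlargest(k, block))
    return heapq.nlargest(k, candidates)

top3 = blockwise_top_k(list(range(1_000_000)), 3)
```

With k much smaller than the block size, the second selection pass sees only a few candidates per block, so the "top of the top" step converges quickly.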

Akronymus · 4 months ago
> This can be done trivially by keeping the best three found so far and scanning the list.

That doesn't seem to guarantee correctness. If you don't at least track all of the unique values, you could be throwing away one of the most common values.

The wiki entry seems to be specifically about the smallest, rather than the largest, values.

kevinventullo · 4 months ago
With non-mutable “streaming” input, there is an O(n) algorithm to obtain the unsorted top k with only O(k) extra memory.

The basic idea is to maintain a buffer of size 2k, run mutable unsorted top k on that, drop the smaller half (i.e the lowest k elements), then stream in the next k elements from the main list. Each iteration takes O(k), but you’re processing k elements at a time, so overall runtime is O(n).

When you’re done, you can of course sort for an additional k*log(k) cost.
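A minimal Python sketch of that buffer-doubling idea (using `sorted` as a stand-in for a true O(k) selection step, which makes each compaction O(k log k) instead of O(k)):

```python
def streaming_top_k(stream, k):
    """Unsorted top-k over a read-only stream with O(k) extra memory.

    Keep a buffer of up to 2k elements; whenever it fills, select the
    top k and drop the rest. Each compaction admits k new elements,
    so there are about n/k compactions overall."""
    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) >= 2 * k:
            # Stand-in for an O(k) selection (e.g. quickselect); with
            # sorted() each compaction costs O(k log k), so this sketch
            # runs in O(n log k) rather than the O(n) described above.
            buf = sorted(buf, reverse=True)[:k]
    return sorted(buf, reverse=True)[:k]

top2 = streaming_top_k(iter([5, 1, 9, 9, 2]), 2)
```

Swapping the `sorted` call for a linear-time selection recovers the O(n) total described in the comment, since each O(k) compaction pays for k elements of input.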

simonw · 4 months ago
Maybe they do have that optimization and that explains the 3.59 MiB peak memory usage for ~600MB of integers.
ww520 · 4 months ago
Let's do a back of the envelope calculation. 150M u32 integers are 600MB. Modern SSD can do 14,000MB/s sequential read [1]. So reading 600MB takes about 600MB / 14,000MB/s = 43ms.

Memory like DDR4 can do 25GB/s [2]. It can go over 600MB in 600MB / 25,000MB/s = 24ms.

L1/L2 cache can do 1TB/s [3]. There are 32 CPUs, so that's roughly 32TB/s of aggregate L1/L2 bandwidth, which can go over 600MB in 0.018ms. With the remaining ~3ms of budget, they can pass over the 600MB of data about 166 times.

The rank selection algorithms like QuickSelect and Floyd-Rivest have O(N) complexity. It's entirely possible to process 600MB in 70ms.

[1] https://www.tomshardware.com/features/ssd-benchmarks-hierarc...

[2] https://www.transcend-info.com/Support/FAQ-292

[3] https://www.intel.com/content/www/us/en/developer/articles/t...
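The arithmetic above, spelled out (rough figures copied from the comment; real hardware varies):

```python
# Back-of-the-envelope timings for scanning 150M u32 values.
data_mb = 150_000_000 * 4 / 1e6       # 150M x 4 bytes = 600 MB
ssd_ms = data_mb / 14_000 * 1e3       # at 14,000 MB/s sequential SSD read
dram_ms = data_mb / 25_000 * 1e3      # at 25 GB/s DDR4 bandwidth
l1l2_ms = data_mb / 32_000_000 * 1e3  # at 32 TB/s aggregate L1/L2 bandwidth
passes = 3 / l1l2_ms                  # passes over the data in a ~3 ms budget
```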

codedokode · 4 months ago
They mentioned that they use a 125 MiB/s SSD. However, one can notice that the column seems to contain only about 47,500 unique values (probably there are many reviews with zero or one votes). The column is presumably stored compressed, so it can be loaded much faster.
skeptrune · 4 months ago
Strong and up to date intuition on "slow vs. fast" queries is an underrated software engineering skill. Reading blogs like this one is worth it just for that alone.
baq · 4 months ago
Slow VMs on overprovisioned cloud hosts, which cost as much per month as a dedicated box does per year, have broken a generation of engineers.

You could host so much from your MacBook. The average HN startup could be hosted on a $200 mini PC in a closet for the first couple of years, if not more - and I'm talking expensive here, for the extra RAM you want so you don't have to restart every hour when you have a memory leak.

ramraj07 · 4 months ago
I don't see how that's the root cause. ClickHouse and snowflake run on your so-called slow vms on overprovisioned cloud hosts and they're efficient as hell. It's all about your optimizations.

The real problem is that most engineers don't understand the degree of overprovisioning they do for code that's simple and doing stupid things, written in an inefficient 4th-order language on top of 5 different useless (imo) abstractions.

federiconafria · 4 months ago
Not only that, you have a pile of layers that can be advantageous in some situations but are overkill in most.

I've seen Spark clusters being replaced by a single container using less than 1 CPU core and a few hundred MB of RAM.

sofixa · 4 months ago
Raw compute wise, you're almost right (almost, because real cloud hosts aren't overprovisioned; you get the full CPU/memory/disk reserved for you).

But you actually need more than compute. You might need a database, cache, message broker, scheduler, email sending, and a million other things you can always DIY with FOSS software, but that takes time. If you have more money than time, get off-the-shelf services that provide those with guarantees and maintenance; if not, the DIY route is also great for learning.

rfoo · 4 months ago
> so much from your macbook

At least on cloud I can actually have hundreds of GiBs of RAM. If I want this on my Macbook it's even more expensive than my cloud bill.

kwillets · 4 months ago
ignoreusernames · 4 months ago
Same thing with columnar/vectorized execution. It has been known for a long time that that's the "correct" way to process data for OLAP workloads, but it only became "mainstream" in the last few years (mostly due to Arrow).

It's awesome that ClickHouse is adopting it now, but a shame that it's not standard in anything that does analytics processing.

kwillets · 4 months ago
Nothing in C-Store seems to have sunk in. In ClickHouse's case I can forgive them, since it was an open-source, bootstrapped type of project and their cash infusion seems to be going into basic re-engineering, but in general slowly re-implementing Vertica seems like a flawed business model.
AlexClickHouse · 4 months ago
ClickHouse predates Apache Arrow.
mmsimanga · 4 months ago
IMHO, if ClickHouse had a native Windows release that didn't need WSL or a Linux virtual machine, it would be more popular than DuckDB. I remember MySQL being way more popular than PostgreSQL for years, one of the reasons being that MySQL had a Windows installer.
skeptrune · 4 months ago
Is Clickhouse not already more popular than DuckDB?
nasretdinov · 4 months ago
28k stars on GitHub for DuckDB vs 40k for ClickHouse - pretty close. But, anecdotally, here on HN DuckDB gets mentioned much more often
codedokode · 4 months ago
I was under impression that servers and databases generally run on Linux though.
mmsimanga · 4 months ago
Windows still runs on 71% of desktops and laptops [1]. In my experience, a good number of applications start life on simple desktops and then graduate to servers if they are successful. I work in the field of analytics. I have a locked-down Windows desktop, and I have been able to try out all the other databases, such as MySQL, MariaDB, PostgreSQL and DuckDB, because they have Windows installers or portable apps. I haven't been able to try out ClickHouse. This is my experience and YMMV.

[1] https://en.wikipedia.org/wiki/Usage_share_of_operating_syste...

Onavo · 4 months ago
Reminder: ClickHouse can optionally be embedded; you don't need to reach for DuckDB just because of hype (it's been buggy as hell every time I've tried it).

https://clickhouse.com/blog/chdb-embedded-clickhouse-rocket-...

sirfz · 4 months ago
chDB is awesome, but so is DuckDB.
justmarc · 4 months ago
Clickhouse is a masterpiece of modern engineering with absolute attention to performance.
skeptrune · 4 months ago
>Despite the airport drama, I’m still set on that beach holiday, and that means loading my eReader with only the best.

What a nice touch. Technical information and diagrams in this were top notch, but the fact there was also some kind of narrative threaded in really put it over the top for me.