Readit News
Boxxed commented on The borrowchecker is what I like the least about Rust   viralinstruction.com/post... · Posted by u/jakobnissen
Boxxed · a month ago
> Use fewer references and copy data. Or: "Just clone".

> This is generally good advice. Usually, extra allocations are fine, and the resulting performance degradation is not an issue. But it is a little strange that allocations are encouraged in an otherwise performance-focused language, not because the program logic demands it, but because the borrowchecker does.

I often end up writing code that seems to do a million tiny clones. I've always been a little worried about fragmentation and such, but it's never been that much of an issue -- I'm sure one day it will be. I've often wanted a dynamically scoped allocator for that reason.
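For illustration, here is a minimal Rust sketch of the two patterns in question: the "just clone" style of many tiny heap allocations versus routing the same copies through a scoped arena. The `bumpalo` crate is used here only as one example of the "dynamically scoped allocator" idea, not as something the comment itself prescribes.

```rust
// Hypothetical sketch: "just clone" vs. arena-backed copies.
// bumpalo is an assumption here -- any region/arena allocator would do.
use bumpalo::Bump;

// The "just clone" pattern: each element gets its own tiny heap allocation,
// which keeps the borrow checker happy but scatters small allocations around.
fn collect_cloned(words: &[&str]) -> Vec<String> {
    words.iter().map(|w| w.to_string()).collect()
}

// The arena variant: the same copies are made, but they all live in one bump
// arena and are freed together when the arena is dropped -- roughly the
// "dynamically scoped allocator" behavior wished for above.
fn collect_in_arena<'a>(bump: &'a Bump, words: &[&str]) -> Vec<&'a str> {
    words.iter().map(|w| &*bump.alloc_str(w)).collect()
}

fn main() {
    let words = ["a", "million", "tiny", "clones"];

    let cloned = collect_cloned(&words);

    let bump = Bump::new();
    let arena_backed = collect_in_arena(&bump, &words);

    assert_eq!(cloned.len(), arena_backed.len());
}
```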

Boxxed commented on Peasant Railgun   knightsdigest.com/what-ex... · Posted by u/cainxinth
jks · 2 months ago
Why the number 2,280? What keeps you from adding peasants until your projectile travels at 0.99c?
Boxxed · 2 months ago
...also, why does it have to be a ladder? Where does the ladder come from? And why can't you have all 2280 peasants just do a normal attack to do 2280d6 (or whatever) damage?
Boxxed commented on I Switched from Flutter and Rust to Rust and Egui   jdiaz97.github.io/greenbl... · Posted by u/jdiaz97
mort96 · 2 months ago
I agree that this is not a necessary downside to immediate mode GUIs, but we're talking about egui specifically here. AFAIK, egui always redraws at some relatively high rate even when nothing is happening. (I'm having trouble finding documentation about what that rate is though.)
Boxxed · 2 months ago
That's not true: it only re-renders if there's an input event or an animation running. This is very easy to see if you just put a `println!` in your UI logic.

This is also mentioned in the egui docs here https://github.com/emilk/egui#why-immediate-mode:

> egui only repaints when there is interaction (e.g. mouse movement) or an animation, so if your app is idle, no CPU is wasted.
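A quick way to try the `println!` experiment described above -- a minimal sketch only; the exact `eframe` setup (especially the app-creator closure) differs between versions, and this roughly matches recent releases. The print at the top of `update` fires on input or animation, not continuously while the window sits idle.

```rust
// Sketch of the "put a println! in your UI logic" experiment.
struct Counter {
    clicks: u32,
}

impl eframe::App for Counter {
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        // In egui's default reactive mode this only prints when there is an
        // input event or a running animation, not on every display refresh.
        println!("repainting");

        egui::CentralPanel::default().show(ctx, |ui| {
            if ui.button("click me").clicked() {
                self.clicks += 1;
            }
            ui.label(format!("clicks: {}", self.clicks));
        });
    }
}

fn main() -> eframe::Result<()> {
    eframe::run_native(
        "repaint demo",
        eframe::NativeOptions::default(),
        // Recent eframe versions expect the creator to return a Result.
        Box::new(|_cc| Ok(Box::new(Counter { clicks: 0 }))),
    )
}
```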

Boxxed commented on Ancient X11 scaling technology   flak.tedunangst.com/post/... · Posted by u/todsacerdoti
kelnos · 2 months ago
It could, though. GTK has support for mixed DPI, just only for Wayland. There's no reason why it couldn't work on X11. It might be more tricky to get right, but it's just a matter of work.
Boxxed · 2 months ago
"just a matter of work."

Yeah, like every other conceivable feature, ever.

Boxxed commented on Canyon.mid   canyonmid.com/... · Posted by u/LorenDB
Boxxed · 3 months ago
Anyone have the other SoundBlaster era demo .MIDs at hand? I think there was...Self Control? rhythm.mid? A couple others that I can't remember?
Boxxed commented on ClickHouse raises $350M Series C   clickhouse.com/blog/click... · Posted by u/caust1c
zX41ZdbW · 3 months ago
Here are 75 queries from various benchmarks that form the versions benchmark: https://benchmark.clickhouse.com/versions/
Boxxed · 3 months ago
Did you look at the queries? There is not a single join in any of them.
Boxxed commented on United States Digital Service Origins   usdigitalserviceorigins.o... · Posted by u/ronbenton
Boxxed · 3 months ago
> The only reason your agency came into existence is because the government completely failed to deliver on its promises in an exceptionally divisive way.

So...you're mad that the government tried to fix a problem it had?

Boxxed commented on ClickHouse raises $350M Series C   clickhouse.com/blog/click... · Posted by u/caust1c
AlexClickHouse · 3 months ago
Thanks for creating this issue; it is worth investigating!

I see you also created similar issues in Polars: https://github.com/pola-rs/polars/issues/17932 and DuckDB: https://github.com/duckdb/duckdb/issues/17066

ClickHouse has a built-in memory tracker, so even if there is not enough memory, it will stop the query and send an exception to the client, instead of crashing. It also allows fair sharing of memory between different workloads.

You need to provide more info on the issue for reproduction, e.g., how to fill the tables. 16 GB of memory should be enough even for a CROSS JOIN between a 10 billion-row and a 100-row table, because it is processed in a streaming fashion without accumulating a large amount of data in memory. The same should be true for a merge join.

However, there are places where a large buffer might be needed. For example, if you insert data into a table backed by S3 storage, it requires a buffer that can be on the order of 500 MB.

There is a possibility that your machine has 16 GB of memory, but most of it is consumed by Chrome, Slack, or Safari, and not much is left for ClickHouse server.

Boxxed · 3 months ago
Yeah, I feel like I'm on crazy pills: I'm trivially OOM'ing all these big-data tools that everyone loves -- DuckDB OOM'd just loading a CSV file, and Polars OOM'd just reading the first couple of rows of a Parquet file?

I do want to get a better reproduction on ClickHouse, because it seems to hinge on the interplay in the INSERT INTO...SELECT. It's just a bit of work to generate synthetic data with the same profile as my production data (for what it's worth, I did put quite a bit of effort into following the doc guidelines for dealing with low-memory machines).

Boxxed commented on ClickHouse raises $350M Series C   clickhouse.com/blog/click... · Posted by u/caust1c
mplanchard · 3 months ago
Yes (via Clickhouse Cloud, which is pretty reasonably priced).

It’s important to structure your tables and queries in a way that aligns with the ordering keys, in order to optimize how much data needs to be loaded into RAM. You absolutely CANNOT just replicate your existing postgres DB and its primary keys or whatever over to CH. There are tricks like projections and incremental materialized views that can help to get the appropriate “lenses” for your queries. We use incremental MVs to, for example, continuously aggregate all-time stats about tens of billions of records. In general, for CH, space is cheap and RAM is expensive, so it’s better to duplicate a table’s data with a different ordering key than to make an inefficient query.

As long as the queries align with the ordering keys, it is insanely fast and able to enable analytics queries for truly massive amounts of data. We’ve been very impressed.

Boxxed · 3 months ago
Well, that's exactly my complaint. The bug I filed above was pretty much the optimal case (one huge table, one very small table, both ordered by the join key), and it still OOMs.
Boxxed commented on ClickHouse raises $350M Series C   clickhouse.com/blog/click... · Posted by u/caust1c
zX41ZdbW · 3 months ago
There is the "versions benchmark," which includes a lot of queries with JOINs and compares ClickHouse performance on them: https://benchmark.clickhouse.com/versions/
Boxxed · 3 months ago
I don't think that's right; it looks to be a set of 43 queries with zero joins: https://github.com/ClickHouse/ClickBench/blob/main/versions/...

u/Boxxed

Karma: 411 · Cake day: July 4, 2017