mrlongroots commented on ClickHouse vs PostgreSQL UPDATE performance comparison   clickhouse.com/blog/updat... · Posted by u/truth_seeker
sdairs · 8 days ago
One of the post authors here; I just want to stress that the goal of the post was not at all to be "haha we're better than postgres" - we explicitly call out the caveats behind the results.

Prior to July 2025 (ClickHouse v25.7), ClickHouse did not support UPDATE statements. At all. Like most columnar analytics databases. We spent a lot of effort designing and implementing a way to support high-performance, SQL-standard UPDATE statements. This test was pretty much just trying to see how good a job we had done, by comparing ourselves to the gold standard, Postgres. (If you're curious, we also wrote about how we built the UPDATE support in depth https://clickhouse.com/blog/updates-in-clickhouse-2-sql-styl...)

We have some updates to the post in progress; we originally, and deliberately, used cold runs for both ClickHouse & Postgres, because we wanted to look at the "raw" update speed of the engine rather than the variability of cache hits. But TL;DR: when you run a more "real world" test where caches are warm and Postgres is getting a very high cache-hit ratio, its point updates are consistently ~2ms, while ClickHouse is somewhere around ~6ms (bulk updates are still many multiples faster in ClickHouse even with the cache in play).

mrlongroots · 8 days ago
Another thought: this is not meant as criticism or anything, just an aspect of performance that I thought was interesting but not covered by the blog.

A test that would show PG's strengths over ClickHouse for OLTP would be a stress test with a long-running set of updates.

ClickHouse maintains updates as uncompacted patches merged in the background, which is how you would do it with a columnar store. But if you have an update-heavy workload, these patches would accumulate and your query performance would start to suffer. PG, on the other hand, completes all update work inline, and wouldn't see performance degrade under update-heavy regimes.

This is just a fundamental artifact of OLAP vs OLTP, maybe OLAP can be optimized to the point where it doesn't really matter for most workloads, but a theoretical edge remains with row-based stores and updates.

mrlongroots commented on ADHD drug treatment and risk of negative events and outcomes   bmj.com/content/390/bmj-2... · Posted by u/bookofjoe
owkman · 9 days ago
What do you mean by POTS in this context? Postural orthostatic tachycardia syndrome? Would love to hear more
mrlongroots · 9 days ago
Yep! A short summary of my n=1 findings (currently very speculative, but I think sound; I'm meeting with my cardiologist next week to see what he thinks).

The type of ADHD I have seems to have an "autonomic nervous system impairment" component and a symptom profile overlapping with hyperadrenergic POTS.

1. I respond much better to Guanfacine ER (GFC) than stimulants alone (currently complementing with Vyvanse (LDX) 40mg, but I'd rate the Guanfacine as critical)

2. My blood pressure is very volatile, and GFC is supposed to have an impact but did not in my case, at least initially. I'd take GFC at bedtime and LDX in the morning, and at ChatGPT's suggestion, I asked my psych if I could take them together in the morning. Game-changer for my blood pressure: the explanation seems to be that LDX makes my sympathetic nervous system extra stimulated (on top of a poor baseline), and co-timed GFC balances it out.

3. I have poor cardiac endurance, and I find running nearly impossible. I'm a healthy young male who does weights and all. At ChatGPT's suggestion, I wore a Polar H10 and measured my resting heart rate while sitting, and then while standing still. I get a jump from 80bpm to 115bpm-ish, a strong indicator for something orthostatic.

I'm currently exploring rowing (with a concept2). I don't know why but it has a strong impact on my mental state that goes beyond general exercising: something about the rhythmic entrainment it produces, while being recumbent (good for POTS).

mrlongroots commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
Wowfunhappy · 10 days ago
> LLMs have no ability to understand when to stop and ask for directions.

I haven't read TFA so I may be missing the point. However, I have had success getting Claude to stop and ask for directions by specifically prompting it to do so. "If you're stuck or the task seems impossible, please stop and explain the problem to me so I can help you."

mrlongroots · 9 days ago
Ok, I think the confusion arises because the probabilistic nature of LLM responses blurs the line between "intelligent" and "not".

Let's take driving a car as an example, and a random decision generator as a lower bound on the intelligence of the driver.

- A professionally trained human, who is not fatigued or unhealthy or substance-impaired, rarely makes a mistake, and when they do, there are reasonable mitigating factors.

- ML models, OTOH, are very brittle and probabilistic. A model trained on blue-tinted windshields may suffer a dramatic drop in performance if run on yellow-tinted windshields.

Models are unpredictably probabilistic. They do not learn a complete world model, but rather the very specific conditions and circumstances of their training dataset.

They continue to get better, and you are able to induce behavior similar to true intelligence more and more often. In your case, you are able to get them to stop and ask, but if they had the ability to do this reliably, they would not make mistakes as agents at all. Right now they resemble intelligence only under a very specific light, and as the regimes under which they resemble it get bigger, they will get to AGI. But we're not there yet.

mrlongroots commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
antithesizer · 9 days ago
The word you're looking for is "rebuttal" since this is neither proof nor refutation of anything, but merely an argument against the thesis.
mrlongroots · 9 days ago
A rebuttal is just an alias for "counterargument"; it does not define the structure of the counterargument.

However flawed, what I said did have a structure (please refer to my other response in this thread for why).

mrlongroots commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
bubblyworld · 10 days ago
This is not a proof by contradiction - you have stated an assumption followed by a bunch of non sequiturs about what LLMs can and can't do, also known as begging the question. Under the conditions of your assumption (namely that LLMs are plenty powerful with the right context) why would you believe anything in your last paragraph? That's how a proof by contradiction works.

(not saying you are wrong, necessarily, but I don't think this argument holds water)

mrlongroots · 9 days ago
> you have stated an assumption

I don't think I stated an assumption; this is an assertion, worded rhetorically. You are welcome to disagree with it and refute it, but its structural role is not that of an assumption.

"Can an LLM recognize the context deficit and frame the right questions to ask?"

> a bunch of non sequiturs

I'm guessing you're referring to the "canvas or not" bit? The sequitur there was that LLMs routinely fail to execute simple instructions for which they have all the context.

> not saying you are wrong

Happy to hear counterarguments of course, but I do not yet see an argument for why what I said was not structurally coherent as a counterexample, nor anything that weakens the specifics of what I said.

mrlongroots commented on ADHD drug treatment and risk of negative events and outcomes   bmj.com/content/390/bmj-2... · Posted by u/bookofjoe
jakkos · 10 days ago
> ADHD is kind of a weakly differentiated diagnosis that could apply to most people

I don't think this hypothesis would survive a look through the literature on google scholar. ADHD is associated with huge increases in risks of suicide, substance abuse, homelessness, accidents, crime, autoimmune disease, etc etc etc. It's not just "damn I find it hard to focus sometimes".

mrlongroots · 10 days ago
I have diagnosed ADHD and I agree largely with the person you are responding to.

The claim is not that the people diagnosed with ADHD don't have real psychiatric disorders, but that ADHD is a loose umbrella for what are actually disparate problems.

I recently learned that my symptoms, to a large extent, can be explained more accurately as POTS or something adjacent, and the meds I guided my psychiatrist towards were far more helpful than the stimulants I was being prescribed. This was a combination of me, reddit, and later LLMs arriving at me-specific diagnoses that go beyond clinical guideline regimes.

mrlongroots commented on OpenBSD is so fast, I had to modify the program slightly to measure itself   flak.tedunangst.com/post/... · Posted by u/Bogdanp
Const-me · 10 days ago
Not sure if that’s relevant, but when I do micro-benchmarks like that measuring time intervals way smaller than 1 second, I use __rdtsc() compiler intrinsic instead of standard library functions.

On all modern processors, that instruction measures wallclock time with a counter that increments at the base frequency of the CPU, unaffected by dynamic frequency scaling.

Apart from the great resolution, that time-measuring method has the upside of being very cheap, a couple of orders of magnitude faster than an OS kernel call.

mrlongroots · 10 days ago
They could've just used `clock_gettime(CLOCK_MONOTONIC)`.
mrlongroots commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
neom · 10 days ago
Same same human problems. Regardless of their inherent intelligence...humans perform well only when given decent context and clear specifications/data. If you place a brilliant executive into a scenario without meaningful context.... an unfamiliar board meeting where they have no idea of the company’s history, prior strategic discussions, current issues, personnel dynamics...expectations..etc etc, they will struggle just as a model does, surely. They may still manage something reasonably insightful, leveraging general priors, common sense, and inferential reasoning... their performance will never match their potential had they been fully informed of all context and given clear data/objectives. I think context is the primary primitive property of intelligent systems in general?
mrlongroots · 10 days ago
> they will struggle just as a model does, surely

A human will struggle, but they will recognize the things they need to know, and seek out people who may have the relevant information. If asked "how are things going" they will reliably be able to say "badly, I don't have anything I need".

mrlongroots commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
mrlongroots · 10 days ago
I very much disagree. To attempt a proof by contradiction:

Let us assume that the author's premise is correct, and LLMs are plenty powerful given the right context. Can an LLM recognize the context deficit and frame the right questions to ask?

They cannot: LLMs have no ability to understand when to stop and ask for directions. They routinely produce contradictions, fail simple tasks like counting the letters in a word, etc. They cannot even reliably execute my "ok modify this text in canvas" vs "leave canvas alone, provide suggestions in chat, apply an edit once approved" instructions.
