zamalek · 24 days ago
> The index is 214 MB! That's almost half the size of the entire table. So the analysts are happy, but you? Not so much...

This is part of a broader trade-off: write amplification. You'd of course want the most precise index possible - but no matter how you cut it, you are incurring extra I/O on every write - one for the tuple, plus one per index. How you index things is heavily influenced by the mix of reads and writes, and this is why we have data warehouses/read replicas in the first place: they let us avoid write amplification on the write path, while still having fast filtered reads (that are slightly delayed).

If you're dealing with <ridiculous number of users>, there is a good chance that you don't want to be putting BI/OLAP indices on your OLTP database. You probably don't have enough users to worry about this, but if you ever find that your writes are becoming an issue, this is something to consider.

SigmundA · 24 days ago
Would be nice if PG supported clustered indexes (Index Organized Tables in Oracle speak) as an option. If you have a table that's accessed mostly the same way, you can get an index without the write amplification, because the table is the index.
giovannibonetti · 24 days ago
Clustered indexes only save up to 2x write amplification in the very rare case where you're indexing the entire table (e.g. if it has very few columns).

However, that is usually the least of your concerns with write amplification. If you don't batch your writes, you can easily get 100x write amplification. For any primary key or any other index not strongly correlated with your INSERTs, you can get perhaps another 100x write amplification even if you batch your writes.

ComputerGuru · 24 days ago
Another option would be a good way to place indexes on a different physical disk. You could use fast, ephemeral storage, like you can for the WAL, without amplifying the writes to the same device that is your expensive bottleneck. You could rebuild on data loss.

But it would add complexity to detect out-of-sync indexes and tables.
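
A rough sketch of the part that exists today - tablespaces can already place an index on a separate device (the rebuild-on-loss semantics are what's missing); the paths and names below are made up:

```
-- the directory must exist and be owned by the postgres OS user
CREATE TABLESPACE fast_ephemeral LOCATION '/mnt/nvme/pg_indexes';

-- only this index's pages land on the fast device; the table and WAL stay put
CREATE INDEX urls_url_ix ON urls (url) TABLESPACE fast_ephemeral;
```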

wreath · 23 days ago
It does support index-organized tables with the CLUSTER command - or did you mean something else?
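
For reference, a minimal sketch (table and index names are made up) - note that CLUSTER is a one-time physical reorder:

```
-- rewrites the table so rows are physically ordered by the index;
-- later INSERTs/UPDATEs are not kept in that order, so it has to be re-run
CLUSTER urls USING urls_url_ix;
```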
msdrigg · 24 days ago
The most interesting thing for me in this article was the mention of `MERGE` almost in passing at the end.

> I'm not a big fan of using the constraint names in SQL, so to overcome both limitations I'd use MERGE instead:

```
db=# MERGE INTO urls t
USING (VALUES (1000004, 'https://hakibenita.com')) AS s(id, url)
ON t.url = s.url
WHEN MATCHED THEN
    UPDATE SET id = s.id
WHEN NOT MATCHED THEN
    INSERT (id, url) VALUES (s.id, s.url);
MERGE 1
```

I use `insert ... on conflict do update ...` all the time to handle upserts, but it seems like merge may be more powerful and able to work in more scenarios. I hadn't heard of it before.
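
For comparison, the `insert ... on conflict` form of the same upsert would look roughly like this (it needs a unique constraint or unique index on url to act as the arbiter):

```
INSERT INTO urls (id, url)
VALUES (1000004, 'https://hakibenita.com')
ON CONFLICT (url) DO UPDATE
    SET id = EXCLUDED.id;
```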

gshulegaard · 24 days ago
IIRC `MERGE` has been part of standard SQL for a while, but Postgres opted against adding it for many years because its semantics are inherently non-atomic within Postgres's MVCC model.

https://pganalyze.com/blog/5mins-postgres-15-merge-vs-insert...

This is somewhat a personal preference, but I would just use `INSERT ... ON CONFLICT` and design my data model around it as much as I can. If I absolutely need the more general features of `MERGE` and _can't_ design an alternative using `INSERT ... ON CONFLICT`, then I would take a bit of extra time to ensure I handle `MERGE` edge cases (failures) gracefully.

kbolino · 24 days ago
It's kinda hard to handle MERGE failures gracefully. You generally expect the whole thing to succeed, and the syntax deceptively makes it seem like you can handle all the cases. But because of MVCC, you get these TOCTOU-style spurious constraint violations, yet there's no way to address them on a per-row basis, leading to the entire statement rolling back even for the rows that had no issues. If you are designing for concurrent OLTP workloads against the table, you should probably just avoid MERGE altogether. It's more useful for one-off manual fixups.
awesome_dude · 24 days ago
That reference - my initial gut feeling was that `MERGE` felt more readable, but then I read this paragraph

> If you want the generality of MERGE, you have to accept the fact that you might get unique constraint violations, when there are concurrent inserts, versus with INSERT ON CONFLICT, the way it's designed with its speculative insertions, guarantees that you either get an INSERT or an UPDATE and that is true even if there are concurrent inserts. You might want to choose INSERT ON CONFLICT if you need the guarantee.

Basically, `MERGE` is susceptible to a concurrent transaction also doing an `INSERT`, where that `INSERT` and the `MERGE` are unaware of one another, so the `MERGE` can fail with a unique constraint violation on the duplicate value.

MarkusWinand · 23 days ago
INSERT ... ON CONFLICT has fewer surprises in context of concurrency: https://modern-sql.com/caniuse/merge#illogical-errors

Besides portability, there is IMHO nothing against INSERT ... ON CONFLICT if it does what you need.

philjohn · 24 days ago
If you're doing large batch inserts, I've found COPY to be the fastest way, especially if you use the binary data format so there's no parsing overhead on the Postgres server side.
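
Roughly what that looks like (table/column names made up; a server-side file path needs extra privileges, so from an application you'd normally stream the same thing via COPY ... FROM STDIN):

```
-- binary format skips text parsing/escaping on the server,
-- but the client has to produce PostgreSQL's binary COPY format
COPY urls (id, url) FROM '/tmp/urls.bin' WITH (FORMAT binary);
```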
sirfz · 24 days ago
That doesn't work well with conflicts tho iirc
sc68cal · 24 days ago
Great article, shows a lot of interesting PostgreSQL features. I have used PostgreSQL and MySQL for decades, and this article showed me that I have barely scratched the surface of what is possible.
booi · 24 days ago
I've used Postgres for more than a decade and every time I wade into the docs I feel the same way: I'm barely scratching the surface. It's so immensely powerful.
cies · 24 days ago
I love what LLMs are doing for me with PG's SQL. I discovered many features by having LLMs write them for me, often spot-on on the first prompt.

Since I know conceptually how RDBMSes work, I can ask very specifically for what I want. Also, asking for feedback on schemas/queries really helped me. I use a lot more of PG's features now!

whalesalad · 24 days ago
PostgreSQL is like Emacs. It's an operating system disguised as something else.
mistrial9 · 24 days ago
The way PG was originally implemented does have some overlap with operating system design IMHO. PG internals define and use PG tables in an internal schema to implement core architectural features. The PG code that bootstraps the PG environment is minimal in important ways.
jefurii · 21 days ago
I've heard Postgres described as a "database framework".
ComputerGuru · 24 days ago
The "no unique constraints on hash indexes" thing is something I always run into. Am I missing something, or is it just glue code that's missing to translate that syntax into an exclusion constraint with a hash index, allowing foreign keys to reference the column in the process, and possibly also fixing the ON CONFLICT approach in one go?
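
For anyone curious, the exclusion-constraint workaround looks roughly like this (the constraint name is made up):

```
-- enforces uniqueness of url via a hash index, but foreign keys can't
-- reference it and it can't act as an ON CONFLICT ... DO UPDATE arbiter
ALTER TABLE urls
    ADD CONSTRAINT urls_url_excl EXCLUDE USING hash (url WITH =);
```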
sgarland · 24 days ago
Not discussed in TFA: BRIN indices. For the given date example, they would be perfect - if you can guarantee that whatever you’re indexing is monotonic, they’re a godsend. Tiny and performant.
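
For the date case it's a one-liner (table/column names are hypothetical); the index stores only a min/max summary per range of heap pages, which is why it stays tiny:

```
-- summarizes blocks of heap pages instead of indexing every row
CREATE INDEX sale_sold_at_brin ON sale USING brin (sold_at);
```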
ComputerGuru · 24 days ago
Minor correction: you don’t even have to guarantee the data is monotonic, it just performs best when it mostly is (a good example is handling incoming timestamp data on the server, where messages/records are only generally monotonic but may be processed out of order).

How well do they work for UUIDv7? You’d probably have to tune (increase?) pages_per_range, but even though each index entry is 16 bytes, you have to consider that a btree index on the same column is similarly affected (or worse).
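
If someone wants to experiment, a sketch of what I mean (names are hypothetical, id is assumed to be a uuid column, and the pages_per_range value is a guess to tune rather than a recommendation):

```
-- mostly-ordered UUIDv7 keys summarize well with the default minmax opclass;
-- a larger pages_per_range means a smaller index but coarser ranges to scan
CREATE INDEX events_id_brin ON events USING brin (id)
    WITH (pages_per_range = 32);
```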

OptionOfT · 24 days ago
Remarkably fresh content.

It's interesting how both virtual columns and hash indexes work but feel like they're bolted on, rather than being made part of the whole ecosystem so that they work seamlessly.

ComputerGuru · 24 days ago
Virtual columns are basically one or two minor patches from being fully done. Pg18 brought us most of the way there.

Hash indices have long been crippled; they shipped almost unusable but every few years get a good QoL update. I think automatic unique constraints are the big thing left there.

pksunkara · 24 days ago
I think a stored generated column allows you to create an index on it directly. Isn't that a better approach?
tczMUFlmoNk · 24 days ago
The article explains why they want to avoid this option:

> Starting at version 14, PostgreSQL supports generated columns - these are columns that are automatically populated with an expression when we insert the row. Sounds exactly like what we need but there is a caveat - the result of the expression is materialized - this means additional storage, which is what we were trying to save in the first place!
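
For reference, the stored variant the article decided against would look roughly like this (table/column names are assumptions, not the article's schema; the cast is only immutable because sold_at is a plain timestamp, not timestamptz):

```
CREATE TABLE sale (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    sold_at timestamp NOT NULL,
    -- STORED: computed on every write and kept on disk,
    -- which is the extra storage the article wants to avoid
    sold_on date GENERATED ALWAYS AS (sold_at::date) STORED
);

CREATE INDEX sale_sold_on_ix ON sale (sold_on);
```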

pksunkara · 24 days ago
Thanks, missed that part. I would still be interested in knowing how much additional storage that adds, if the OP is interested in updating the article.
zenmac · 24 days ago
> I think a stored generated column allows you to create an index on it directly. Isn't that a better approach?

Is it also possible to create an index (maybe a partial index) on expressions?

masklinn · 24 days ago
That's the first solution (a function-based index); however, it has the drawback of fragility: a seemingly innocent change to the query can lead to it not matching the index's expression anymore. Which is why the article moves on to generated columns.
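
A sketch of that fragility (hypothetical names; assuming sold_at is a plain timestamp so the cast is immutable):

```
CREATE INDEX sale_sold_on_expr_ix ON sale ((sold_at::date));

-- uses the index: the predicate repeats the indexed expression exactly
SELECT * FROM sale WHERE sold_at::date = DATE '2025-01-01';

-- a seemingly equivalent rewrite no longer matches the expression,
-- so this index can't be used for it
SELECT * FROM sale WHERE date_trunc('day', sold_at) = TIMESTAMP '2025-01-01';
```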
rpsw · 24 days ago
I assume it would increase the storage usage, which they say they are trying to avoid in that example.
adrian17 · 23 days ago
Couldn't the first example be better done by having a Plan enum type? It'd both be a little more lightweight than text, and better at handling bad queries (a mistyped filter results in an error instead of an empty result set).
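
A rough sketch of what that would look like (the type name and values are made up, not from the article):

```
CREATE TYPE user_plan AS ENUM ('free', 'pro', 'enterprise');

CREATE TABLE users (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    plan user_plan NOT NULL
);

-- enum values are stored as 4 bytes instead of variable-length text,
-- and a mistyped filter now errors instead of silently matching nothing:
SELECT * FROM users WHERE plan = 'Pro';
-- ERROR:  invalid input value for enum user_plan: "Pro"
```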