londons_explore · 2 years ago
I really wish the postgres query planner would gain the ability to replan a query mid way through execution...

Frequently the most pathological queries (i.e. the dreadfully slow ones) are slow because the query planner lacked some knowledge of the data distribution and couldn't accurately estimate the cost of a particular approach to the query. This can easily have a 1000x impact on execution time (i.e. 1s rather than 1ms).

You will never have 100% accurate table stats - there is always some odd joint distribution you will not capture.
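
For example, here is a minimal sketch (hypothetical table and column names) of the kind of joint distribution that per-column statistics miss: city and country are strongly correlated, so multiplying their individual selectivities can underestimate the row count by orders of magnitude.

    EXPLAIN (ANALYZE)
    SELECT *
    FROM addresses
    WHERE city = 'Auckland'
      AND country = 'New Zealand';
    -- The planner treats the two conditions as independent, so the estimated
    -- row count can land far below the actual one, nudging it towards a plan
    -- (e.g. a parameterised nested loop) that blows up at runtime.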

So instead, allow the query to start, and if progress isn't as fast as the planner expects, feed current progress info back to the planner (pages scanned, tuples matching), and replan with that new data. If the updated plan shows it's quicker to discard current results and restart with a new approach, do that.

Unfortunately, postgres does streaming queries (ie. the first results of a query are sent back to the client before the query is done), which means that significant infrastructural changes would be needed to allow a midway change of plan. Any new plan would need to keep track of which results had already been sent to the client so they aren't resent. Postgres also allows a client to, midway through a query, request that the query reverse direction and re-return previous results in reverse order. That adds a lot of complexity.

davidrowley · 2 years ago
(blog author and Postgres committer here) I personally think this would be nice to have. However, I think the part about sending tuples to the client is even trickier than you've implied above. It's worse because a new plan does not even guarantee that the same tuples are returned. e.g. if you wrote SELECT * FROM table LIMIT 10, there's no ORDER BY, so which tuples are returned is non-deterministic. It may be easier to do something like queue up X tuples and only start sending them out once the queue is full. Once the queue is full, we can say that it's too late to replan and we're locked into the current plan. People can set X to whatever they want to increase the window in which the plan can change, at the expense of using more memory and higher latency to the first tuple.
fuy · 2 years ago
Or it could work in a way where the planner has access to data about previous runs of each query, and can use this data to change plans that were proven bad during execution. This way the first execution would be slow, but the planner could self-learn and do better next time. SQL Server has a bunch of similar features in its query optimizer https://learn.microsoft.com/en-us/sql/relational-databases/p....

I'm not sure Postgres has infrastructure to do that, though, because it doesn't have shared plan cache, for example.

londons_explore · 2 years ago
For many queries, even setting X=1 would probably have big benefits. If it takes far longer than expected to find the first result, it's probably time for a new plan.

Implementing only the X=1 case would also dramatically simplify the design of such a feature. Limiting it to read only queries would also make everything much simpler.

sgift · 2 years ago
I have no knowledge of how common queries with ORDER BY vs. without ORDER BY are, but it sounds like a first implementation that only works when ORDER BY is present would be easier and still useful? Or do you think that's not common enough to justify the effort?
nextaccountic · 2 years ago
> It may be easier to do something like queue up X tuples and just start sending those out when the queue is full. When the queue is full, we can say that it's too late to replan and we're locked into the current plan. People can set X to what they want to increase the time that the plan can change at the expense of using more memory and higher latency to get the first tuple.

Maybe warrant a new SQL syntax, like

    select * from table limit 10 queue X

sega_sai · 2 years ago
I think another way to think about it is to allow 'long planning' queries. I.e. where it is allowed to spend a second, or maybe a few seconds choosing the best plan. That may involve collecting more statistics or running a query for a little bit.
davidrowley · 2 years ago
I've considered things like this before but haven't had time to take it much beyond that. The idea was that the planner could run with all expensive optimisations disabled on the first pass, then re-run with more expensive optimisations enabled if the estimated total cost of the plan was above some threshold. It does seem pretty silly to worry about producing a plan in a millisecond for, say, an OLAP query that's going to take 6 hours to complete. On the other hand, we don't want to slow down the planner too much for a query that executes in 0.1 milliseconds.

There'd be a few hurdles to get over before we could get such a feature. The planner currently has a habit of making changes to the parsed query, so we'd either need to not do that, or make a copy of it before modifying it. The former would be best.

mkl · 2 years ago
> Postgres also allows a client to, midway through a query, request that the query reverse direction and re-return previous results in reverse order.

What is that useful for?

jasonjayr · 2 years ago
Paginating with an open cursor, perhaps?
codeflo · 2 years ago
If the sort order isn't fully determined by a query, can the query plan influence the result order? If so, what you're suggesting might be nearly impossible. The new query wouldn't be able to just skip the first N results, it would have to match each individual row against a dictionary of previously sent ones.
davidrowley · 2 years ago
> If the sort order isn't fully determined by a query, can the query plan influence the result order?

Yes. When running a query, PostgreSQL won't make any effort to provide a stable order of rows beyond what's specified in the ORDER BY.
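
For instance (hypothetical table and column names), rows that tie on the ORDER BY key can come back in a different relative order from one execution or plan to the next; adding a unique tiebreaker pins the order down:

    SELECT * FROM events ORDER BY created_at;      -- ties returned in arbitrary order
    SELECT * FROM events ORDER BY created_at, id;  -- fully deterministic ordering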

Sesse__ · 2 years ago
You may be interested in this paper (and the papers it references): https://arxiv.org/pdf/1902.08291
Someone · 2 years ago
> and if progress isn't as fast as the planner expects, feed current progress info back to the planner (pages scanned, tuples matching), and replan with that new data.

That would require keeping track of those stats in every query execution. That has a price that may or may not be worth it.

And yes, you could make that behavior an option, but, for better or for worse, PostgreSQL tends to be opposed to having queries indicate how it should do its work.

mike_hock · 2 years ago
Alternatively, associate some confidence value with the statistics and make conservative choices when confidence is low and/or the expected difference is small.

Sometimes a sequential scan is faster than index lookups for each item. But the sequential scan is a risky gamble, whereas the index lookups have robust performance characteristics. It's not always clear which choice is the conservative one, but often it is.

davidrowley · 2 years ago
Yeah, this is similar to some semi-baked ideas I was talking about in https://www.postgresql.org/message-id/CAApHDvo2sMPF9m=i+YPPU...

I think it should always be clear which option would scale better when faced with more rows than the estimated row count. We should always know this because we already cost for N rows, so it's possible to cost for N+1 rows and use the additional cost to calculate how each plan choice will scale when faced with more rows than expected.

darksaints · 2 years ago
What I think could potentially be done is allow threshold-based alternate plans. For a pseudo example, “if subquery A returns 8 records or fewer, use this plan for subquery B, else use that plan.” It’s an explicit admission that the query planner doesn’t have enough information to make a good decision up front, but can easily be made at a later point in time while in the middle of execution.
davidrowley · 2 years ago
Yes, I think so too. There is some element of this idea in the current version of PostgreSQL. However, it does not go as far as deferring the decision until execution; it's for choosing the cheapest version of a subplan once the plan has been generated for the next query level up. See fix_alternative_subplan() in setrefs.c. It would likely be possible to expand that and have the finished plan contain the alternatives, switching between them according to which one is cheaper for the number of rows that previous executions have seen. Maybe also tagging on some additional details about the crossover point in row count where one becomes cheaper than the other, so that the executor can choose without having to think too hard about it, would be a good idea.

wiradikusuma · 2 years ago
I use this tool to visualize my queries: https://explain.dalibo.com/ (there's also https://www.pgexplain.dev/, last time the output was less nice, but now both look the same)
klysm · 2 years ago
The tool is great and I use it, but I don't really have a deep enough understanding to know how to fix issues in my approach from what looks bad in the plan.
davidrowley · 2 years ago
It's pretty hard to tell if a plan is good or bad from EXPLAIN without using the ANALYZE option. With EXPLAIN ANALYZE you can see where the time is being spent, so you can get an idea of which part of the plan you should focus on.

To know if it's a bad plan, it does take quite a bit of knowledge as you need to know what alternative plans could have been used instead. It takes quite a bit of time to learn that stuff. You need to know what PostgreSQL is capable of. Some computer science knowledge helps here as you'll know, for example, when a hash join is a good way to join a table vs a nested loop.

As for fixing a plan you've identified as bad, that also takes quite a bit of experience. If you understand the EXPLAIN ANALYZE output well, that's a good start. Looking for places where the estimated rows differ from the actual rows can be key. Having an understanding of how Postgres performs the row estimations helps; unfortunately, that's not something that comes easily without looking at the source code. Understanding the tools you have available to change the plan is also useful. Perhaps that's CREATE STATISTICS, or adjusting the stats targets on existing single-column stats, or maybe creating a new index. Having a test environment that allows you to experiment is very useful too.
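
For instance, a rough sketch of what those knobs look like (table, column, and statistics names here are hypothetical):

    -- extended statistics: tell the planner two columns are correlated
    CREATE STATISTICS orders_cust_region_stats (dependencies)
        ON customer_id, region FROM orders;

    -- increase the sample size ANALYZE uses for one column (default target is 100)
    ALTER TABLE orders ALTER COLUMN region SET STATISTICS 1000;

    -- refresh the statistics so the planner can use them
    ANALYZE orders;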

dewey · 2 years ago
There's also https://www.pgmustard.com, which gives you a bit more hints and information on the possible optimizations.
uudecoded · 2 years ago
I read your profile and see that you are a CTO of a fintech. Given that, by what method do you navigate that tool's [explain.dalibo.com] assertion of "It is recommended not to send any critical or sensitive information"?

Is there an explain plan sanitizer that is helpful for this situation?

Defman · 2 years ago
You can download the whole visualizer as a simple html file and use it this way. No need to obfuscate or sanitize anything at all.

https://github.com/dalibo/pev2

williamdclt · 2 years ago
Whatever the domain, a query isn't necessarily critical or sensitive. It only is if it contains personal information (eg querying by a bank account number or a name), or if the query itself is part of your competitive advantage (unlikely)
fabian2k · 2 years ago
Query planner improvements are always welcome, it's a very important part of the DB. Though of course most of the time you notice it is when it's not doing what you want ;-).

One part of this I found rather frustrating is the JIT in newer Postgres versions. The heuristics on when to use it appear not robust at all to me. I've seen this for a rather typical ORM-generated query that is pretty straightforward, but pulls in a lot of other tables via joins. It runs in a few milliseconds without the JIT, but the JIT spent 1-1.5 seconds doing its thing on top of that, which made it incredibly slow for tiny amounts of data.

I know now to just disable the JIT, but this feature can give a pretty terrible impression to users that don't know enough yet to figure out why it's slow. I like Postgres a lot, but enabling the JIT just seems far too dangerous as a default setting to me.
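
For anyone hitting the same thing, a sketch of the relevant settings (the values are illustrative, not recommendations):

    SET jit = off;                         -- disable JIT for the session
    -- or keep it, but raise the cost thresholds so only genuinely
    -- expensive plans get compiled:
    SET jit_above_cost = 1000000;
    SET jit_inline_above_cost = 5000000;
    SET jit_optimize_above_cost = 5000000;

EXPLAIN (ANALYZE) prints a "JIT:" section with compilation timings when JIT was used, which helps confirm whether it is the culprit.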

davidrowley · 2 years ago
> One part of this I found rather frustrating is the JIT in newer Postgres versions. The heuristics on when to use it appear not robust at all to me.

(Author of the blog here and Postgres committer). I very much agree that the code that decides if JIT should be used or not needs work. For PG16, it only takes into account the estimated total cost of the plan and does not consider how many expressions need to be compiled. It's quite fast to compile a few expressions, but if you're querying a partitioned table with hundreds of partitions and the plan contains all those partitions, then the JIT compiler has a lot of work to do. A colleague and I do have some code to improve this. Unsure if it'll make PG17 at this stage.

dur-randir · 2 years ago
If it's not ready for everyone, it probably shouldn't have been made the default, don't you think?
fuy · 2 years ago
One other thing about JIT that I feel is pretty crazy is that the generated code is not cached. It's the most expensive part of query execution a lot of the time; how come it's not cached? I couldn't find good reasons for this looking through the Postgres mailing list discussions around JIT.

Disabling JIT is the way to go for OLTP workloads.

davidrowley · 2 years ago
There's some information about why that does not happen in https://www.postgresql.org/message-id/20211104234742.ao2qzqf...

In particular:

> The immediate goal is to be able to generate JITed code/LLVM-IR that doesn't contain any absolute pointer values. If the generated code doesn't change regardless of any of the other contents of ExprEvalStep, we can still cache the JIT optimization / code emission steps - which are the expensive bits.

A colleague is working on getting this patch into shape. So we might see some caching work get done after the relative pointer work is in.

SigmundA · 2 years ago
Unlike, say, MSSQL or Oracle, PG does not cache plans at all. I think this is mostly due to its multi-process architecture vs. just sharing in-memory plans between threads. In MSSQL a plan can take a while to optimize, including jitting if needed, but it doesn't matter that much because all plans are cached, so when that statement comes in again the plan is ready to go.
Sesse__ · 2 years ago
I believe the JIT is pretty much a failure, yes. It was well-meant, but LLVM just isn't the right tool for this. I've turned it off globally. (I don't use any ORMs, so it's not simply about strange query patterns.)

Query parallelization, on the other hand, can actually be useful—and most importantly, rarely hurts.

aidos · 2 years ago
We hit a curious bug recently on production with the JIT.

I had updated a couple of packages via apt and then all of a sudden a bigger query we run every 5 minutes was failing. Or rather, Postgres just started silently hanging up the connection mid query execution without even putting anything in the logs.

It took me a while of manually running variations of the query under EXPLAIN to see that the ones that ended up using the JIT broke, while those that didn't were OK. Disabled the JIT and everything was OK again.

czl · 2 years ago
Did you try using a prepared statement, so the compilation is done once and the compiled result is reused each time that query is run?
dur-randir · 2 years ago
There's a separate can of worms with prepared statements. The two main ones are:

- parameters are opaque to the planner, so it prefers (or is even forced?) to choose a generic plan over a specific one (sketched below)

- it doesn't play nice with pg_bouncer in transaction mode
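
A sketch of the first issue and the knob that controls it (table and column names are hypothetical; plan_cache_mode exists since PG 12):

    PREPARE q(int) AS SELECT * FROM orders WHERE customer_id = $1;
    EXECUTE q(42);
    -- After a few executions the planner may switch to a generic plan that
    -- ignores the actual parameter value, which can hurt badly on skewed data.
    SET plan_cache_mode = force_custom_plan;  -- always plan with the real value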

fabian2k · 2 years ago
As far as I understand prepared statements don't help here as the JIT output is not saved but generated for each execution. In this case I'm also using an ORM (EF Core) which doesn't expose the ability to prepare statements.
SigmundA · 2 years ago
Only in the same session...
dur-randir · 2 years ago
I disabled JIT after it became default for our installation (~1TB data). Nice try, useful sometimes, but as a default? No, thanks.
twic · 2 years ago
I'd be interested to know how often these changes have an effect in real queries. The "Use Limit instead of Unique to implement DISTINCT, when possible" change in particular feels like it would only apply to very silly queries.

Do the PostgreSQL developers have any source of information about this?

yxhuvud · 2 years ago
I expect it will have an effect fairly often - DISTINCT is something less experienced developers often add to fix their bad queries, and generally the first thing I do when I start to improve performance is to rewrite the query to not need it. So if these improvements to DISTINCT make it more robust to bad queries, then a lot is gained.

It probably won't fix all issues, but any improvements are welcome.

davidrowley · 2 years ago
(Author of the blog and that feature here) This one did crop up on the pgsql-hackers mailing list. I very much agree that it's unlikely to apply very often, but the good thing was that detecting when it's possible is as simple as checking if a pointer is NULL. So, it's very simple to detect, most likely does not apply very often, but can provide significant performance increases when it can be applied.
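
For context, a sketch of the shape it seems aimed at (hypothetical table and column): every DISTINCT column is pinned to a single value by the WHERE clause, so at most one distinct row is possible and a Limit node can stand in for the Sort/Unique or HashAggregate:

    SELECT DISTINCT email FROM users WHERE email = 'someone@example.com';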
maximegarcia · 2 years ago
It would be nice to also optimize SELECT DISTINCT foo FROM bar. It usually performs very poorly on big tables and we have to fall back to a recursive CTE. This comes up a lot with admin builders for filter dropdowns (<select>).
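
For reference, the recursive-CTE workaround (a loose index scan emulation, assuming an index on bar (foo)) looks roughly like this:

    WITH RECURSIVE t AS (
        (SELECT foo FROM bar ORDER BY foo LIMIT 1)
        UNION ALL
        SELECT (SELECT foo FROM bar WHERE foo > t.foo ORDER BY foo LIMIT 1)
        FROM t
        WHERE t.foo IS NOT NULL
    )
    SELECT foo FROM t WHERE foo IS NOT NULL;

Each iteration jumps to the next distinct value via the index instead of scanning every row, which is why it stays fast even on big tables.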
Sesse__ · 2 years ago
The problem is that ORMs have a habit of making very silly queries, and developers insist they cannot write SQL to fix that, because it is somehow impure :-) I doubt this is a very _common_ issue, but I'm not surprised if it shows up every now and then.
946789987649 · 2 years ago
> developers insist they cannot write SQL to fix that, because it is somehow impure :-)

Because then they lose many of the benefits of why they used an ORM in the first place. Though I am a big fanboy of JOOQ for exactly this reason.

joachimma · 2 years ago
Where I used to work we allowed duplicate email addresses in the user table for legacy reasons, but we did not want any new ones entered in the db, so we ran a "select distinct email from users where email = ?" query before creating new users. I don't think we had more than 100 rows with the same email though. Most of the duplicates were test users which could have been removed, but I digress.
devit · 2 years ago
I think it would be really nice to have a "strict mode" (for app testing), where PostgreSQL returns an error if an index would improve the query asymptotically and it doesn't exist (only based on the query itself, not statistics).

And a "CREATE INDICES FOR <sql>" command to create the indices (for app upgrades), plus an automatic index creation mode (for interactive and development use).

In general, the system should be architected so that asymptotically suboptimal execution never happens.

riku_iki · 2 years ago
Why wouldn't they implement hints?
davidrowley · 2 years ago
There is a pg_hint_plan extension. I think the danger with hints is that they might only be correct when written. If the table sizes or data skew changes, they might make things worse. I don't have a link to hand, but last time I recall a discussion on hints there was no general objection to them, providing the implementation could be done in a way that didn't force the planner's hand too strongly and still allowed it to adapt to the underlying data changing. For example, indicating there's a correlation between two columns, rather than specifying a given predicate matches 10 rows.
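
For anyone curious, pg_hint_plan hints are written as a special comment in front of the query; a rough sketch (hypothetical tables and index, hint names from the extension's documentation):

    /*+ IndexScan(o orders_customer_idx) NestLoop(o c) */
    SELECT *
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.customer_id = 42;

The danger described above is visible here: the hints pin the scan and join methods regardless of how the data later grows.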
viraptor · 2 years ago
>I think the danger with hints is that they might only be correct when written.

Not "correct when written", but "scaling as written". That means if you force the execution that scales linearly or quadratically, that's what you get all the time. If the row number increases, you know what will happen. You can monitor that ahead of time and plan for the increase.

On the other hand without the hints, you don't know when and how the plan will change without testing. At some random point postgres can decide to do something terribly stupid and at that point you get to figure out what happened and how to fix that in an emergency mode. Do you know how to adjust the right statistics? Do you need to change the indexes? Do you know how long that will take?

riku_iki · 2 years ago
> I think the danger with hints is that they might only be correct when written. If the table sizes or data skew changes, they might make things worse.

They will work in prod the way the engineer is expecting. The current planner can also change its mood in unpredictable ways and often generates sub-optimal plans for complex queries, because it can't reason about exactly what a specific subquery will return, and you learn about it when queries start running very slowly in production in the middle of the night.

hibikir · 2 years ago
The one hint I'd love to give the planner is that a table holds transactions over time, and that it should not expect today's data to be empty just because it was empty 10 hours ago. It's an extremely common pattern, it makes any statistics gathering based on the percentage of data changed dubious pretty quickly, and it harms a whole lot of real queries, because in data like this, people care most about the recent data.

There are ways to organize data to minimize the issue, but it'd be so much nicer if we could just teach the optimizer that this is the way the data is shaped.
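
One partial mitigation is sketched below (the storage parameters are real, the table name and values are illustrative): make autoanalyze trigger on a fixed number of row changes rather than a fraction of the table, so statistics covering the newest data get refreshed frequently even on a large, append-mostly table.

    ALTER TABLE transactions SET (
        autovacuum_analyze_scale_factor = 0,
        autovacuum_analyze_threshold    = 50000
    );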

dagss · 2 years ago
I feel the best abstraction for hints would be to declare on tables how large you expect them to be -- and even throw errors if no query plan with good scaling can be found.

Say I could declare "assume this table will grow very large", "assume this table will be a small enum table".

And then it would use that information instead of actual table size to guide planning AND throw an error for any query doing a full table scan on a declared-to-be-large table -- so that missing indices can be detected instantly, not after running in prod for some days/weeks.

Google Data Store has this property and it is a joy to work with for a backend developer.

What I am usually after is NOT the fastest plan, but the most consistent and robust plan across test and prod environments.

cryptonector · 2 years ago
IMO hints need to be provided out of band, that is, not in the SQL query itself. To do this it is necessary to have a way to address every table source in every sub-query; then one can have hints as a pile of {table source, hint} pairs. Not that this solves the problem of hints rotting, but being able to separate them from the text of the query at least keeps the query clean, and makes it possible to have different sets of hints for different contexts and different RDBMS versions.
pmontra · 2 years ago
I guess that the solution to this problem can be automated. The DB or an extension to the DB or application code can run the query without hints sometimes and compare the result with the version with hints. If the hinted version is still faster, good. If it is slower, it's time to tell the DBA. Or switch to the unhinted query automatically if it's faster for a large enough number of times.
somat · 2 years ago
I suspect the ideological problem with hints is that if the planner is producing a poor plan, then the correct place to fix that is in the planner.

While I agree with this viewpoint, the problem is that most people don't want to be a Postgres dev. To actually enable people to fix the planner, it would have to be exposed as a runtime service, and unless there was a lot of diligence the planner script would quickly degrade into an unmaintainable mess (low blow: just like most schemas).

sa46 · 2 years ago
Related discussion

Why PostgreSQL doesn't have query hints

https://news.ycombinator.com/item?id=2179433 (60 comments, 2011)

The official stance from the Postgres wiki https://wiki.postgresql.org/wiki/OptimizerHintsDiscussion:

> We are not interested in implementing hints in the exact ways they are commonly implemented on other databases.

> Problems with existing Hint systems: Poor application code maintainability, Interference with upgrades, Encouraging bad DBA habits, Does not scale with data size

I don't fault their stance, but it's frustrating when Postgres picks a stupid plan and can't be convinced to do something reasonable.

s08148692 · 2 years ago
> when Postgres picks a stupid plan and can't be convinced to do something reasonable.

In my experience it can always be convinced to make a reasonable plan, but it's not always trivial. Sometimes it's just adding an index, sometimes it's entirely rewriting a query

andybak · 2 years ago
A friend of mine is a Microsoft DBA for mid-sized companies and was proclaiming how you can't do anything serious with Postgres. He said he was shocked to discover it didn't even have a query planner.

Leaving mocking him to one side for a moment - is there any plausibility to his broader claim that MSSQL can handle things at a scale where Postgres would be a poor choice? My gut instinct is that this is nonsense but I'm not a DBA by a long stretch.

danpalmer · 2 years ago
Yes there is. If what you need is a database that will do pretty much anything well enough, then MSSQL and Oracle are going to manage it. They solve this by throwing money and hardware (more money) at the problem until it works. There's some clever stuff happening in there of course, but fundamentally they've just had much more engineering work over a long time. They can scale out more than Postgres can reasonably do.

That said, Postgres is catching up, and arguably MySQL/MariaDB has always had a good story here. Scale-out options are improving all the time. The landscape has also changed in other ways too, now you can easily have a multi-terabyte Postgres cluster on a small number of machines serving large traffic volumes, and then put your "big data" into a more specialist database. The old world of shoving everything on MSSQL/Oracle may be a bit old-school.

SigmundA · 2 years ago
I develop for MSSQL extensively and PG is missing some things that can be a little surprising.

He might have been referring to the fact that PG doesn't cache query plans or have a way to lock them in. PG replans every statement unless you manually use prepared statements, and those only work per connection. MSSQL will cache plans and reuse them, and has done so for a very long time; consequently the planner can afford to spend more time planning. MSSQL also has hints, and you can even lock a plan.

PG really needs hints, optimizers are great and all but sometimes I know better and I want to make sure it listens to me.

Also, PG has no true clustered indexes; all tables are heaps, which is surprising coming from MSSQL where most people use clustered indexes all the time. Usually your primary key is also set as the clustered index, so that the table IS the index and any lookup on the key has no indirection. Interestingly, SQLite is the opposite: tables always have a clustered index whether you make one or not. MSSQL gives you the choice between heap and index-organized tables.

davidrowley · 2 years ago
> Also PG has no true clustered indexes all tables are heaps which is something most use all the time in MSSQL, usually your primary key is also set as the clustered index so that the table IS the index and any lookup on the key has no indirection.

This is true, but I believe if you have an index-organised table then subsequent indexes would have to reference the primary key. With PostgreSQL, indexes can effectively point to the record in the heap by using the block number and item pointer in the block. This should mean, in theory, index lookups are faster in PostgreSQL than non-clustered index lookups in SQL Server.

I was wondering, is there a concept of Bitmap Index Scans in SQL Server?

PostgreSQL is able to effectively bitwise-AND and bitwise-OR bitmap index scan results from multiple indexes to obtain the subset of tuple identifiers (ctids) or blocks (in lossy mode) that should be scanned in the heap. Effectively, that allows single-column indexes to be used when the query has equality conditions on multiple columns that are indexed individually. Does SQL Server allow this for heap tables? In theory, supporting both allows DBAs to choose, but I wonder how well each is optimised. It may lead to surprises if certain query plan shapes are no longer possible when someone switches to an IOT.
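
To make that concrete, a sketch (hypothetical table and indexes) of the plan shape being described:

    CREATE INDEX ON orders (customer_id);
    CREATE INDEX ON orders (status);

    EXPLAIN SELECT * FROM orders
    WHERE customer_id = 42 AND status = 'shipped';
    -- Can produce a BitmapAnd over two Bitmap Index Scans (one per index),
    -- followed by a single Bitmap Heap Scan over the matching blocks.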

sgarland · 2 years ago
> Also PG has no true clustered indexes all tables are heaps which is something most use all the time in MSSQL

You can rewrite a table in PG to be clustered [0], but it (a) locks the table and (b) is a one-shot, so you have to periodically redo it

> Interesting SQLite is the opposite tables always have clustered index whether you make one or not

AFAICT [1] you have to explicitly make a table `WITHOUT ROWID` to get a Clustered Index in SQLite.

> usually your primary key is also set as the clustered index so that the table IS the index and any lookup on the key has no indirection.

The main problem with this is there is a tendency by people unfamiliar with proper schema design (so, most) to use a UUID – usually v4 – as the PK. This causes no end of performance issues for RDBMS with and without clustered index, but since InnoDB also uses a clustered index, and MySQL is the most-installed RDBMS (modulo SQLite), it happens a lot.

This isn't the fault of the RDBMS; clustered index has some great advantages, as you point out. But it's the reality of the current situation.

[0]: https://www.postgresql.org/docs/current/sql-cluster.html

[1]: https://www.sqlite.org/withoutrowid.html

fabian2k · 2 years ago
Postgres has a query planner, I mean this entire post is about improvements to it. So I think there either was some miscommunication or your friend doesn't know anything about Postgres.

There are very large Postgres databases that seem to work fine, so Postgres can certainly scale. But SQL Server also has some features that Postgres doesn't, and if those are important for you it might work better for your use case. They are, in the end, different databases with different strengths and weaknesses.

mmcgaha · 2 years ago
I have used both for OLTP and data warehousing and both are fine.

I started writing this to say that I would recommend my company move to Postgres if it weren't for vendor-provided applications that require SQL Server, but then I realized how much work it would be for me to replace the things MS includes, like reporting services, integration services, jobs, AD integration, and service broker (notify/listen lacks message types). I don't use analysis services any more, but when I did, that would have been hard to replace too.

This stuff is how they get you. I have no clue how long it would take me to replace all of this but it would not be a good ROI to spend a year replacing what you already have.

PartiallyTyped · 2 years ago
AWS' Aurora seems to be handling things pretty well tbh and is meant as a drop-in replacement for Postgresql and MySQL.
fuy · 2 years ago
Aurora is using native Postgres planner, I believe, probably with some minor enhancements.
thaumasiotes · 2 years ago
> He said he was shocked to discover it didn't even have a query planner.

How did he discover that?

jhoechtl · 2 years ago
Why is this released by citusdata instead of on postgresql.org? Is this a paid-only feature or an open source addition?
Sesse__ · 2 years ago
Because the poster (who also wrote some of the optimizations in question) works for Citus Data.
davidrowley · 2 years ago
Just to clarify. I'm the author of the blog. I work for Microsoft in the Postgres open-source team. All the work mentioned in the blog is in PostgreSQL 16, which is open-source.
jhoechtl · 2 years ago
There was no subtext in my question; I just wanted to know if this is a paid-only feature.