kiwicopple · a year ago
Another amazing release, congrats to all the contributors. There are simply too many things to call out - just a few highlights:

Massive improvements to vacuum operations:

    "PostgreSQL 17 introduces a new internal memory structure for vacuum that consumes up to 20x less memory."
Much needed features for backups:

    "pg_basebackup, the backup utility included in PostgreSQL, now supports incremental backups and adds the pg_combinebackup utility to reconstruct a full backup"
I'm a huge fan of FDW's and think they are an untapped gem in Postgres, so I love seeing these improvements:

    "The PostgreSQL foreign data wrapper (postgres_fdw), used to execute queries on remote PostgreSQL instances, can now push EXISTS and IN subqueries to the remote server for more efficient processing."

peiskos · a year ago
A bit off topic, but can someone suggest how I can learn more about using databases (Postgres specifically) in real-world applications? I am familiar with SQL and common ORMs, but I feel the internet is full of beginner-level tutorials that lack this depth.
Superfud · a year ago
For PostgreSQL, the manual is extremely well written, and is warmly recommended reading. That should give you a robust foundation.
brunoqc · a year ago
I batch import XMLs, CSVs and mssql data into postgresql.

I'm pretty sure I could read them when needed with fdw. Is it a good idea?

I think it can be slow but maybe I could use materialized views or something.

mind-blight · a year ago
I've been using duckdb to import data into postgres (especially CSVs and JSON) and it has been really effective.

Duckdb can run SQL across the different data formats and insert or update directly into postgres. I run duckdb with python and Prefect for batch jobs, but you can use whatever language or scheduler you prefer.

I can't recommend this setup enough. The only weird thing I've run into is that a really complex join across multiple postgres tables and parquet files hit a bug reading a postgres column type. I simplified the query (which was a good idea anyway) and it hums away.
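
A minimal sketch of that setup, assuming DuckDB's postgres extension; the connection string, table, and file names here are invented:

    INSTALL postgres;
    LOAD postgres;
    -- attach the target Postgres database
    ATTACH 'dbname=appdb host=localhost user=app' AS pg (TYPE POSTGRES);
    -- read the CSV with DuckDB and load it straight into a Postgres table
    INSERT INTO pg.public.events
    SELECT * FROM read_csv_auto('events.csv');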

kiwicopple · a year ago
“it depends”. Some considerations for mssql:

- If the foreign server is close (latency) that’s great

- if your query is complex then it helps if the postgres planner can “push down” to mssql. That will usually happen if you aren’t doing joins to local data

I personally like to set up the foreign tables, then materialize the data into a local postgres table using pg_cron. It’s like a basic ETL pipeline completely built into postgres
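
One way to wire that up, sketched here with a materialized view over the foreign table (object names and the schedule are invented; assumes the foreign table and pg_cron already exist):

    -- materialize the remote data locally
    CREATE MATERIALIZED VIEW local_orders AS
    SELECT * FROM foreign_orders;

    -- refresh it on a schedule with pg_cron
    SELECT cron.schedule('refresh-orders', '*/15 * * * *',
                         'REFRESH MATERIALIZED VIEW local_orders');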

baq · a year ago
check out clickhouse. you might like it.
ellisv · a year ago
> I'm a huge fan of FDW's

Do you have any recommendations on how to manage credentials for `CREATE USER MAPPING` within the context of cloud-hosted DBs?

darth_avocado · a year ago
If your company doesn't have an internal tool for storing credentials, you can always store them in the cloud provider's secrets management tool. E.g. Secrets Manager or Secure String in Parameter Store on AWS. Your CI/CD pipeline can pull the secrets from there.
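
For example, the deploy step could fetch the secret and hand it to psql as a variable. A sketch (the server, role, and variable names are invented):

    -- run through psql, e.g.: psql -v remote_password="$SECRET_FROM_SECRETS_MANAGER" -f mapping.sql
    CREATE USER MAPPING FOR app_user
      SERVER remote_pg
      OPTIONS (user 'replica_reader', password :'remote_password');
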
kiwicopple · a year ago
in supabase we have a “vault” utility for this (for example: https://fdw.dev/catalog/clickhouse/#connecting-to-clickhouse). Sorry I can’t make recommendations for other platforms because I don’t want to suggest anything that could be considered unsafe - hopefully others can chime in

netcraft · a year ago
I remember when the json stuff started coming out, I thought it was interesting but nothing I would ever want to rely on - boy was I wrong!

It is so nice having json functionality in a relational db - even if you never actually store json in your database, it's useful in so many situations.

Being able to generate json in a query from your data is a big deal too.

Looking forward to really learning JSON_TABLE.
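
A tiny sketch of both directions (invented table and column names; JSON_TABLE is the new PG17 piece):

    -- relational -> json
    SELECT json_agg(json_build_object('id', id, 'name', name)) FROM users;

    -- json -> relational, with PG17's JSON_TABLE
    SELECT jt.*
    FROM JSON_TABLE(
      '[{"id": 1, "name": "ada"}, {"id": 2, "name": "grace"}]'::jsonb,
      '$[*]' COLUMNS (id int PATH '$.id', name text PATH '$.name')
    ) AS jt;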

jackschultz · a year ago
Very cool with the JSON_TABLE. The style of putting a json response (from an API, created from scraping, etc.) into a jsonb column and then writing a view on top to parse / flatten is something I've been doing for a few years now. I've found it really great to put the json into a table, somewhere safe, and then do the parsing, rather than dealing with possible errors on the scripting-language side. I haven't seen this style used in other places before, and seeing it in the docs as a feature of the new Postgres makes me feel a bit more sane. Will be cool to try this out and see the differences from what I was doing before!
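
A minimal sketch of that store-then-flatten pattern (table, view, and field names are invented):

    CREATE TABLE api_responses (
      id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      fetched_at timestamptz NOT NULL DEFAULT now(),
      payload    jsonb NOT NULL
    );

    -- parse/flatten lazily in a view, so a bad payload never breaks ingestion
    CREATE VIEW orders_flat AS
    SELECT id,
           (payload ->> 'order_id')::bigint AS order_id,
           payload ->> 'status'             AS status,
           (payload ->> 'total')::numeric   AS total
    FROM api_responses;
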
0cf8612b2e1e · a year ago
A personal rule of mine is to always separate data receipt+storage from parsing. The retrieval is comparatively very expensive and has few possible failure modes. Parsing can always fail in new and exciting ways.

Disk space to store the returned data is cheap and can be periodically flushed only when you are certain the content has been properly extracted.

cjonas · a year ago
Did you mean "retrieval is comparatively inexpensive"? I think I'm on the same page but this threw me off.
erichocean · a year ago
I ended up with the same design after encountering numerous exotic failure modes.
abyesilyurt · a year ago
> putting a json response (from an API, created from scraping, etc.) into a jsonb column and then writing a view on top to parse

That’s a very good idea!

ellisv · a year ago
It is definitely an improvement on multiple `JSONB_TO_RECORDSET` and `JSONB_TO_RECORD` calls for flattening nested json.
pestaa · a year ago
Very impressive changelog.

Bit sad the UUIDv7 PR didn't make the cut just yet:

https://commitfest.postgresql.org/49/4388/

ellisv · a year ago
I've been waiting for "incremental view maintenance" (i.e. incremental updates for materialized views) but it looks like it's still a few years out.
JamesSwift · a year ago
I'm a huge fan of using views as the v1 solution to problems before we need to optimize our approach, and this is the main blocker in those discussions. If we were able to make v2 of the approach an IVM view, we could leverage views much more widely.
whitepoplar · a year ago
There's always the pg_ivm extension you can use in the meantime: https://github.com/sraoss/pg_ivm
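
A small sketch of what that looks like (function name per the project's README; the view definition is invented):

    CREATE EXTENSION pg_ivm;

    -- an incrementally maintained materialized view
    SELECT create_immv('order_totals',
      'SELECT customer_id, sum(total) AS total FROM orders GROUP BY customer_id');
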
garyclarke27 · a year ago
Agreed, it's a big disappointment that "incremental view maintenance" is taking so long to get into core, despite several IVM extensions. For me this is by far the most important capability missing from Postgres.
0cf8612b2e1e · a year ago
I must be missing something, because that feels easy to implement: a date in seconds + random data, in the same way as UUIDv4.

Where is the iceberg complexity?
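
For illustration, a rough pure-SQL sketch of that idea: overwrite the first 48 bits of a random UUID with a millisecond timestamp. It deliberately skips the RFC's version/variant bit handling, so it's not a compliant UUIDv7:

    CREATE FUNCTION uuidv7_sketch() RETURNS uuid
    LANGUAGE sql VOLATILE AS $$
      -- 48-bit ms timestamp followed by random bits from a v4 uuid
      SELECT encode(
        overlay(uuid_send(gen_random_uuid())
                placing substr(int8send((extract(epoch from clock_timestamp()) * 1000)::bigint), 3)
                from 1 for 6),
        'hex')::uuid;
    $$;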

lfittl · a year ago
In my understanding it was a timing issue with the UUIDv7 RFC not being finalized before the Postgres 17 feature freeze in early April. Shouldn't be an issue to get this in for Postgres 18, I think.
Meniceses · a year ago
My assumption is that because you can easily do this in software when using UUIDs, and a lot of people probably do it like this, the pressure to support it is a lot lower than expected.
nikita · a year ago
A number of features stood out to me in this release:

1. Chipping away more at vacuum. Fundamentally, Postgres doesn't have an undo log and therefore has to have vacuum. It's a trade-off of fast recovery vs, well... having to vacuum. The unfortunate part about vacuum is that it adds load to the system exactly when the system needs all the resources. I hope one day people stop knowing that vacuum exists; we are one step closer, but not there yet.

2. Performance gets better and not worse. Mark Callaghan blogs about MySQL and Postgres performance changes over time, and MySQL keeps regressing while Postgres keeps improving.

https://x.com/MarkCallaghanDB and https://smalldatum.blogspot.com/

3. JSON. Postgres keeps improving QOL for interop with JS and TS.

4. Logical replication is becoming a super robust way of moving data in and out. This is very useful when you move data from one instance to another, especially if version numbers don't match. Recently we have been using it to move data at 1 Gb/s (see the sketch after this list).

5. Optimizer. The better the optimizer, the less you think about the optimizer. According to the research community, SQL Server has the best optimizer. It's very encouraging that the PG optimizer gets better with every release.
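
The logical replication flow mentioned in item 4 is roughly this (names are invented; the destination needs the schema created up front):

    -- on the source instance
    CREATE PUBLICATION move_pub FOR TABLE orders, customers;

    -- on the destination instance
    CREATE SUBSCRIPTION move_sub
      CONNECTION 'host=old-primary dbname=appdb user=replicator'
      PUBLICATION move_pub;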

sgarland · a year ago
MySQL can be faster in certain circumstances (mostly range selects), but only if your schema and queries are designed to exploit InnoDB’s clustering index.

But even then, in some recent tests I did, Postgres was less than 0.1 msec slower. And if the schema and queries were not designed with InnoDB in mind, Postgres had little to no performance regression, whereas MySQL had a 100x slowdown.

I love MySQL for a variety of reasons, but it’s getting harder for me to continue to defend it.

on_the_train · a year ago
My boss insisted on the switch from Oracle to MSSQL, because "you can't trust open source for business software". Oh, the pain.
bityard · a year ago
I ran into a lot of that 20 years ago, surprised to hear it's still a thing at all given how it's basically common knowledge that most of the Internet and Cloud run on open source software.

I once met an older gentleman who was doing IT work for a defense contractor. He seemed nice enough. We were making small talk and I happened to mention that I had recently installed Linux on my computer at home. His tone changed almost immediately and he started ranting about how Linux was pirated source code, stolen from Microsoft, all of it contains viruses, etc. He was talking about the SCO vs Linux lawsuits but of course got absolutely ALL of the details wrong, like which companies were even involved in the lawsuits. He was so far off the deep end that I didn't even try to correct him, I just nodded and smiled and said I was actually running late to be somewhere else...

gigatexal · a year ago
So from one expensive vendor to another? Your boss seems smart. ;-)

What’s the rationale? What do you gain?

systems · a year ago
Well, from one VERY expensive vendor, to another considerably less expensive vendor

Also, MSSQL has a few things going for it, and surprisingly no one seems to even be trying to catch up:

    - Their BI Stacks (PowerBI, SSAS)
    - Their Database Development (SDK) ( https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/sql-database-projects?view=sql-server-ver16 )
The MSSQL BI stack is unmatched: SSAS is the top star of BI cubes, and the second option is not even close.

SSRS is OK and SSIS is passable, but both are still very decent.

PowerBI and family are also the best option for mid-to-large (not FAANG large, just normal large) companies.

And finally, the gem that is database projects: you can program your DB changes declaratively. There is nothing like this on the market and, again, no one is even trying.

The easiest platform to do evolutionary DB development on is MS SQL.

I really wish someone would implement DB Projects (dacpac) for PostgreSQL.

on_the_train · a year ago
Exactly. Supposedly the paid solution ensures long term support. The most fun part is that our customers need to buy these database licenses, so it directly reduces our own pay. Say no to non-technical (or rational) managers :<
marcosdumay · a year ago
Microsoft is way less likely to sue you and a couple of orders of magnitude cheaper than Oracle.

Besides, managing Microsoft licensing is bliss next to Oracle's. And yeah, MSSQL is much better in almost every way than Oracle.

If you only compare those two, it's a no-brainer.

0cf8612b2e1e · a year ago
A boolean column type.
throw351203910 · a year ago
What your boss doesn't realize is your business already depends on FOSS. Here are a few examples:

- Do you use any cloud provider? Those platforms are built on top of open source software: Linux, nginx (e.g. Cloudflare's edge servers before the Rust rewrite), HAProxy (AWS ELB), etc.

- The software your business builds or depends on probably uses open source libraries (e.g. libc)

- The programming languages your business uses directly or indirectly are probably open source

My point is that folks who make these kinds of statements have no clue how their software is built or what kind of software their business actually depends on.

WuxiFingerHold · a year ago
SQL Server (MSSQL) is not bad at all, just expensive.

The part about your boss not trusting Postgres is hilarious, of course.

jmull · a year ago
Well, you can't necessarily trust open source for business software.

The more deeply your business depends on something, the more careful you need to be when selecting the source for that something. (And the db is often very deeply depended on.)

You want to see that their long-term incentives align with your own needs.

But a revenue stream is just one way to do this, and not a perfect one. (Case in point: Oracle.)

In my experience, SQL Server isn't bad though. I know a couple commercial products that started with SQL Server in the late '90s and remain happy with it now. The cost hasn't been a problem and they like the support and evolution of the product. They find they can adopt newer versions and features when they need to without too much pain/cost/time.

(Not that I don't think Postgres is generally a better green-field pick, though, and even more so porting away from Oracle.)

baq · a year ago
mssql is a great rdbms. t-sql is... different... in certain places but all in all if cost isn't a big issue you really can't go wrong by picking it.
Rican7 · a year ago
Wow, yea, the performance gains and new UX features (JSON_TABLE, MERGE improvements, etc) are huge here, but these really stand out to me:

> PostgreSQL 17 supports using identity columns and exclusion constraints on partitioned tables.

> PostgreSQL 17 also includes a built-in, platform independent, immutable collation provider that's guaranteed to be immutable and provides similar sorting semantics to the C collation except with UTF-8 encoding rather than SQL_ASCII. Using this new collation provider guarantees that your text-based queries will return the same sorted results regardless of where you run PostgreSQL.
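
A quick sketch of those two features together (invented table; pg_c_utf8 is, as far as I can tell, the collation the new builtin provider ships with):

    CREATE TABLE events (
      id         bigint GENERATED ALWAYS AS IDENTITY,  -- identity column on a partitioned table
      label      text COLLATE pg_c_utf8,               -- built-in, platform-independent collation
      created_at timestamptz NOT NULL
    ) PARTITION BY RANGE (created_at);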

ktosobcy · a year ago
Would be awesome if PostgreSQL would finally add support for seamless major version upgrades…
dewey · a year ago
ticoombs · a year ago
I use this for my Lemmy instance & lemmy-ansible and it's been great! No longer having to support upgrade scripts and write a complete upgrade process[1] for people to follow has made my life a lot easier! Amazing product

- [1] https://github.com/LemmyNet/lemmy-ansible/blob/main/UPGRADIN...

ktosobcy · a year ago
I heard about that project but it's still somewhat convoluted. Imagine being able to simply use "postgres:latest", or better yet use "postgres:15" and switch to "postgres:16", and it would just update (like any minor version does, or like any other DB, such as MySQL, does).
ellisv · a year ago
I'm curious what you feel is specifically missing.
levkk · a year ago
pg_upgrade is a bit manual at the moment. If the database could just be pointed to a data directory and update it automatically on startup, that would be great.
ktosobcy · a year ago
Being able to simply switch from "postgres:15" to "postgres:16" in Docker, for example (I'm aware of pg_autoupdate but it's external and I'm a bit iffy about using it).

What's more, even outside of Docker, running `pg_upgrade` requires both versions to be present (or having the older binaries handy). Honestly, having the previous version's logic to load and process the database seems like it would be little hassle but would improve upgrading significantly...