samcheng · 9 years ago
Any support for "rolling" partitions? E.g., a partition for data updated less than a day ago, another for data from 2-7 days ago, etc.

I miss this from Oracle; it allows nice index optimizations as the query patterns are different for recent data vs. historical data.

I think it could be set up with a mess of triggers and a cron job... but it would be nice to have a canonical way to do this.
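For reference, that mess of triggers looks roughly like this in the inheritance-based approach (a sketch; table/function names are invented, and the cron piece that creates tomorrow's child and drops expired ones is left out):

    -- Parent plus one child table per day, with CHECK constraints so
    -- constraint exclusion can prune irrelevant children at query time.
    CREATE TABLE events (id bigserial, created_at timestamptz NOT NULL, payload text);

    CREATE TABLE events_2016_12_07 (
        CHECK (created_at >= '2016-12-07' AND created_at < '2016-12-08')
    ) INHERITS (events);

    -- Trigger function that routes inserts to the matching child.
    CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
    BEGIN
        EXECUTE format('INSERT INTO %I SELECT ($1).*',
                       'events_' || to_char(NEW.created_at, 'YYYY_MM_DD'))
        USING NEW;
        RETURN NULL;  -- the row is already stored in the child
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_route BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE events_insert_router();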

jtc331 · 9 years ago
The fundamental issue here is that you'd actually have to move the rows between relations, given that Postgres maintains separate storage etc. for each. There's no good way to do that.

willvarfar · 9 years ago
Living with the cron jobs for a big MySQL db, and wishing the DB understood this seemingly common use-case :(
Klathmon · 9 years ago
Honestly I wouldn't call it "common". It's useful, and if it existed I could see it changing how I design a database, but it's not something I can say I've ever thought about needing before.

But then again, maybe I'm the outlier here.

lobster_johnson · 9 years ago
How does this work in Oracle? Seeing as the partitioning constraint would be time-dependent, wouldn't it need to re-evaluate it at regular intervals in order to shuffle data around? Is the feature explicitly time-oriented?
mulmen · 9 years ago
I don't think Oracle can do this exactly, but the query planner does understand time-based partitions, so if you do something like:

   SELECT * FROM partitioned_table WHERE partition_date_key > SYSDATE - 1;
The query planner will only use the most recent partition. Combine this with Oracle's ability to merge partitions and you get "daily" partitions that become "weekly" partitions when the new week starts. Alternatively, you could wait a month and combine all the days of last month into a single partition, and then even combine months into years.

The partition intervals are based on specific dates/times, not on the relative time from query execution.

Oracle also supports row movement, which is the biggest missing feature here, I believe.
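For the curious, the Oracle side looks roughly like this (partition names invented):

    -- Collapse two adjacent daily partitions into a weekly one.
    ALTER TABLE orders MERGE PARTITIONS day_2016_12_05, day_2016_12_06
        INTO PARTITION week_2016_49;

    -- Let rows migrate between partitions when the partition key is updated.
    ALTER TABLE orders ENABLE ROW MOVEMENT;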

aidos · 9 years ago
I'm always amazed by the PG community - it seems like such a constructive place.

Those patches are absolutely insane. Makes you remember how much hard work goes into building the software you use on a day to day basis.

https://www.postgresql.org/message-id/attachment/45478/0001-...

MarHoff · 9 years ago
I've been professionally focused on PostgreSQL-based work for the last 5 years. At the height of the BigData hype I sometimes felt a little bit off-track, because I never got the time to investigate NoSQL solutions...

Only recently did I realize that being focused on actual data and how to process it inside PostgreSQL was maybe the best way I could have spent my working time. I really can't say what the best part of PostgreSQL is: the hyperactive community, the rock-solid and clear documentation, or the constant roll-out of efficient, non-disruptive, user-focused features...

reactor · 9 years ago
I can see a good amount of quality engineering there, kudos.
egeozcan · 9 years ago
If you also didn't know what exactly partitioned tables are, here's a nice introduction from Microsoft:

https://technet.microsoft.com/en-us/library/ms190787(v=sql.1...

It is for SQL Server, but I assume it would be mostly relevant. Please correct me if I'm wrong.

SideburnsOfDoom · 9 years ago
So this is all about partitioning data into different storage files on the same server? What is the main benefit of that?
pilif · 9 years ago
If you combine the partitions with tablespaces, you can put tables on multiple disks. Let's say you keep a record of all orders you have processed. During day-to-day operation you need, say, the last 2 months of data all the time, but the older data you only need for reporting now and then.

By partitioning, you can keep the recent data on a fast disk and the older data on slower disks while still being able to run reports over the whole dataset.

And once you really don't need the old data any more, you can just bulk-remove partitions which will get rid of everything in that partition without touching anything else.

Even when you don't split over tablespaces, keeping the data that changes often separate from the data that's static and only read gains you some advantages in index management and in disk load when vacuum runs, as it mostly wouldn't have to touch the archive partitions.
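A minimal sketch of that setup (paths and table names invented):

    -- One tablespace per storage tier.
    CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
    CREATE TABLESPACE slow_hdd LOCATION '/mnt/hdd/pgdata';

    -- Recent partition stays on the SSD, older ones move to spinning disk.
    ALTER TABLE orders_2016_12 SET TABLESPACE fast_ssd;
    ALTER TABLE orders_2016_01 SET TABLESPACE slow_hdd;

    -- And once a partition falls out of the retention window:
    DROP TABLE orders_2015_12;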

chime · 9 years ago
> For example, if a current month of data is primarily used for INSERT, UPDATE, DELETE, and MERGE operations while previous months are used primarily for SELECT queries, managing this table may be easier if it is partitioned by month. This benefit can be especially true if regular maintenance operations on the table only have to target a subset of the data. If the table is not partitioned, these operations can consume lots of resources on an entire data set. With partitioning, maintenance operations, such as index rebuilds and defragmentations, can be performed on a single month of write-only data, for example, while the read-only data is still available for online access.

The "General Ledger Entry" table in most accounting systems ends up being millions to billions of rows. Except for rare circumstances, prior periods are read-only due to business rules.

Lozzer · 9 years ago
Metadata operations on partitions can be very fast. One simple example is date-based housekeeping. Deleting a month of data will be quite intensive on most databases, whereas dropping a partition from the table is effectively instant.

Partition switching is also fast. Say you have a summary table that is rolled up by month, but you want to recalculate the summaries every so often. You can build a month into a new table and then switch the new table for a partition in the summary table.

mioelnir · 9 years ago
First, if your query uses the partition key in its WHERE clause, the database knows (can calculate) which partitions can contain a result and which cannot. This means smaller indexes and less data to scan to find the result.

In the MSSQL case - not sure about others, this is where I had to use it - you can also switch data segments between tables indexed over the same partition function and with the same DDL. So you recreate the existing table a second time, create all the required indexes on it (which is fast because the table is empty), and then you switch partitions between them, basically via pointer manipulation. The empty partition is now in the normal table, the data partition in the recreated one. Then you drop the recreated table. This is much more IO-efficient than a DELETE FROM statement.

This switching of course allows for a lot of other fun stuff as well, where you switch out a partition with a couple million rows, then work on it in isolation, switch the partitions back and then only have to "replay" the few rows that hit that partition while they were switched. Which is easy because they are now in the shadow table which is not updated further.

It is of course data and application dependent if you can use these things without affecting your application; but if it is suitable, the gains can be immense.
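In T-SQL the switch itself is a metadata operation, something like this (names invented; the staging table must have identical columns, indexes and partition function):

    -- Detach partition 3 into a staging table, then drop it:
    -- far cheaper than DELETE FROM on millions of rows.
    ALTER TABLE dbo.facts SWITCH PARTITION 3 TO dbo.facts_staging PARTITION 3;
    DROP TABLE dbo.facts_staging;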

fusiongyro · 9 years ago
I have a scenario where we want to keep 12 months of data online. When you go to delete the 13th month of data the traditional way:

- Postgres has to scan the whole table to find the old data

- Postgres marks it as free, but doesn't give it back to the OS

Handling this the naive way winds up being both slow and unproductive. With table partitioning, I just go in and DROP TABLE data_2015_11 and get on with life. It's fast and returns space to the OS.
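A sketch of the contrast (the created_at column name is invented):

    -- The traditional way: full scan, dead tuples, space only returned
    -- to the OS after a VACUUM FULL rewrite.
    DELETE FROM data WHERE created_at < now() - interval '12 months';

    -- The partitioned way: effectively instant, space returned immediately.
    DROP TABLE data_2015_11;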

ktopaz · 9 years ago
I don't get it. Table partitioning is already supported in PostgreSQL and has been for a long time (at least since 8.1); where I work we use table partitioning with PostgreSQL 9.4 on the product we're developing.

https://www.postgresql.org/docs/current/static/ddl-partition...

fabian2k · 9 years ago
As far as I understand, this is about declarative partitioning. So you don't have to implement all the details yourself anymore; you just declare how a table should be partitioned instead of defining tables, triggers, ...
amitlan · 9 years ago
Note that there is no shorthand syntax (yet) where you define a whole partitioning scheme in just one line of DDL.

As of now, you still need to create the root partitioned table with one command specifying the partitioning method (list or range) and partitioning columns (aka PARTITION BY LIST | RANGE (<columns>)), and then a command for every partition specifying the partition bounds. No triggers or CHECK constraints anymore, though. Why that way? Because we then don't have to assume any particular use case for which to provide a shorthand syntax -- like fixed-width/interval range partitions, etc.

That said, having only the syntax described in the last paragraph in the initial version does not preclude offering a shorthand syntax in later releases, if and when we figure out that such a syntax for the more common use cases is useful after all.
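Concretely, the two-step DDL described above looks roughly like this (table and partition names invented):

    -- The root table declares the method and key but holds no data itself.
    CREATE TABLE measurement (
        logdate  date NOT NULL,
        peaktemp int
    ) PARTITION BY RANGE (logdate);

    -- Each partition is a separate command stating its bounds.
    CREATE TABLE measurement_2017_01 PARTITION OF measurement
        FOR VALUES FROM ('2017-01-01') TO ('2017-02-01');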

roller · 9 years ago
The linked patch notes specifically mention the difference between this and table inheritance based partitioning.

  Because table partitioning is less general than table inheritance, it
  is hoped that it will be easier to reason about properties of
  partitions, and therefore that this will serve as a better foundation
  for a variety of possible optimizations, including query planner
  optimizations.

rtkwe · 9 years ago
"Supported" in so far as you could basically roll your own implementation, having it managed by the engine is massively more useful and easier to support and setup. A lot of things are supported if you're willing to bodge it together like that.
sapling · 9 years ago
It sounds like this is column-level partitioning. Each column or set of columns (based on a partitioning expression) is stored as a different subtable (or something similar) on disk. If only a few columns are frequently accessed, they can be put on cache/faster disk, or used for other neat optimizations for join processing.
MarHoff · 9 years ago
Maybe I don't get you, but I don't think so: PostgreSQL is not a columnar database.

If I got this patch right, each partition will have the same data structure and store whole rows (it's even more restrictive than the previous inheritance mechanism, which allowed extending a child table by adding additional columns).

The column or expression only defines in which table an inserted row is supposed to be stored. A single row will never be torn apart. Still, it looks like a foundation that facilitates sharding a big dataset between multiple servers when used in conjunction with foreign data wrappers. However, a lot of performance improvements will still be needed to compete against solid NoSQL projects (in cases where you really have a BigData use case).

But looking a bit further ahead, developing performance improvements on top of an ACID-compliant distributed database seems less difficult than developing a NoSQL project to the point where it becomes ACID.

colanderman · 9 years ago
It is not. The partitions are by row. The novelty is in that it is truly native support, rather than a couple optimizations to make roll-your-own palatable.
lobster_johnson · 9 years ago
No, it's row-level partitioning — splits a single logical table into multiple physical tables.
sapling · 9 years ago
I was wrong. It isn't column partitioning.
tajen · 9 years ago
About donations: I believe PostgreSQL now deserves more advertising and marketing to develop its adoption in major companies and, hence, get more funding. If I donate on the website, it says it will help conferences. Where should I donate?
bigato · 9 years ago
Supposing all partitions are on the same disk, and that you index your data well enough for your usage that Postgres doesn't need to do full table scans, are there any additional performance benefits to partitioning?
lobster_johnson · 9 years ago
Well, anything that reduces the size of a search space helps performance.

Partitioning can drastically improve query times because the planner can use statistics only from a single partition (assuming the query works on a single partition). Postgres uses (among other things) a range histogram of cardinalities to determine the "selectivity" — how many rows a query is likely going to match. If you have a 1B-row table and you're looking for a value that only occurs once (low cardinality), the statistics won't help all that much. But if you partition it so that you're only looking at 1M rows instead of 1B, the planner can be a lot more precise.

Another point is cache efficiency. You want the cache to contain only "hot" data that's actively being queried for. If most of your queries are against new data, then without partitioning, any single page would likely contain tuples from all time periods, randomly intermixed, and so a cached page will contain a lot of data that's not used by any query. (Note that if you use table clustering, which requires regularly running the "CLUSTER" command, you can achieve the same effect at the expense of having to rewrite the entire table.) If you partition by time, you'd ensure that the cache was being used more optimally.
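For reference, that clustering is done like this (index name invented):

    -- Rewrites the whole table in index order; must be re-run as new
    -- rows arrive, since Postgres doesn't maintain the order afterwards.
    CLUSTER orders USING orders_created_at_idx;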

Write access is also helped by partitioning by cold/hot data: B-tree management is cheaper and more cache-efficient if it doesn't need to reorganize cold data along with the hot. And smaller, frequently changed partitions can be vacuumed/analyzed more frequently, while unchanging partitions can be left alone.

gdulli · 9 years ago
1. Flexibility/freedom to distribute partitions in the future if needed.

2. Indexing doesn't work well in all cases. You can be better off scanning entire small partition tables that lack an index on a given column than querying a single very large table, whether or not that column has an index. (Indexes take up space and need to be read from disk if they don't fit in a memory cache, indexes don't work well for low-cardinality columns, etc.)

3. There are operations you can parallelize across a large number of small/medium tables and perform faster or more conveniently than on a single very large table. One of my favorite techniques:

    # alter-tablespace.sh -- moves one of 256 hex-suffixed partitions
    # usage: seq 0 255 | parallel -j16 ./alter-tablespace.sh {}
    hexn=`printf "%02x" $1`
    psql -Atc "ALTER TABLE tablename_${hexn} SET TABLESPACE new_tblspace" -hxxx -Uxxx xxx

4. A nice side effect you get for free from properly/evenly partitioned data is that you can do certain types of analysis on a single partition (or a few) very quickly and have it represent a sampled version of the data set. You can think of it as another index you get for free.

takeda · 9 years ago
To add to the responses you already got, there's also a nice use case that partitioning helps with.

Say you have a table that you're constantly inserting large amounts of data into, and you're removing old data at the same rate (i.e. you only care about the last month of data).

If you partition it per day, for example, it's way faster to drop the old tables than to perform a delete.

Jweb_Guru · 9 years ago
Yes. Less latch contention on the nodes of a single B-tree index, for instance.
bigato · 9 years ago
I didn't know what a latch is, so I googled it and found a nice explanation:

https://oracle2amar.wordpress.com/2010/07/09/what-are-latche...

"A latch is a type of a lock that can be very quickly acquired and freed."

That brings me a couple more questions:

1. May I infer, then, that the only benefit of partitioning a table (fully located on the same disk) that cannot be achieved by indexes is that queries will wait less time for this kind of lock to be released?

2. May I assume that, while a table is only being read and not changed, there's no performance gain from partitioning it (fully located on the same disk) that cannot be achieved by indexes?

gdulli · 9 years ago
This message was confusing to me because I've been using/abusing Postgres inheritance for partitioning for so long that I forgot Postgres didn't technically have a feature called "partitioning".

What I'm looking forward to finding out is whether I can take an arbitrary expression on a column and have it derive all the same benefits of range partitioning, like constraint exclusion.
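If expression keys end up working the way index expressions do (a guess at the eventual syntax, not confirmed against the patch; names invented), it might look like:

    -- Hypothetical: range-partitioning on an expression over a column.
    CREATE TABLE sessions (
        id bigint NOT NULL
    ) PARTITION BY RANGE ((id % 4));

    CREATE TABLE sessions_p0 PARTITION OF sessions FOR VALUES FROM (0) TO (1);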

vincentdm · 9 years ago
I really like this addition. We store a lot of data for different customers, and most of our queries only concern data from a single customer. If I understand it correctly, if we were to partition by customer_id, then once the query planner is able to take advantage of this new feature, such queries will be much faster as they won't have to wade through rows of data from other customers.

Another common use case is that we want to know an average number for all/some customers. To do this, we run a subquery grouped by customer and then calculate the average in a surrounding query. I hope that the query planner will eventually become smart enough to use the GROUP BY clause to distribute this subquery to the different partitions.
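A sketch of that layout under the new syntax (table names and customer IDs invented):

    -- One list partition per customer (or per group of customers).
    CREATE TABLE orders (
        customer_id int NOT NULL,
        total       numeric
    ) PARTITION BY LIST (customer_id);

    CREATE TABLE orders_cust_1 PARTITION OF orders FOR VALUES IN (1);
    CREATE TABLE orders_cust_2 PARTITION OF orders FOR VALUES IN (2);

    -- With partition pruning, this should only touch orders_cust_1.
    SELECT avg(total) FROM orders WHERE customer_id = 1;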