Posted by u/saisrirampur 2 years ago
Launch HN: PeerDB (YC S23) – Fast, Native ETL/ELT for Postgres
Hi HN! I'm Sai, the co-founder and CEO of PeerDB (https://www.peerdb.io/), a Postgres-first data-movement platform that makes moving data in and out of Postgres fast and simple. PeerDB is free and open source (https://github.com/PeerDB-io/peerdb), and we provide a Docker stack so you can try it out; there's a 5-minute quickstart here: https://docs.peerdb.io/quickstart.

For the past 8 years, working on Postgres on Azure at Microsoft and before that at Citus Data, I've worked closely with customers running Postgres at the heart of their data stack, storing anywhere from tens of GB to tens of TB of data.

This was when I got exposed to the challenges customers faced when moving data in and out of Postgres. Usually they would try existing ETL tools, fail, and decide to build in-house solutions. Common issues with these tools included painfully slow syncs (syncing hundreds of GB of data took days); flakiness and unreliability (frequent crashes, loss of data precision on the target, etc.); and limited features (lack of configurability, unsupported data types, and so on).

I remember a specific scenario where a tool didn't support something as simple as Postgres' COPY command to ingest data, which would have improved throughput by orders of magnitude. The customer and I reached out to that company to ask them to add the feature. They couldn't prioritize it because it wasn't easy for them: their tech stack was designed to support hundreds of connectors rather than to support a native Postgres feature.

After multiple such occurrences, I thought, why not build a tool specialized for Postgres and make the lives of many Postgres users easier? I reached out to my long-time buddy Kaushik, who was building operating systems at Google and had led data teams at SafeGraph and Palantir. We spent a few weeks building an MVP that streamed data in real time from Postgres to BigQuery. It was 10 times faster than existing tools and maintained data freshness of less than 30 seconds. We realized that there were many Postgres-native and infrastructural optimizations we could make to provide a rich data-movement experience for Postgres users. That's when we decided to start PeerDB!

We started with two main use cases: real-time Change Data Capture from Postgres (demo: https://docs.peerdb.io/usecases/realtime-cdc#demo) and real-time streaming of query results from Postgres (demo: https://docs.peerdb.io/usecases/realtime-streaming-of-query-...). The second demo shows PeerDB streaming a table with 100M rows from Postgres to Snowflake.

We implement multiple optimizations to provide a fast, reliable, feature-rich experience. For performance, we can parallelize the initial load of a large table while still ensuring consistency, so syncing hundreds of GB goes from days to minutes. We do this by logically partitioning the table based on internal tuple identifiers (CTID) and streaming those partitions in parallel (inspired by this DuckDB blog: https://duckdb.org/2022/09/30/postgres-scanner.html#parallel...).
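As a rough illustration of the approach (a sketch, not PeerDB's actual code; the table name and page boundaries are made up, and on Postgres 14+ such predicates can use a TID Range Scan):

    -- Each worker reads one ctid (page) range; ranges are derived from the
    -- table's size in pages and streamed concurrently.
    SELECT * FROM big_table
    WHERE ctid >= '(0,0)'::tid AND ctid < '(100000,0)'::tid;
    -- A shared snapshot (pg_export_snapshot + SET TRANSACTION SNAPSHOT) keeps
    -- the parallel reads consistent with one another.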

For CDC, we don't use Debezium; we handle replication natively instead: reading the slot, replicating the changes, keeping state, and so on. We made this choice mainly for flexibility. Staying native lets us use existing and future Postgres enhancements more effectively. For example, if the order of rows across tables on the target is not important, we can parallelize reading of a single slot across multiple tables and improve performance. Our architecture is designed for real-time syncs, which enables data freshness of a few tens of seconds even at large throughputs (10k+ tps).
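For reference, these are roughly the Postgres primitives a native approach builds on (a sketch with hypothetical slot and publication names; in practice the slot is consumed over the replication protocol rather than through these SQL functions):

    -- Publish the tables to replicate and create a pgoutput-based slot.
    CREATE PUBLICATION peerdb_pub FOR ALL TABLES;
    SELECT pg_create_logical_replication_slot('peerdb_slot', 'pgoutput');
    -- Peek at pending changes without consuming them (useful for inspection):
    SELECT * FROM pg_logical_slot_peek_binary_changes(
      'peerdb_slot', NULL, NULL,
      'proto_version', '1', 'publication_names', 'peerdb_pub');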

We have fault-tolerance mechanisms for reliability (https://blog.peerdb.io/using-temporal-to-scale-data-synchron...) and support multiple features, including log-based (CDC) and query-based streaming, efficient syncing of tables with large (TOAST) columns, configurable batching and parallelism to prevent OOMs and crashes, etc.
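On the TOAST point, for context: with logical decoding, an UPDATE that doesn't touch a TOASTed column normally doesn't carry that column's value, so pipelines either widen the replica identity (one common workaround, sketched below with a hypothetical table; it increases WAL volume) or handle the unchanged value downstream.

    -- Without this, UPDATEs omit unchanged TOASTed column values in the decoded stream.
    ALTER TABLE my_table REPLICA IDENTITY FULL;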

For usability, we provide a Postgres-compatible SQL layer for data movement. This makes the life of data engineers much easier: they can develop pipelines using a framework they are familiar with, without needing to deal with custom UIs and REST APIs, and they can use Postgres' hundreds of integrations to build and manage ETL. We extend Postgres' SQL grammar with a few new intuitive SQL commands to enable real-time data streaming across stores. Because of this, we were able to add a dbt integration via Dagster (in private preview) in a few hours! We expect data engineers to build similar integrations with PeerDB easily, and we plan to make this grammar richer as we evolve.
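To give a flavor of what that SQL layer looks like, here is a rough sketch (peer and mirror names are illustrative and connection details are elided; see the quickstart for the exact grammar). You connect to the PeerDB server with psql and then run commands like:

    CREATE PEER source_pg FROM POSTGRES WITH (
      host = 'source-host', port = 5432, user = 'postgres',
      password = 'password', database = 'app_db'
    );
    CREATE PEER target_sf FROM SNOWFLAKE WITH (...);  -- credentials elided
    -- Kick off real-time CDC from Postgres into Snowflake:
    CREATE MIRROR events_cdc FROM source_pg TO target_sf
      WITH TABLE MAPPING (public.events:public.events);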

PeerDB consists of the following components to handle data replication: (1) the PeerDB Server uses the pgwire protocol to mimic a PostgreSQL server and is responsible for query routing, generating gRPC requests to the Flow API; it relies on AST analysis to make informed routing decisions. (2) The Flow API is an API layer that handles gRPC commands and orchestrates the data sync operations. (3) Flow Workers execute the data read/write operations from the source to the destination; built to scale horizontally, they interact with Temporal for increased resilience. The types of data replication supported include CDC streaming replication and query-based batch replication. The workers do all of the heavy lifting and have data-store-specific optimizations.

Currently we support 6 target data stores (BigQuery, Snowflake, Postgres, S3, Kafka, etc.) for data movement from Postgres. This doc captures the current status of the connectors: https://docs.peerdb.io/sql/commands/supported-connectors.

As we spoke to more customers, we realized that getting data into PostgreSQL at scale is equally important and hard. For example, one of our customers wants to periodically sync data from multiple SQL Server instances (running on the edge) to their centralized Postgres database. Requests for Oracle-to-Postgres migrations are also common. So now we're also supporting source data stores with Postgres as the target (currently SQL Server and Postgres itself, with more to come).

We are actively working with customers to onboard them onto our self-hosted enterprise offering. Our fully hosted offering on the cloud is in private preview. We haven't yet decided on pricing. One common concern we've heard from customers is that existing tools are expensive and charge based on the amount of data transferred. To address this, we are considering a more transparent way of pricing, for example pricing based on provisioned hardware (CPU, memory, disk). We're open to feedback on this!

Check out our GitHub repo (https://github.com/PeerDB-io/peerdb) and give it a spin (5-minute quickstart: https://docs.peerdb.io/quickstart).

We want to provide the world's best data-movement experience for Postgres. We would love your feedback on the product experience, our thesis, and anything else that comes to mind. It would be super useful for us. Thank you!

zzztimbo · 2 years ago
How does column exclusion work? We would like to prevent PII columns from being replicated into BigQuery.
dgea005 · 2 years ago
Looks awesome! Maybe I missed it in the docs but I wonder how you solved the problem of a source table's initial snapshot when performing CDC (locking a large table on a production DB can be problematic).

Did you implement something like described here? https://netflixtechblog.com/dblog-a-generic-change-data-capt...

Lastly, I wonder what plugin you're using for the Postgres logical replication?

Congrats!

Edit: re-read it and saw the ctid solution for the initial snapshot, inspired by the DuckDB blog. Do you find querying by ctid okay performance-wise? As far as I remember it will use sequential scans.

saisrirampur · 2 years ago
Great question. It uses something called a Tid Scan, which essentially reads tuples directly from disk based on their physical address, and Tid Scans are very efficient. The snapshot approach captured in the blog I shared ensures consistency.
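For reference, you can check this with EXPLAIN (a sketch with a hypothetical table; on Postgres 14+ the planner reports a Tid Range Scan for ctid-range predicates rather than a sequential scan):

    EXPLAIN SELECT * FROM big_table
    WHERE ctid >= '(0,0)'::tid AND ctid < '(100000,0)'::tid;
    -- Expected top plan node: Tid Range Scan on big_table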
jgraettinger1 · 2 years ago
It does involve pinning the WAL while the backfill is happening, though, which is quite different from the incremental backfill achieved by DBLog.

The former can be faster, because you’re able to walk the table in physical storage order, but your DB cannot reclaim WAL segments while this happens (disk can fill).

The latter allows for resumable backfills that can be tuned to the available spare capacity of the source DB, and doesn’t cause WAL segments to back up, but must walk tables in logical order which can sometimes be slower (for example, if you key using a random UUID).

leononame · 2 years ago
Congratulations on the launch. Does PostgreSQL to PostgreSQL streaming using PeerDB have any benefit over just using Streaming Replication?

Could this be used as a sort of "live backup" of your data? (i.e. just making sure that data isn't lost if the server goes down completely; not thinking of HA)

Sorry if it's a bit of a stupid question, I realize it's not the main focus of PeerDB.

saisrirampur · 2 years ago
Postgres streaming replication is very robust and has been in Postgres for a long time. Logical replication/decoding (which PeerDB uses) is more recent, introduced in the last decade. However, streaming replication is harder to set up and manage and a bit restrictive: most cloud providers don't give access to the WAL, so you cannot use streaming replication to replicate data across cloud providers.

Sure, you can use PeerDB for backing up data, using either CDC-based or query-based replication, and both are pretty fast with PeerDB. You can have cold backups (storing data to S3, blob storage, etc.) or hot backups (another Postgres database). However, note that the replication is async and there is some lag (a few tens of seconds) on the target data store. So if you are expecting zero data loss, this won't be the right approach for backups/HA. With streaming replication, replication can be synchronous (the synchronous_commit setting), which helps with zero data loss.
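For completeness, the synchronous path on the streaming-replication side is configured roughly like this (a sketch; the standby name is hypothetical):

    -- On the primary, with a standby whose application_name is 'standby1':
    ALTER SYSTEM SET synchronous_standby_names = 'standby1';
    ALTER SYSTEM SET synchronous_commit = 'on';  -- or 'remote_apply' for stricter guarantees
    SELECT pg_reload_conf();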

antupis · 2 years ago
I would add GIS support rather soon, like ArcGIS and GeoPackage. You could find a really good niche in the GIS world because of how powerful PostGIS is, and there aren't that many tools that support ETL for PostGIS.
saisrirampur · 2 years ago
A niche in the GIS world is a very good idea. 100% agreed on how powerful (and mature) PostGIS is. I've worked with many customers for whom PostGIS was critical to their app. Thanks for taking the time to share this input. Very useful! We will iterate and explore this further.
cyanydeez · 2 years ago
QGIS and PostGIS are the primary open-source universal tools.
maptime · 2 years ago
I would say most orgs that have massive geo ETL problems use FME; love it or hate it, it's probably able to do it.
aeyes · 2 years ago
Excited to give this a try. I tested almost every existing solution and as you said, nothing worked at scale.

In the end I wrote my own tool specifically optimized for the use cases. It's been rock solid for years, so I know that better solutions are possible. But it has its limitations and little flexibility, and I wouldn't want to try an initial load of a table with 1B rows.

It’s great to see someone with experience with large high throughput instances working on this.

saisrirampur · 2 years ago
Thank you for the above comment. This is exactly why we are building PeerDB. Let us know when you test PeerDB out; we would love to collaborate and help as much as possible. It will be great feedback for us too!
mritchie712 · 2 years ago
Good stuff! It's pretty nuts Snowflake doesn't offer an integration like this out of the box. BigQuery kind of supports this[1], but it's not easy to set up or monitor.

Good luck!

1 - https://www.youtube.com/watch?v=ZNvuobLvL6M

saisrirampur · 2 years ago
Thanks for the comment! Yep, Google provides Datastream. We tried it out and the experience was pretty good! However, it is very much tied to the GCP ecosystem. With PeerDB, our goal is to be open: community-driven rather than cloud-driven. Also, just to mention, as called out in the post there are more features apart from CDC (query-based streaming, Postgres as the target, etc.) that we will keep adding to help Postgres users.
samaysharma · 2 years ago
Nice! I like the focus on Postgres. Most ETL tools end up trying to build for a larger matrix of source and targets which limits using database specific features and optimizations. Is the CDC built primarily on top of the logical replication / logical decoding infrastructure in Postgres? If so, what are the limitations in that infrastructure which you'd like to see addressed in future Postgres versions?
saisrirampur · 2 years ago
That is a really good question! A few that come to mind:

1/ logical replication support for schema (DDL) changes

2/ A native logical replication plugin (other than wal2json) that is easier to read from the client side. pgoutput is fast, but reading/parsing it on the client side is not as straightforward.

3/ Improved decoding performance. I've observed pgoutput cap at 10-15k changes per second for an average use case, and that's after a good amount of tuning (e.g. logical_decoding_work_mem). Enabling larger throughput, 50k+ tps, would be great. This is important for Postgres considering the diverse variety of workloads users are running; for example, at Citus I saw customers doing 500k rps (with COPY), and I am not sure logical replication can handle those cases.

4/ Logical replication slots in remote storage. One big risk with slots is that they can grow in size (if not read promptly) and use up storage on the source; allowing slots to be shipped to remote storage would really help. I think Oracle allows something like this, but I'm not 100% sure. (A monitoring query for slot growth is sketched below.)

5/ Logical decoding on standbys. It is coming in Postgres 16! We will aim to support it in PeerDB right after it is available.

I can think of many more, but these are the top ones that come to mind!
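Regarding 4/, here is the kind of query one can use today to keep an eye on slot growth, i.e. the WAL a slot retains on the source (a sketch; only standard system views are involved, nothing PeerDB-specific):

    SELECT slot_name, active,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
    FROM pg_replication_slots;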

furkansahin · 2 years ago
Congratulations on the launch, Sai. Having worked with him over the years, I know that Sai knows what Postgres migration means. I have seen him deal with countless migrations in and out of our services. I am excited to see what they have built.