> If it takes a long time to copy a database and it gets updated midway through, rsync may give me an invalid database file. The first half of the file is pre-update, the second half is post-update, and they don’t match. When I try to open the database locally, I get an error
Of course! You can't copy the file of a running, active db receiving updates, that can only result in corruption.
> You can't copy the file of a running, active db receiving updates, that can only result in corruption
To push back against "only" -- there is actually one scenario where this works. Copying a file or a subvolume on Btrfs or ZFS can be done atomically, so if it's an ACID database or an LSM tree, in the worst case it will just roll back. Of course, if it's multiple files you have to take care to wrap them in a subvolume so that all of them are copied in the same transaction; simply using `cp --reflink=always` won't do.
Possibly freezing the process with SIGSTOP would yield the same result, but I wouldn't count on that
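A minimal sketch of the Btrfs variant (paths are made up; the same idea works with `zfs snapshot`):

```bash
# Take an atomic, read-only snapshot of the subvolume holding the database,
# copy from the snapshot, then throw the snapshot away.
btrfs subvolume snapshot -r /srv/app /srv/app-snap
rsync -a /srv/app-snap/ backup-host:/backups/app/   # includes any -wal/-shm files
btrfs subvolume delete /srv/app-snap
```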
It can't be done without fs specific snapshots - otherwise how would it distinguish between a cp/rsync needing consistent reads vs another sqlite client wanting the newest data?
While I run and love litestream on my own system, I also like that they have a pretty comprehensive guide on how to do something like this manually, via built-in tools: https://litestream.io/alternatives/cron/
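The gist of that cron approach is roughly this (a sketch, not the guide's exact script; paths and the bucket name are made up):

```bash
#!/bin/bash
set -euo pipefail
# .backup takes a consistent online copy even while the app keeps writing.
sqlite3 /var/lib/app/db.sqlite3 ".backup '/tmp/db-backup.sqlite3'"
gzip -f /tmp/db-backup.sqlite3
aws s3 cp /tmp/db-backup.sqlite3.gz "s3://my-backup-bucket/db-$(date +%F).sqlite3.gz"
```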
Litestream is really cool! I'm planning to use it to back up and restore my SQLite at the container level, just like that ex-Google guy who started a small KVM startup and had a flood in his warehouse while on vacation, if I'm not mistaken. I would link the perfect guide he wrote here, but there's 0 chance I'll find it. If you understand the reference, please post the link.
Despite the beta label and lack of a 1.x release, I would consider the project pretty stable. We've used it in production for over 18 months to support an offline-first point of sale system. We haven't had any issues with Litestream.
>You can't copy the file of a running, active db receiving updates, that can only result in corruption
There is a slight 'well akshully' on this. A DB flush and FS snapshot where you copy the snapshotted file will allow this. MSSQL VSS snapshots would be an example of this.
Similarly you can rsync a Postgres data directory safely while the db is running, with the caveat that you likely lose any data written while the rsync is running. And if you want that data, you can get it with the WAL files.
It’s been years since I needed to do this, but if I remember right, you can clone an entire pg db live with `pg_backup_start()`, rsync the data directory, call `pg_backup_stop()`, and rsync the WAL files written since the backup started.
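Something like this, assuming PostgreSQL 15+ function names (older versions call them pg_start_backup()/pg_stop_backup()); paths and host are examples:

```bash
# pg_backup_start()/pg_backup_stop() must run in the same session, so the rsync
# is launched from inside psql with \! while the backup is open.
psql <<'SQL'
SELECT pg_backup_start('rsync-clone');
\! rsync -a /var/lib/postgresql/16/main/ replica-host:/var/lib/postgresql/16/main/
SELECT * FROM pg_backup_stop();
SQL
# Then ship the WAL written during the copy so the replica can replay it.
rsync -a /var/lib/postgresql/16/main/pg_wal/ replica-host:/var/lib/postgresql/16/main/pg_wal/
```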
If the corruption is detectable and infrequent enough for your purposes, then it does work, with a simple “retry until success” loop. (That’s how TCP works, for example.)
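For SQLite that loop can lean on the built-in integrity check, e.g. (paths made up):

```bash
# Retry the copy until SQLite itself says the result is structurally sound.
# (As noted below, this catches structural corruption, not every inconsistency.)
until rsync -a server:/data/app.db /tmp/app.db &&
      [ "$(sqlite3 /tmp/app.db 'PRAGMA integrity_check;')" = "ok" ]; do
  echo "copy was corrupt, retrying..." >&2
  sleep 5
done
```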
Not all corruption is detectable. You could make a copy during a transaction where only a subset of the transaction's saved pages are persisted but all branch & leaf pages are pointed to correctly. That would give you a state of the database that never actually existed and break atomicity.
> Of course! You can't copy the file of a running, active db receiving updates, that can only result in corruption
Do people really not understand how file storage works? I cannot rightly apprehend the confusion of ideas that would produce an attempt to copy a volatile database without synchronization and expect it to work.
The confusion of ideas here is understandable IMO: people assume everything is atomic. Databases of course famously have ACID guarantees. But it's easy for people to assume copying is also an atomic operation. Honestly if someone works too much with databases and not enough with filesystems it's a mistake easily made.
How to copy databases between computers? Just send a circle and forget about the rest of the owl.
As others have mentioned, an incremental rsync would be much faster, but what bothers me the most is that he claims sending SQL statements is faster than sending the database while COMPLETELY omitting the fact that you have to execute those statements. And then run /optimize/. And then run /vacuum/.
Currently I have a scenario in which I have to "incrementally rebuild *" a database from CSV files. In my particular case recreating the database from scratch is the better option, but even with heavy optimization it still takes half an hour just to run the batch inserts into an empty in-memory database, create the indexes, etc.
For my use case (recreating in-memory from scratch) it basically boils down to three points: (1) journal_mode = off (2) wrapping all inserts in a single transaction (3) indexes after inserts.
For whatever it's worth, I'm getting 15M inserts per minute on average, and topping out around 450k/s for a trivial relationship table, on a stock Ryzen 5900X using the built-in sqlite from Node.js.
It's worth noting that the data in that benchmark is tiny (28MB). While this varies between database engines, "one transaction for everything" means keeping some kind of allocation alive for the duration.
The optimal transaction size is difficult to calculate so should be measured, but it's almost certainly never beneficial to spend multiple seconds on a single transaction.
There will also be weird performance changes when the size of data (or indexed data) exceeds the size of main memory.
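A crude way to experiment with batch size, assuming one INSERT statement per line in a generated dump (the 50k threshold is arbitrary and is exactly the thing you'd measure):

```bash
# Split one giant transaction into COMMIT/BEGIN batches of 50,000 statements.
{
  echo "BEGIN;"
  awk '{ print } NR % 50000 == 0 { print "COMMIT; BEGIN;" }' inserts.sql
  echo "COMMIT;"
} | sqlite3 /data/output.db
```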
Hilarious, 3000+ votes for a Stack Overflow question that's not a question. But it is an interesting article. Interesting enough that it gets to break all the rules, I guess?
As with any optimization, it matters where your bottleneck is. Sounds like theirs is bandwidth while CPU/disk IO is plentiful, since they mention that downloading a 250MB database takes a minute, whereas I just grabbed a 2GB SQLite test database from a work server in 15 seconds thanks to 1Gbps fiber.
30 minutes seems long. Is there a lot of data? I've been working on bootstrapping sqlite dbs off of lots of JSON data, and by holding a list of values and then inserting 10k rows at a time, I've found a good perf sweet spot where I can insert plenty of rows (millions) in minutes. I had to use some tricks with bloom filters and LRU caching, but I can build a 6 gig db in like 20ish minutes now.
I create a new in-mem db, run the schema, and then import every table in one single transaction (in my testing it doesn't matter whether it's a single batch or multiple single inserts, as long as they are part of a single transaction).
I do a single string replacement for every CSV line to handle an edge case. This results in roughly 15 million inserts per minute (give or take, depending on table length and complexity). 450k inserts per second is a magic barrier I can't break.
I then run several queries to remove unwanted data, trim orphans, add indexes, and finally run optimize and vacuum.
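A rough sketch of that pipeline with the sqlite3 CLI (file, table and index names are made up; the per-line string replacement and the real cleanup queries are stand-ins):

```bash
sqlite3 :memory: <<'SQL'
-- (1) no journal needed for a scratch rebuild
PRAGMA journal_mode = OFF;
-- (2) one big transaction around all the imports
BEGIN;
.mode csv
.import /data/items.csv items
.import /data/item_links.csv item_links
COMMIT;
-- cleanup, then (3) indexes only after the inserts
DELETE FROM item_links WHERE item_id NOT IN (SELECT id FROM items);
CREATE INDEX idx_item_links_item ON item_links(item_id);
PRAGMA optimize;
-- persist the finished in-memory database to disk
VACUUM INTO '/data/output.db';
SQL
```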
Millions of rows in minutes sounds slow, unless your tables have a large number of columns. A good rule is that SQLite's insertion performance should be at least 1% of the sustained max write bandwidth of your disk; preferably 5% or more. On my last bulk table insert I was seeing 20%+ sustained; that came to ~900k inserts/second for an 8-column INT table (small integers).
> The protocol is for the replica to send a cryptographic hash of each of its pages over to the origin side, then the origin sends back the complete content of any page for which the hash does not match.
Yeah, but unfortunately the SQLite team doesn't include that tool with their "autotools" tarball, which is what most distros (and brew) use to package SQLite. The only way to use the tool is to compile it yourself.
The recently released sqlite3_rsync utility uses a version of the rsync algorithm optimized for the internal structure of a SQLite database. It compares the internal data pages efficiently, then syncs only the changed or missing pages.
Nice tricks in the article, but you can more easily use the builtin utility now :)
sqlite3_rsync can only be used in WAL mode. A further constraint of WAL mode is that the database file must be stored on a local disk. Clearly, that's what you'd want almost all the time, but for the times it's not possible this utility won't work.
I just checked in an experimental change to sqlite3_rsync that allows it to work on non-WAL-mode database files, as long as you do not use the --wal-only command-line option. The downside is that the origin database will block all writers while the sync is going on, and the replica database will block both readers and writers during the sync, because doing otherwise requires WAL mode. Nevertheless, being able to sync DELETE-mode databases might well be useful, as you observe.
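For reference, basic usage looks like this (the binary has to be installed on both ends, and the remote side is reached over ssh; paths and host are examples):

```bash
# Push a local database to a replica on another host:
sqlite3_rsync /var/lib/app/db.sqlite3 backup-host:/backups/db.sqlite3

# Or pull, by swapping origin and replica:
sqlite3_rsync backup-host:/backups/db.sqlite3 /var/lib/app/db.sqlite3
```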
The main point is to skip the indices, which you have to do pre-compression.
When I do stuff like this, I stream the dump straight into gzip. (You can usually figure out a way to stream directly to the destination without an intermediate file at all.)
Plus this way it stays stored compressed at its destination. If your purpose is backup rather than a poor man's replication.
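For example (host and paths made up):

```bash
# Stream the dump straight over ssh, compressed, with no intermediate file:
sqlite3 /var/lib/app/db.sqlite3 .dump | gzip -c | ssh backup-host 'cat > /backups/db.sql.gz'

# Or rebuild the database on the far side in the same pipeline:
sqlite3 /var/lib/app/db.sqlite3 .dump | gzip -c | ssh backup-host 'gunzip -c | sqlite3 /backups/db-copy.sqlite3'
```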
The main point was decreasing the transfer time - if rsync -z makes it short enough, it doesn't matter if the indices are there or not, and you also skip the step of re-creating the DB from the text file.
Depends. Run a benchmark on your own hardware/network. ZFS uses in-flight compression because CPUs are generally faster than disks. That may or may not be the case for your setup.
What? Compression is absolutely essential throughout computing as a whole, especially as CPUs have gotten faster. If you have compressible data sent over the network (or even on disk / in RAM) there's a good chance you should be compressing it. Faster links have not undercut this reality in any significant way.
He absolutely should be doing this, because by using rsync on a compressed file he's giving up the whole point of using rsync, which is the rolling-checksum-based algorithm that lets it transfer only the diffs.
The VACUUM command with an INTO clause is an alternative to the backup API for generating backup copies of a live database. The advantage of using VACUUM INTO is that the resulting backup database is minimal in size and hence the amount of filesystem I/O may be reduced.
It's cool but it does not address the issue of indexes, mentioned in the original post. Not carrying index data over the slow link was the key idea. The VACUUM INTO approach keeps indexes.
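For completeness, the one-liner form (names made up):

```bash
# Produces a compact, consistent single-file copy (indexes included, as noted above).
sqlite3 /var/lib/app/db.sqlite3 "VACUUM INTO '/tmp/backup.sqlite3'"
```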
A text file may be inefficient as is, but it's perfectly compressible, even with primitive tools like gzip. I'm not sure the SQLite binary format compresses equally well, though it might.
> A text file may be inefficient as is, but it's perfectly compressible, even with primitive tools like gzip. I'm not sure the SQLite binary format compresses equally well, though it might.
I hope you're saying that because of indexes? I think you may want to revisit how compression works to fix your intuition. Text+compression will always be larger and slower than equivalent binary+compression, assuming the text and binary represent the same contents. Why? Binary is less compressible as a percentage, but it starts off smaller in absolute terms, which results in a smaller absolute output. A way to think about it is information theory: binary generally represents the data more compactly already because the structure lives in the code. Compression is about replacing common structure with what looks like noise, and it works better if there's a lot of redundant structure. However, while text has a lot of redundant structure, that's actually bad for the compressor, because it has to find that structure and process more data to do so. Additionally, it is using generic mathematical techniques to remove that structure, which are generically optimal but not as optimal as removing that structure by hand via a binary format.
There's some nuance here because the text represents slightly different things than the raw binary SQLite file (how to restore the data in the db vs. the precise relationships + data structures for allowing insertion/retrieval). But I'd still expect it to end up smaller compressed for non-trivial databases.
In DuckDB you can do the same but export to Parquet, this way the data is an order of magnitude smaller than using text-based SQL statements. It's faster to transfer and faster to load.
That's not it. This only exports the table's data, not the database. You lose the index, comments, schemas, partitioning, etc... The whole point of OP's article is how to export the indices in an efficient way.
Also I wonder how big your test database is and what its schema looks like. For large tables Parquet is way more efficient than a 20% reduction.
If there are UUIDs, they're 36 bytes each in text mode and 16 bytes as binary in Parquet. And then if they repeat you can use a dictionary in your Parquet to store the 16 bytes only once.
It's also worth trying to use brotli instead of zstd if small files is your goal.
SQLite has a session extension, which will track changes to a set of tables and produce a changeset/patchset that can patch a previous version of an SQLite database.
I have yet to see a single SQLite binding supporting this, so it’s quite useless unless you’re writing your application in C, or are open to patching the language binding.
In one of my projects I have implemented my own poor man’s session by writing all the statements and parameters into a separate database, then sync that and replay. Works well enough for a ~30GB database that changes by ~0.1% every day.
Well, my upcoming Elixir wrapper of a Rust wrapper of SQLite (heh, I am aware how it sounds) will support it. I am pretty sure people do find it useful and would use it. If not, the 1-2 days of hobby coding to deliver it are not something I would weep over.
I have updated the Lua binding to support the session extension (http://lua.sqlite.org/home/timeline?r=session) and it's been integrated into the current version of cosmopolitan/redbean. This was partially done to support application-level sync of SQLite DBs, however this is still a work in progress.
For replicating sqlite databases safely there is
https://github.com/benbjohnson/litestream
A reminder that litestream can run over plain old SFTP[1] which means you can stream database replication to just about any UNIX endpoint over SSH.
I have a favorite[2] but any SFTP server will do ...
[1] https://github.com/benbjohnson/litestream/issues/140
[2] https://www.rsync.net/resources/notes/2021-q3-rsync.net_tech...
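For anyone who hasn't tried it, the basic shape is something like this (bucket and paths are made up; the SFTP replica type from the linked issue takes an sftp:// URL in the same position):

```bash
# Continuously replicate a local SQLite file to S3-compatible storage:
litestream replicate /var/lib/app/db.sqlite3 s3://my-backup-bucket/db

# Restore the latest snapshot+WAL onto a fresh machine:
litestream restore -o /var/lib/app/db.sqlite3 s3://my-backup-bucket/db
```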
https://mtlynch.io/litestream/
And here's the flooding story:
https://mtlynch.io/solo-developer-year-6/#the-most-terrifyin...
Sidenote: I still use Litestream in every project where I use SQLite.
Is Litestream still an active project?
> MSSQL VSS snapshots would be an example of this
https://learn.microsoft.com/en-us/sql/relational-databases/b...
Wot? There are multiple ways of snapshotting/checkpointing, starting at the virtualization level and working on down the stack through the application level.
https://www.sqlite.org/lang_vacuum.html
> I cannot rightly apprehend the confusion of ideas that would produce an attempt to copy a volatile database without synchronization and expect it to work.
I see you are a man of culture.
It's a very good writeup on how to do fast inserts in sqlite3
CREATE INDEX then INSERT vs. INSERT then CREATE INDEX
i.e. they only time INSERTs, not the CREATE INDEX after all the INSERTs.
Here's a quite recent log (on a stock Ryzen 5900X):
It works at the page level:
> The protocol is for the replica to send a cryptographic hash of each of its pages over to the origin side, then the origin sends back the complete content of any page for which the hash does not match.
I blogged about how it works in detail here: https://nochlin.com/blog/how-the-new-sqlite3_rsync-utility-w...
sqlite3_rsync is now built into the rsync.net platform.
… just added last week and not rolled out in all regions but … all initial users reported it worked exactly as they expected it to.
If you are able, please try out this enhancement and let me know if it solves your problem. See <https://sqlite.org/src/info/2025-05-01T16:07Z> for the patch.
sqlite transaction- and WAL-aware rsync with inflight compression.
For every other network, you should compress as you are likely dealing with multiple tenants that would all like a piece of your 40Gbps bandwidth.
You can modify the database before vacuuming by making a new in-memory database, copying selected tables into it, and then vacuuming that to disk.
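A sketch of that, done entirely from the CLI (table and file names are made up; note that CREATE TABLE ... AS SELECT doesn't carry over indexes or constraints):

```bash
sqlite3 :memory: <<'SQL'
ATTACH '/var/lib/app/db.sqlite3' AS src;
-- copy only the tables (and rows) you actually want
CREATE TABLE users  AS SELECT * FROM src.users;
CREATE TABLE orders AS SELECT * FROM src.orders WHERE created_at >= '2024-01-01';
DETACH src;
-- write the trimmed in-memory database out as a compact file
VACUUM INTO '/tmp/subset.sqlite3';
SQL
```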
https://duckdb.org/docs/stable/sql/statements/export.html
You'd want to do this:
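Presumably something along these lines, per the linked docs (file and directory names are made up):

```bash
duckdb my.duckdb "EXPORT DATABASE 'export_dir' (FORMAT PARQUET, COMPRESSION ZSTD);"

# And on the receiving side:
duckdb fresh.duckdb "IMPORT DATABASE 'export_dir';"
```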
https://www.sqlite.org/sessionintro.html
https://github.com/crawshaw/sqlite
https://github.com/eatonphil/gosqlite/
Ended up with the latter, but did have to add one function binding in C, to inspect changesets.
Idea for an offline-first app, where each app install can pull a changeset and apply it to its local db.