The single-writer limitation in SQLite is per-database, not per-connection. You can shard your SQLite tables into multiple database files and query across all of them from a single connection.
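As an illustration of that approach, here's a minimal sketch with Python's built-in sqlite3 module (the shard file names and the orders table are made up for the example):

```python
import sqlite3

# One connection, several database files: each shard holds the same table.
conn = sqlite3.connect("shard_0.db")  # this file becomes the "main" database
conn.execute("ATTACH DATABASE 'shard_1.db' AS shard_1")
conn.execute("ATTACH DATABASE 'shard_2.db' AS shard_2")

# Normally the shard files already exist; created here so the sketch runs.
for db in ("main", "shard_1", "shard_2"):
    conn.execute(f"CREATE TABLE IF NOT EXISTS {db}.orders (user_id INTEGER, total REAL)")

# A single query can span every attached file (joins work the same way).
rows = conn.execute("""
    SELECT user_id, total FROM main.orders
    UNION ALL
    SELECT user_id, total FROM shard_1.orders
    UNION ALL
    SELECT user_id, total FROM shard_2.orders
""").fetchall()
print(len(rows))
```

Writes still serialize per file, but each shard gets its own writer, which is the point of sharding this way.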
I agree that "the single-writer limitation isn't just a theoretical concern", but it's also solvable without forking SQLite. ulimit's the limit! If your goal is resource maximization of a given computer, though, Postgres is likely a better fit.
> You can shard your SQLite tables into multiple database files and query across all of them from a single connection.
You mean using the ATTACH statement, right? If you use WAL mode, then you cannot get transaction safety / ACID across attached databases with ATTACH [0]:
> If the main database is ":memory:" or if the journal_mode is WAL, then transactions continue to be atomic within each individual database file. But if the host computer crashes in the middle of a COMMIT where two or more database files are updated, some of those files might get the changes where others might not.
Moreover, ATTACH does not support more than 125 attached databases, so that limits the shards to 125. [1]
ATTACH does not solve the concurrency problems. That's why SQLite also has an experimental BEGIN CONCURRENT branch.
[0] - https://www.sqlite.org/lang_attach.html
[1] - https://www.sqlite.org/limits.html
Joins and Transactions are a pretty big part of SQL. I'm no expert, but if my quick search results are right, both are lost in the separate-file-per-table scenario.
Joins: https://simonwillison.net/2021/Feb/21/cross-database-queries...
Transactions: https://www.sqlite.org/atomiccommit.html#_multi_file_commit
This stuff moves slowly, though (that announcement was almost 3 years ago!), so I'm glad to see Turso giving us options today.
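To make the transaction side concrete, a small sketch in Python's sqlite3 (file and table names are hypothetical); the difference between rollback-journal and WAL mode is the one the SQLite docs quoted above describe:

```python
import sqlite3

# Hypothetical files; the point is one COMMIT touching two database files.
conn = sqlite3.connect("main.db")
conn.execute("ATTACH DATABASE 'shard_1.db' AS shard_1")
conn.execute("CREATE TABLE IF NOT EXISTS main.users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS shard_1.orders (user_id INTEGER, total REAL)")

# With the default rollback journal, SQLite coordinates the files with a
# super-journal so both commit or neither does. In WAL mode each file is
# atomic on its own, but a crash mid-COMMIT can leave only one file updated.
with conn:  # the context manager wraps these statements in one transaction
    conn.execute("INSERT INTO main.users (name) VALUES ('alice')")
    conn.execute("INSERT INTO shard_1.orders (user_id, total) VALUES (1, 9.99)")
conn.close()
```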
Kind of cool to see work on this. I do hope that the resulting db file is still binary-compatible with SQLite 3 in whatever direction Turso moves, though... Rust or not.
I've been advocating with several projects over recent years to get SQLite3 adopted as an archive/export/interchange format for data. Need to archive 2019 data from the database? Dump it into a SQLite db with roughly the same schema. Need to pass around multiple CSVs' worth of data dumps? Use a SQLite file instead.
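A rough sketch of the CSV case with just the standard library (the file name and columns are invented for the example):

```python
import csv
import sqlite3

# Hypothetical file and column names; load one CSV dump into a SQLite file
# that can be handed around instead of the raw CSV.
conn = sqlite3.connect("export-2019.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, placed_at TEXT, total REAL)")

with open("orders-2019.csv", newline="") as f:
    reader = csv.DictReader(f)  # assumes a header row: order_id,placed_at,total
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :placed_at, :total)",
        reader,
    )
conn.commit()
conn.close()
```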
As a secondary, I wonder if it's possible to actively use a SQLite interface against a database file on S3, assuming a single server/instance is the actual active connection.
For read-write it's a terrible idea. Object storage assumes objects are immutable. There may be some support for appends, but modifying the middle of an object in place involves copying the entire thing.
What is on the verge of becoming viable is to use Litestream to do asynchronous replication to S3, and have read replicas that stream the data directly from S3. But what's stored in S3 isn't a database file, but a format created for the purpose called LTX.
ncruces helped me with some VFS code I wrote. It uses [zstd seekable](https://github.com/jtarchie/sqlitezstd) for reading a file. I thought it would be really well suited for S3:
- Support for HTTP range queries
- "Fast" read times
- No disk required
I was wrong.
It turns out that for specific SQL queries, it might be fine, but not fast.
For queries that do aggregations, like `COUNT`, sqlite loads the whole database anyway.
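For what it's worth, EXPLAIN QUERY PLAN shows why: the aggregation is a SCAN over a whole b-tree, while an indexed lookup is a SEARCH that touches only a few pages. A small sketch (table and index are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT)")
conn.execute("CREATE INDEX idx_events_kind ON events (kind)")

# Aggregation: the plan is a SCAN, so SQLite walks an entire b-tree page by
# page, which over a range-request VFS means fetching most of the file.
print(conn.execute("EXPLAIN QUERY PLAN SELECT COUNT(*) FROM events").fetchall())

# Point lookup: a SEARCH that only touches a handful of index and row pages.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE kind = ?", ("click",)
).fetchall())
```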
> As a secondary, I wonder if it's possible to actively use a SQLite interface against a database file on S3, assuming a single server/instance is the actual active connection.
You could achieve this today using one of the many adapters that turn S3 into a file system, without needing to wait for any SQLite buy-in.
For example, from Go, you could use my driver, and point it to a database file stored in S3 using this: https://pkg.go.dev/github.com/ncruces/go-sqlite3/vfs/readerv...
I also got sqlite-s3vfs working from Python a few months ago: https://simonwillison.net/2025/Feb/7/sqlite-s3vfs/
Both of these are very much read-only mechanisms though.
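If you go the file-system-adapter route, a minimal read-only sketch with the built-in sqlite3 module, assuming the bucket is already mounted at a made-up /mnt/s3 path (e.g. via s3fs) and nothing else writes the file:

```python
import sqlite3

# mode=ro opens the file read-only; immutable=1 tells SQLite the file cannot
# change underneath it, so it skips locking and change detection, which
# avoids a lot of round trips over a slow mounted file system.
conn = sqlite3.connect("file:/mnt/s3/archive-2019.db?mode=ro&immutable=1", uri=True)

# "orders" is a hypothetical table in that archived database.
for (total,) in conn.execute("SELECT COUNT(*) FROM orders"):
    print(total)
```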
I've been thinking Turso in the browser would be some eventual addition, something the Rust rewrite would eventually unlock, and that it would require considerable extra backing to make happen. But here it is now!
Even in the “low-performing” zone of single-threaded SQLite and Turso, we are still talking about 50k rows per second and 1000 microseconds, aka 1 ms. It is insanely fast; 1 ms is just the round-trip to my Postgres on RDS. It is amazing that SQLite is so awesome. I understand it is not for every use case, but still awesome.
How do the MVCC / row-versioning changes that accumulate get shared among multiple processes? Is it another file, like the WAL file? Or is it just local memory as part of the client? I guess since everything has to commit or entirely roll back at the end, it doesn't need to be shared.
How does this compare with hctree then? It sounds similar, both using an MVCC model.
Really cool though, love the technical details and would be interested to see more. I feel like there is certainly a space for sqlite + fancier features. Though it would break backwards compatibility, I've always wondered what kind of performance gains could be had by using a different varint implementation. And enforcing or always using "strict" for smarter column layouts.
Also would be really cool to have indexed views.
And columnstore seems like a logical direction as well.
TursoDB aims to be fully compatible with sqlite, so files you create with tursodb can be read by sqlite and vice-versa. Like sqlite, turso is in-process, so it runs alongside your application, which is quite different from postgres' client-server architecture.
The only thing turso has in common with postgres is MVCC, which is a rather standard concurrency control model across modern databases. Idk if I answered your question :)
Can't use it locally (yet?) but it's definitely an interesting move in the space. For my personal projects lately I've been defaulting to sqlite in dev, and having a database wrapper layer to use something else in prod.
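A minimal sketch of that wrapper pattern; the APP_ENV and DATABASE_URL variables and the psycopg2 driver are just placeholders for whatever you actually use in prod:

```python
import os
import sqlite3

def get_connection():
    """Return a DB-API connection: SQLite locally, something else in prod."""
    if os.environ.get("APP_ENV") == "prod":
        import psycopg2  # placeholder for whatever prod driver you use
        return psycopg2.connect(os.environ["DATABASE_URL"])
    return sqlite3.connect("dev.db")

# Application code sticks to the DB-API surface so either backend works.
conn = get_connection()
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
```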
I'm imagining some insane replication behind the scenes, where every write is happening concurrently on a different SQLite DB, and then merged together sequentially into some master DB.
More specifically, I didn't see any commits about multiple simultaneous writers having landed in main, and it's not documented as a feature in the release notes https://github.com/tursodatabase/turso/commits
neither the prerelease (https://github.com/tursodatabase/turso/releases/tag/v0.3.0-p...) nor their latest release (https://github.com/tursodatabase/turso/releases/tag/v0.2.2).
The core/mvcc directory does have commits though, so maybe you can.
https://github.com/tursodatabase/turso/commits/main/core/mvc...