Many interesting things here. For instance, I've been hearing a lot about how fast Java is, that it can be as fast as C++, and then I see this:
> But after a few weeks, it compiled and the results surprised us. The code was 10x faster than our carefully tuned Kotlin implementation – despite no attempt to make it faster. To put this in perspective, we had spent years incrementally improving the Kotlin version from 2,000 to 3,000 transactions per second (TPS). The Rust version, written by Java developers who were new to the language, clocked 30,000 TPS.
I feel like there is more to this, like some kind of bottleneck: memory footprint, or some IO overhead?

Blocking vs. nonblocking IO can explain these numbers.
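To make that concrete: a blocking, thread-per-connection design parks a whole OS thread on every idle client, while an async runtime multiplexes thousands of connections over a few threads, with no GC pauses stalling all of them at once. A minimal sketch of the nonblocking side, assuming tokio (the post doesn't say what either implementation actually used):

```rust
// Sketch of task-per-connection, nonblocking IO. Not from the DSQL
// codebase; it only illustrates why the IO model alone can move
// throughput numbers this much.
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _) = listener.accept().await?;
        // Each connection becomes a cheap cooperative task rather than
        // an OS thread, so tens of thousands of mostly idle
        // connections cost very little.
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            loop {
                match socket.read(&mut buf).await {
                    Ok(0) | Err(_) => break, // peer closed or error
                    Ok(n) => {
                        // Echo back; a real data plane would do work here.
                        if socket.write_all(&buf[..n]).await.is_err() {
                            break;
                        }
                    }
                }
            }
        });
    }
}
```

A thread-per-connection version of the same loop tends to hit scheduler and memory limits long before the network does.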
> Our conclusion was to rewrite our data plane entirely in Rust.
The point is well taken: figuring it out is not worth it if you can just rewrite, or if you have greenfield projects.
> These extension points are part of Postgres’ public API, allowing you to modify behavior without changing core code
Also interesting. So PostgreSQL has evolved to the point that it has a stable API for extensibility? That's great for the project: maintain a modular design and some stable APIs, and you can let people mix and match and reduce duplication of effort.

Not across major versions, no. I seriously doubt we will ever make promises around that. It would hamper development way too much.

I see, then they're probably saying they found the internal APIs that are just more naturally stable, perhaps because they are close to the APIs used for extensions.
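For context on what those extension points look like: in Postgres's C headers they are function-pointer hooks (planner_hook, ExecutorRun_hook, and so on), and the convention is that an extension saves the previous value, installs its own, and chains. A sketch of that pattern in Rust terms (illustrative only, not a Postgres binding or the pgrx API):

```rust
// Sketch of the hook-chaining convention Postgres uses in C via
// function pointers such as planner_hook. Illustration of the
// pattern only, not a Postgres binding.

type PlannerHook = Box<dyn Fn(&str) -> String>;

struct Core {
    planner_hook: Option<PlannerHook>,
}

impl Core {
    fn standard_planner(query: &str) -> String {
        format!("standard plan for `{query}`")
    }

    fn plan(&self, query: &str) -> String {
        // Core code always calls through the hook slot if one is set,
        // so extensions can change behavior without patching core.
        match &self.planner_hook {
            Some(hook) => hook(query),
            None => Self::standard_planner(query),
        }
    }
}

fn main() {
    let mut core = Core { planner_hook: None };

    // An "extension" saves the previous hook and chains to it, so
    // several extensions can stack without knowing about each other.
    let prev = core.planner_hook.take();
    core.planner_hook = Some(Box::new(move |q| {
        let inner = match &prev {
            Some(p) => p(q),
            None => Core::standard_planner(q),
        };
        format!("logged: {inner}")
    }));

    println!("{}", core.plan("SELECT 1"));
}
```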
I understand that AWS did one TPC-C 95/5 read/write benchmark and got 700k transactions for 100k DPUs, but that’s not nearly enough context.
There either needs to be a selection of other benchmark-based pricing (especially for a primarily 50/50 read/write load), actual information on how a DPU is calculated, or a way to return DPU per query executed, not just an aggregate CloudWatch figure.
We were promised DSQL pricing similar to DynamoDB, and insofar as it's truly serverless (i.e. no committed pricing) they've succeeded, but one of the best parts of DynamoDB is absolute certainty on cost, even if that can sometimes be high.

That depends on whether it's On-Demand or Provisioned, even if they recently added On-Demand limits.

You still have absolute certainty. Read or write x amount of data and it will use exactly y R/WCUs.
It then just becomes a modeling problem allowing you to determine your costs upfront during design. That’s one of the most powerful features of the truly serverless products in AWS in my opinion.
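For anyone who hasn't done that modeling: DynamoDB's documented rounding rules (1 RCU covers a strongly consistent read of up to 4 KB per second, 1 WCU a write of up to 1 KB per second) make the arithmetic mechanical. A rough sketch, rounding item sizes to whole KB and using a made-up workload:

```rust
// Back-of-envelope DynamoDB capacity model. The unit sizes come from
// DynamoDB's documentation; item sizes are rounded to whole KB here
// for simplicity.

const READ_UNIT_KB: u64 = 4;
const WRITE_UNIT_KB: u64 = 1;

fn rcu_per_read(item_kb: u64, eventually_consistent: bool) -> f64 {
    let units = item_kb.div_ceil(READ_UNIT_KB).max(1) as f64;
    // Eventually consistent reads cost half a unit.
    if eventually_consistent { units / 2.0 } else { units }
}

fn wcu_per_write(item_kb: u64) -> f64 {
    item_kb.div_ceil(WRITE_UNIT_KB).max(1) as f64
}

fn main() {
    // Made-up workload: 2 KB items, 1,000 strongly consistent reads/s,
    // 200 writes/s. Capacity needs fall out deterministically.
    let rcu = 1_000.0 * rcu_per_read(2, false); // 1,000 * 1 = 1,000 RCU
    let wcu = 200.0 * wcu_per_write(2); //          200 * 2 =   400 WCU
    println!("provision ~{rcu} RCU and ~{wcu} WCU");
}
```

Whether DPUs can ever be modeled with this kind of certainty is exactly the open question raised upthread.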
I don't use it, but have been keeping an eye on it.
At launch, they limited the number of affected tuples to 10,000, including tuples in secondary indexes. They recently changed this limit to:
> A transaction cannot modify more than 3,000 rows. The number of secondary indexes does not influence this number. This limit applies to all DML statements (INSERT, UPDATE, DELETE).
There are a lot of other (IMO prohibitive) restrictions listed in their docs:

https://docs.aws.amazon.com/aurora-dsql/latest/userguide/wor...
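The 3,000-row cap is the one that forces the most code changes: any bulk cleanup has to be chunked into many small transactions. Something like this sketch, where execute_sql is a hypothetical stand-in for a real Postgres client and the subquery assumes a primary key named id:

```rust
// Sketch: bulk delete under a per-transaction row cap (DSQL's is
// 3,000 modified rows). `execute_sql` is a hypothetical placeholder,
// and `id` is an assumed primary key.

const MAX_ROWS_PER_TXN: usize = 3_000;

fn execute_sql(sql: &str) -> usize {
    // Placeholder: run the statement and return the affected row
    // count. Returning 0 here just ends the demo loop.
    println!("executing: {sql}");
    0
}

fn delete_in_batches(table: &str, predicate: &str) {
    loop {
        // Each statement commits as its own transaction, staying under
        // the cap. DELETE has no LIMIT in Postgres, so pick keys first.
        let sql = format!(
            "DELETE FROM {table} WHERE id IN \
             (SELECT id FROM {table} WHERE {predicate} \
              LIMIT {MAX_ROWS_PER_TXN})"
        );
        if execute_sql(&sql) == 0 {
            break; // nothing left to delete
        }
    }
}

fn main() {
    delete_in_batches("events", "created_at < now() - interval '90 days'");
}
```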
Which features would you like to see the team build first? Which limits would you like to see lifted first?
Most of the limitations you can see in the documentation are things we haven't gotten to building yet, and it's super helpful to know what folks need so we can prioritize the backlog.
Indexes! Vector, trigram, and maybe geospatial. (Some may be in by now; I didn't follow the service as closely as others.)
Note: it doesn't have to be pg_vector, pg_trgm, or PostGIS; just the index component, even if it's a clean-room implementation, would make this way more useful.

https://aws.amazon.com/blogs/aws/amazon-aurora-dsql-is-now-g...
Who would use Preview products in production? I'm building out some software that would fit perfectly into the constraints set for DSQL, but I realistically can't commit to something with no pricing / guarantees.
Which ones? It seems eminently usable from the outside now, at least for greenfield work. The subset of Postgres it supports is most of good/core/essential Postgres. (But I haven't tried it)
Good read. I like that writing both the low-level and the high-level components in Rust proved worthwhile.
Maybe in the future one could transform slow code from high-level languages to a low-level language via LLMs. That could be a nice performance boost for those who don't have Amazon's engineers and budgets.
> Maybe in the future one could transform slow code from high-level languages to a low-level language via LLMs.
This is one of the areas I'm most excited about for LLM developer tooling. Choosing a language, database, or framework is a really expensive up-front decision for a lot of teams, made when they have the least information about what they're building, and very expensive to take back.
If LLM-powered tools could take 10-100x off the cost of these migrations, it would significantly reduce the risk of early decisions, and make it a ton easier to make software more reliable and cheaper to run.
It's very believable to me that, even with today's model capabilities, 10-100x is achievable.
I remember many years back, one of the Go language authors wrote a C-to-Go translator and used it to convert the compiler, runtime, GC, etc. into Go.
Now, experts like that could create base translators from high-level languages and frameworks to low-level ones, with all of it exposed via LLM interfaces.
One could ask why do all this instead of generating a fast binary directly from the high-level code. But a textual transformation gives developers the opportunity to understand, tweak, and adjust the transformed code, which generating a binary directly would not.
I think you are describing a compiler?

1) Kotlin code --> Java bytecode --> JVM execution (slow)

vs

2) Kotlin code --> Rust/Zig code --> Rust/Zig compiler --> native execution (fast)

A compiler is involved in both cases, but I was thinking of 2), where slower code in a high-level language is converted to code in another language whose compiler is known to produce fast-running code.

Probably looks a lot like

Pseudocode -> C -> Assembly

Although the first is easier to run tests on and compare the outputs.
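As a toy illustration of why the textual route helps (my own example, not from the thread): the translated Rust stays reviewable, with the Kotlin original riding along as comments, so a developer can check and tweak it before it replaces the original.

```rust
// Toy example of a reviewable source-to-source translation (mine,
// not from the article). The Kotlin original is shown as comments
// above each translated piece.

// Kotlin:  data class Order(val id: Long, val cents: Long)
struct Order {
    id: u64,
    cents: u64,
}

// Kotlin:  fun total(orders: List<Order>): Long =
// Kotlin:      orders.sumOf { it.cents }
fn total(orders: &[Order]) -> u64 {
    orders.iter().map(|o| o.cents).sum()
}

fn main() {
    let orders = vec![
        Order { id: 1, cents: 250 },
        Order { id: 2, cents: 1_000 },
    ];
    assert_eq!(orders[0].id, 1);
    // Unlike a binary emitted straight from Kotlin, this output can
    // be read, diffed, and unit-tested against the original.
    assert_eq!(total(&orders), 1_250);
}
```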
Where can I go to read about distributed SQL and big JOINs or WHERE IN clauses? I was hoping this article would cover that elephant in the room, rather than Rust being significantly more performant than JVM languages.
Marc Brooker has written and spoken about DSQL quite a bit. It's still rather high level. I'd expect one or more papers to come out in the next few months, similarly to other Amazon databases.

https://brooker.co.za/blog/2025/04/17/decomposing.html (includes talk)
https://brooker.co.za/blog/2024/12/03/aurora-dsql.html
https://brooker.co.za/blog/2024/12/04/inside-dsql.html
https://brooker.co.za/blog/2024/12/05/inside-dsql-writes.htm...
https://brooker.co.za/blog/2024/12/06/inside-dsql-cap.html
https://brooker.co.za/blog/2024/12/17/occ-and-isolation.html
Well, Java needs it because it fragments memory a lot. With Rust one has value types and stack allocation, which takes care of one of the biggest causes of fragmentation.
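A small Rust illustration of that point (mine, not from the article): values stored inline in one allocation versus boxed values scattered across the heap, which is closer to a JVM List of objects before Project Valhalla.

```rust
// Vec<Point> stores points inline in one contiguous allocation;
// Vec<Box<Point>> scatters them across the heap. Fewer small
// allocations means less fragmentation and less pointer chasing.

struct Point {
    x: f64,
    y: f64,
}

fn main() {
    // One allocation holding all the data, 16 bytes per point.
    let inline: Vec<Point> = (0..1_000)
        .map(|i| Point { x: i as f64, y: i as f64 })
        .collect();

    // 1,000 separate heap allocations plus a vector of pointers.
    let boxed: Vec<Box<Point>> = (0..1_000)
        .map(|i| Box::new(Point { x: i as f64, y: i as f64 }))
        .collect();

    let sum_inline: f64 = inline.iter().map(|p| p.x + p.y).sum();
    let sum_boxed: f64 = boxed.iter().map(|p| p.x + p.y).sum();
    assert_eq!(sum_inline, sum_boxed);
}
```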
With 10x the throughput (TPS) and the lack of GC pauses (which were the cause of the rewrite), how would they measure such a regression, let alone worry about it?