I don’t think we reviewed your Go benchmarking code at the time, and the absence of technical critiques probably should not have been taken as explicit sign-off.
IIRC we were more concerned about the deeper conceptual misunderstanding: that one could “roll your own” TB over PG with safety/performance parity, and that this would somehow be better than just using open source TB. Hence the discussion focused on that.
The shopping and queuing processes put considerably more load on our systems than the final purchase transaction, which is ultimately constrained by the size of the venue; we can control that by managing queue throughput.
Even with a queue system in place, you inevitably end up with the thundering herd problem when ticket sales open, as a large majority of users will refresh their browsers regardless of instructions to the contrary.
In other words, to count not only the money changing hands, but also the corresponding goods/services being exchanged.
These are all transactions: goods/services and the corresponding money.
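To make the idea concrete, here's a minimal sketch of what booking both legs might look like. The `Transfer` struct, the ledger codes, and the `ticketSale` helper are all hypothetical names for illustration, not TigerBeetle's actual API:

```go
package main

import "fmt"

// Transfer is a hypothetical double-entry record: every exchange is
// booked on a ledger as a debit to one account and a credit to another.
type Transfer struct {
	Ledger string // illustrative ledger codes: "USD" or "TICKET"
	Debit  string // account debited
	Credit string // account credited
	Amount int64
}

// ticketSale books one sale as two linked transfers: the money moving
// one way, and the corresponding goods (the ticket) moving the other.
func ticketSale(price int64) []Transfer {
	return []Transfer{
		{Ledger: "USD", Debit: "customer", Credit: "venue", Amount: price},
		{Ledger: "TICKET", Debit: "venue", Credit: "customer", Amount: 1},
	}
}

func main() {
	for _, t := range ticketSale(50) {
		fmt.Printf("%s: %s -> %s (%d)\n", t.Ledger, t.Debit, t.Credit, t.Amount)
	}
}
```

The point is that the goods leg is a first-class transfer on its own ledger, not an afterthought attached to the money leg.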
For transparency here's the full Golang benchmarking code and our results if you want to replicate it: https://gist.github.com/KelseyDH/c5cec31519f4420e195114dc9c8...
We shared the code with the TigerBeetle team (who were very nice and responsive btw), and they didn't raise any issues with the script we wrote against their TigerBeetle client. They did have many comments about the real-world performance of PostgreSQL in comparison, which is fair.
I searched the recent history of our community Slack but it seems it may have been an older conversation.
We typically do code review work only for our customers so I’m not sure if there was some misunderstanding.
Perhaps the assumption was that because we didn’t say anything when you pasted the code, we must have reviewed it?
Per my other comment, your benchmarking environment is also a factor. For example, were you running on EBS?
These are all things that our team would typically work with you on to accelerate you, so that you get it right the first time!
For example, if you have 8K transactions through 2 accounts, a naive system might read the 2 accounts, update their balances, then write the 2 accounts… for all 8K (!) transactions.
Whereas TB does vectorized concurrency control: read the 2 accounts, update them 8K times, write the 2 accounts.
This is why stored procedures typically only get you about a 10x win; you don’t see the same 1000x as with TB, especially at power-law contention.
Was there something wrong with our test of the individual transactions in our Go script that caused the drop in transaction performance we observed?
We’d love to roll up our sleeves and help you get it right. Please drop me an email.
It's almost enough to make me believe in the independent existence of Platonic truths. Almost.
I have no suggestion for a better name off the top of my head. The issue I see is that you already have to know its context well, and when it applies, in order not to misremember it as “Write First, Read Last”, not to mistake it for LIFO, and not to relate it to a read-modify-write scenario, in which you would naturally read first and write last anyway, though in a different sense. You see how the name is confusing?
Do you not think that if someone can remember those four words, they’re less likely to get it wrong?
If you could contribute some better suggestions we could consider them!