I think the approach of multi-format, multi-UI, and new (to you) programming language isn't optimal even with AI help. Any mistake that is made in the API design or internal architecture will impact time and cost since everything will need to be refactored and tested.
The approach I'm trying to take for my own projects is to create one polished vertical slice and then ask the AI to replicate it for the other formats / vertical slices. Are there any immediate use cases that even justify building and maintaining a UI?
So a few comments on the code:
- the feature list claims rate limiting, but the code seems unused outside of unit tests... if so, why wasn't this dead code detected?
- should probably follow Google/Buf style guide on protos and directory structure for them
- besides protos, we should probably rely more on the OpenAPI spec for code generation to save on AI costs; it looks like the OpenAPI spec was only used as task input for the AI?
- if the AI isn't writing a Postgres replacement for us, why have it write anything to do with auth either? Perhaps provide setup instructions for something like Keycloak or the Ory stack instead?
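For example, the setup instructions could lean on Keycloak's documented dev-mode container instead of AI-generated auth code (the image tag and admin credentials below are placeholders, and the admin env var names changed in recent Keycloak releases, so check the current docs):

```shell
# Run a local Keycloak in dev mode instead of hand-rolled (or AI-rolled) auth.
# Placeholder credentials; KEYCLOAK_ADMIN/KEYCLOAK_ADMIN_PASSWORD apply to
# older releases (newer ones use the KC_BOOTSTRAP_ADMIN_* variables).
docker run --rm -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=change-me \
  quay.io/keycloak/keycloak:24.0 start-dev
```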
The MCP community is just reinventing (though, yes, improving) what we built in the previous generation: Microsoft Bot Framework, Speaktoit aka Google Dialogflow, Siri App Shortcuts / Spotlight.
And interactive UIs in chats go back at least 20 years, maybe not with an AI agent attached...
The next thing that will be reinvented is the memory/tool combination, aka a world model.
I’m curious though: what’s an example scenario you’ve seen that requires so many distinct types? I haven’t personally come across a case with 4,096+ protocol messages defined.
git clone https://github.com/googleapis/googleapis.git
cd googleapis
find . -name '*.proto' -and -not -name '*test*' -and -not -name '*example*' -exec grep '^message' {} \; | wc -l
I think this speaks more to the tradeoff of not having an IDL, where the deserializer only knows what type to expect if it was built with the IDL file version that defined it; e.g., this recent issue: https://github.com/apache/fory/issues/2818
But now I do see that the 4096 is just arbitrary:
"If schema consistent mode is enabled globally when creating fory, type meta will be written as a fory unsigned varint of type_id. Schema evolution related meta will be ignored."

Protobuf is very much a DOP (data‑oriented programming) approach — which is great for some systems. But in many complex applications, especially those using polymorphism, teams don’t want to couple Protobuf‑generated message structs directly into their domain models. Generated types are harder to extend, and if you embed them everywhere (fields, parameters, return types), switching to another serialization framework later becomes almost impossible without touching huge parts of the codebase.
In large systems, it’s common to define independent domain model structs used throughout the codebase, and only convert to/from the Protobuf messages at the serialization boundary. That conversion step is exactly what’s represented in our benchmarks — because it’s what happens in many real deployments.
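A minimal Rust sketch of that boundary pattern, with all names hypothetical (`ProtoOrder` stands in for a prost-generated message struct, which would normally come out of `prost-build` from a .proto file):

```rust
// Hypothetical domain model, used throughout the codebase.
#[derive(Debug, PartialEq)]
struct Order {
    id: u64,
    items: Vec<String>,
}

// Stand-in for a prost-generated message struct.
#[derive(Debug, PartialEq, Default)]
struct ProtoOrder {
    id: u64,
    items: Vec<String>,
}

// The conversion lives only at the serialization boundary, so the rest
// of the codebase never touches generated types.
impl From<&Order> for ProtoOrder {
    fn from(o: &Order) -> Self {
        ProtoOrder {
            id: o.id,
            items: o.items.clone(), // note: clones every String on each call
        }
    }
}
```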
There’s also the type‑system gap: for example, if your Rust struct has a Box<dyn Trait> field, representing that cleanly in Protobuf is tricky. You might fall back to a oneof, but that essentially generates an enum variant, which often isn’t what users actually want for polymorphic behavior.
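A hypothetical sketch of that gap: the trait-object field admits an open set of implementations, while the closest Protobuf mapping, a oneof, generates a closed enum:

```rust
// Polymorphic domain field that Protobuf cannot express directly.
trait Payment {
    fn amount(&self) -> u64;
}

struct Card { cents: u64 }
struct Wire { cents: u64 }

impl Payment for Card { fn amount(&self) -> u64 { self.cents } }
impl Payment for Wire { fn amount(&self) -> u64 { self.cents } }

struct Invoice {
    // Open set: downstream code can add new Payment implementations freely.
    method: Box<dyn Payment>,
}

// The closest Protobuf mapping is a oneof, which generates a closed enum;
// adding a variant means regenerating and touching every match site.
enum PaymentProto {
    Card { cents: u64 },
    Wire { cents: u64 },
}
```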
So, yes — we include the conversion in our measurements intentionally, to reflect the real‑world large systems practices.
So, to reflect real-world practices, shouldn't the benchmark code then allocate and hand the protobuf serializer an 8K Vec, as tonic does, and not an empty one that may require multiple re-allocations?
https://github.com/apache/fory/blob/fd1d53bd0fbbc5e0ce6d53ef...
It seems if the serialization object is not a "Fory" struct, then it is forced to go through to/from conversion as part of the measured serialization work:
https://github.com/apache/fory/blob/fd1d53bd0fbbc5e0ce6d53ef...
That to/from work includes cloning Strings:
https://github.com/apache/fory/blob/fd1d53bd0fbbc5e0ce6d53ef...
and reallocating growing arrays with collect:
https://github.com/apache/fory/blob/fd1d53bd0fbbc5e0ce6d53ef...
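To illustrate the shape of that work (a hypothetical helper, not the actual Fory code): each call clones every String and builds a fresh Vec, and in the benchmark this cost lands inside the measured serialize path:

```rust
// Hypothetical conversion helper of the same shape as the linked code.
// Every call allocates a new Vec and deep-clones each String, so this
// per-call cost gets attributed to the serializer under test.
fn convert(items: &[String]) -> Vec<String> {
    items.iter().map(|s| s.clone()).collect()
}
```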
I'd think the to/from conversion between Fory types shouldn't be part of the measured tests.
Also, when used in an actual system, tonic would be providing an 8KB buffer to write into, not just a Vec::default() that may need to be resized multiple times:
https://github.com/hyperium/tonic/blob/147c94cd661c0015af2e5...
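A minimal sketch of the difference, with hypothetical function names (the 8 KiB constant mirrors the buffer size referenced in the linked tonic code):

```rust
// 8 KiB up-front allocation, matching what tonic hands its encoder.
const BUFFER_SIZE: usize = 8 * 1024;

// One allocation; no regrowth as long as the payload fits in 8 KiB.
fn encode_with_prealloc(payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(BUFFER_SIZE);
    buf.extend_from_slice(payload);
    buf
}

// Capacity 0 to start; the Vec may reallocate several times while growing.
fn encode_with_default(payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::default();
    buf.extend_from_slice(payload);
    buf
}
```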
I can see the source of a 10x improvement on an Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz, but it drops to a 3x improvement when I remove the to/from conversions that clone Strings or collect Vecs, and always allocate an 8K Vec instead of a ::default() for the writable buffer.
If anything, the benches should be updated to a tower-service / codec-generics style, where other formats like protobuf do not use any Fory-related code at all.
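As a sketch of that shape, assuming a hypothetical `BenchCodec` trait loosely modeled on tonic's codec generics (the `LenPrefixCodec` below is a stand-in, not a real format): each format implements the trait with only its own types, so no format's bench pulls in another's code.

```rust
// Hypothetical codec abstraction: the bench harness is generic over it,
// so each format's implementation is fully isolated.
trait BenchCodec {
    type Message;
    fn encode(&self, msg: &Self::Message, buf: &mut Vec<u8>);
}

// Stand-in codec for illustration: u32 length prefix + raw bytes.
struct LenPrefixCodec;

impl BenchCodec for LenPrefixCodec {
    type Message = Vec<u8>;
    fn encode(&self, msg: &Self::Message, buf: &mut Vec<u8>) {
        buf.extend_from_slice(&(msg.len() as u32).to_le_bytes());
        buf.extend_from_slice(msg);
    }
}

// Every codec gets the same pre-sized buffer, so the comparison is fair.
fn run_bench<C: BenchCodec>(codec: &C, msg: &C::Message) -> Vec<u8> {
    let mut buf = Vec::with_capacity(8 * 1024);
    codec.encode(msg, &mut buf);
    buf
}
```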
Note also that Fory has some writer pool that is utilized during the tests:
https://github.com/apache/fory/blob/fd1d53bd0fbbc5e0ce6d53ef...
Original bench selection for Fory:
Benchmarking ecommerce_data/fory_serialize/medium: Collecting 100 samples in estimated 5.0494 s (197k it
ecommerce_data/fory_serialize/medium
time: [25.373 µs 25.605 µs 25.916 µs]
change: [-2.0973% -0.9263% +0.2852%] (p = 0.15 > 0.05)
No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
Compared to the original bench for Protobuf/Prost:
Benchmarking ecommerce_data/protobuf_serialize/medium: Collecting 100 samples in estimated 5.0419 s (20k
ecommerce_data/protobuf_serialize/medium
time: [248.85 µs 251.04 µs 253.86 µs]
Found 18 outliers among 100 measurements (18.00%)
8 (8.00%) high mild
10 (10.00%) high severe
However, after allocating 8K instead of ::default() and removing the to/from conversion, the updated protobuf bench:
fair_ecommerce_data/protobuf_serialize/medium
time: [73.114 µs 73.885 µs 74.911 µs]
change: [-1.8410% -0.6702% +0.5190%] (p = 0.30 > 0.05)
No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
2 (2.00%) high mild
12 (12.00%) high severe
TXSE's goal is to provide greater alignment with issuers and investors and address the high cost of going and staying public.
The alignment part translates, IMO, to avoiding political / social-policy issues, such as the affirmative-action listing requirements in the Nasdaq Board Diversity Rules that were just recently vacated: https://corpgov.law.harvard.edu/2025/01/12/fifth-circuit-vac....

So it is as one might imagine: the formation was probably for reasons similar to why owners are moving their company registrations out of Delaware.
mmm delicious slop may i have more please sir
https://github.com/currentspace/capn-rs/blob/a816bfca5fb6ae5...
Yet there is a public interface that allows for initialization with "0":
https://github.com/currentspace/capn-rs/blob/a816bfca5fb6ae5...
It's as if the LLM was able to predict that 1 is needed by the protocol, but didn't consider it relevant to check in the boilerplate.
I don't have a problem with newtype-all-the-things to ensure correctness in some areas, but the lack of comments/constants does not inspire confidence.
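Purely for illustration (these are not the capn-rs types), the kind of newtype that would inspire that confidence names and documents its invariant instead of leaving a bare literal in the boilerplate:

```rust
/// Illustrative newtype for a value the (hypothetical) protocol requires
/// to be at least 1; the invariant is named, documented, and enforced.
#[derive(Debug, Clone, Copy, PartialEq)]
struct SegmentCount(u32);

impl SegmentCount {
    /// Protocol minimum, as a named constant instead of a magic number.
    const MIN: u32 = 1;

    /// Rejects values below the documented protocol minimum.
    fn new(n: u32) -> Option<Self> {
        if n >= Self::MIN { Some(Self(n)) } else { None }
    }
}
```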