Protocol buffers suck but so does everything else. Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards compatible changes.
Just with those two criteria you’re down to, like, six formats at most, of which Protocol Buffers is the most widely used.
And I know the article says no one uses the backwards compatible stuff but that’s bizarre to me – setting up N clients and a server that use protocol buffers to communicate and then being able to add fields to the schema and then deploy the servers and clients in any order is way nicer than it is with some other formats that force you to babysit deployment order.
The reason why protos suck is because remote procedure calls suck, and protos expose that suckage instead of trying to hide it until you trip on it. I hope the people working on protos, and other alternatives, continue to improve them, but they’re not worse than not using them today.
> Typical offers a new solution ("asymmetric" fields) to the classic problem of how to safely add or remove fields in record types without breaking compatibility. The concept of asymmetric fields also solves the dual problem of how to preserve compatibility when adding or removing cases in sum types.
This seems interesting. Still not sure if `required` is a good thing to have (for persistent data like logs you cannot really guarantee a field's presence without schema versioning baked into the file itself), but for intermediate wire use cases this will help.
I've never heard of Typical but the fact they didn't repeat protobuf's sin regarding varint encoding (or use leb128 encoding...) makes me very interested! Thank you for sharing, I'm going to have to give it a spin.
Seems like a lot of effort to avoid adding a message version field. I’m not a web guy, so maybe I’m missing the point here, but I always embed a schema version field in my data.
We use protocol buffers on a game and we use the back compat stuff all the time.
We include a version number with each release of the game. If we change a proto we add new fields and deprecate old ones and increment the version. We use the version number to run a series of steps on each proto to upgrade old fields to new ones.
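A minimal sketch of that upgrade-steps approach in Python, with hypothetical message and field names (plain dicts standing in for decoded protos):

```python
# Each upgrader rewrites a message from version N to N+1; running them in
# sequence brings any old proto up to the current schema.

def _v1_to_v2(msg: dict) -> dict:
    out = dict(msg)
    out["display_name"] = out.pop("name", "")            # renamed field
    return out

def _v2_to_v3(msg: dict) -> dict:
    out = dict(msg)
    out["inventory"] = {"items": out.pop("items", [])}   # field moved into a wrapper
    return out

UPGRADERS = {1: _v1_to_v2, 2: _v2_to_v3}
LATEST = 3

def upgrade(msg: dict) -> dict:
    """Apply each upgrade step from the stored version up to LATEST."""
    version = msg.get("version", 1)
    while version < LATEST:
        msg = UPGRADERS[version](msg)
        version += 1
    msg["version"] = LATEST
    return msg

print(upgrade({"version": 1, "name": "Hero", "items": ["sword"]}))
# {'version': 3, 'display_name': 'Hero', 'inventory': {'items': ['sword']}}
```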
This. Plus ASN.1 is pluggable as to encoding rules and has a large family of them:
- BER/DER/CER (TLV)
- OER and PER ("packed" -- no tags and no lengths wherever possible)
- XER (XML!)
- JER (JSON!)
- GSER (textual representation)
- you can add your own!
(One could add one based on XDR, which would look a lot like OER/PER in a way.)
ASN.1 also gives you a way to do things like formalize typed holes.
Not looking at ASN.1, not even its history and evolution, when creating PB was a crime.
I agree that saying that no-one uses backwards compatible stuff is bizarre. Rolling deploys, being able to function with a mixed deployment is often worth the backwards compatibility overhead for many reasons.
In Java, you can accomplish some of this by using Jackson JSON serialization of plain objects, where there are several ways in which changes can be made backwards-compatibly (e.g., in recent years, post-deserialization hooks can be used to handle more complex cases), which satisfies (a). For (b), there's no automatic linter. However, in practice, I found that writing tests that deserialize the prior release's serialized objects gets you pretty far along the line of regression protection for major changes. It was also pretty easy to write an automatic round-trip serialization tester to catch mistakes in the ser/deser chain. Finally, if you stay away from non-schemable ser/deser (such as a method that handles any property name), which can be enforced with a linter, you can output the JSON schema of your objects to committed source. Then any time the generated schema changes, you can look for corresponding test coverage in code reviews.
I know that’s not the same as an automatic linter, but it gets you pretty far in practice. It does not absolve you from cross-release/upgrade testing, because serialization backwards-compatibility does not catch all backwards-compatibility bugs.
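A rough sketch of those two kinds of tests, in Python rather than Java/Jackson for brevity (the class and the fixture are hypothetical; the shape of the tests is the point):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical domain object; in the Jackson setup above this would be a
# plain Java object, but a dataclass keeps the sketch self-contained.
@dataclass
class Order:
    id: str
    quantity: int = 1
    note: str = ""          # field added in a later release, with a default

def to_json(o: Order) -> str:
    return json.dumps(asdict(o))

def from_json(s: str) -> Order:
    return Order(**json.loads(s))

def test_round_trip():
    o = Order(id="abc", quantity=3, note="gift")
    assert from_json(to_json(o)) == o

def test_reads_previous_release_payload():
    # Frozen fixture captured from the prior release, before `note` existed.
    old_payload = '{"id": "abc", "quantity": 3}'
    o = from_json(old_payload)
    assert o.note == ""      # missing field falls back to its default
```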
Additionally, Jackson has many techniques, such as unwrapping objects, which let you execute more complicated refactoring backwards-compatibly, such as extracting a set of fields into a sub-object.
I like that the same schema can be used to interact with your SPA web clients for your domain objects, giving you nice inspectable JSON. Things serialized to unprivileged clients can be filtered with views, such that sensitive fields are never serialized, for example.
You can generate TypeScript objects from this schema or generate clients for other languages (e.g. with Swagger). Granted it won’t port your custom migration deserialization hooks automatically, so you will either have to stay within a subset of backwards-compatible changes, or add custom code for each client.
You can also serialize your RPC comms to a binary format, such as Smile, which uses back-references for property names, should you need to reduce on-the-wire size.
It’s also nice to be able to define Jackson mix-ins to serialize classes from other libraries’ code or code that you can’t modify.
Protobufs are better but not best. Still, by far, the easiest thing to use and the safest is actual APIs. Like, in your application. Interfaces and stuff.
Obviously if your thing HAS to communicate over the network that's one thing, but a lot of applications don't. The distributed system micro service stuff is a choice.
Guys, distributed systems are hard. The extremely low API visibility combined with fragile network calls and unsafe, poorly specified API versioning means your stuff is going to break, and a lot.
Want a version controlled API? Just write an interface in C# or PHP or whatever.
> Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards compatible changes.
The article covers this in the section "The Lie of Backwards- and Forwards-Compatibility." My experience working with protocol buffers matches what the author describes in this section.
This is always the thing to look for: "What are the alternatives?", and why aren't there better ones?
I don't understand most use cases of protobufs, including ones that informed their design. I use it for ESP-hosted, to communicate between two MCUs. It is the highest-friction serialization protocol I've seen, and is not very byte-efficient.
Maybe something like the specialized serialization libraries (bincode, postcard etc) would be easier? But I suspect I'm missing something about the abstraction that applies to networked systems, beyond serialization.
> And I know the article says no one uses the backwards compatible stuff but that’s bizarre to me – setting up N clients and a server that use protocol buffers to communicate and then being able to add fields to the schema and then deploy the servers and clients in any order is way nicer than it is with some other formats that force you to babysit deployment order.
Yet the author has the audacity to call the authors of protobuf (originally Jeff Dean et al) "amateurs."
What about Cap’n Proto https://capnproto.org/ ? (Don't know much about these things myself, but it's a name that usually comes up in these discussions.)
> Just with those two criteria you’re down to, like, six formats at most, of which Protocol Buffers is the most widely used.
What I dislike the most about blog posts like this is that, although the blogger is very opinionated and critical of many things, the post dates back to 2018, protobuf is still dominant, and apparently in all these years the blogger never put together something they felt was a better way to solve the problem. I mean, it's perfectly fine if they feel strongly about a topic. However, investing so much energy into criticizing and even throwing personal attacks at whoever contributed to the project feels pointless, more an exercise in self-promotion by way of shit-talking. Either you put something together that you feel implements your vision and rights some wrongs, or don't go out of your way to put down people. Not cool.
TLV style binary formats are all you need. The “Type” in that acronym is a 32-bit number which you can use to version all of your stuff so that files are backwards compatible. Software that reads these should read all versions of a particular type and write only the latest version.
Code for TLV is easy to write and to read, which makes writing viewer programs easy. TLV data is fast for computers to write and to read.
Protobuf is overused because people are fucking scared to death to write binary data. They don’t trust themselves to do it, which is just nonsense to me. It’s easy. It’s reliable. It’s fast.
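For reference, a minimal TLV writer/reader along the lines described above, sketched in Python (the 32-bit type doubles as the version, as suggested; the tag values are made up):

```python
import struct

# Type tags are just numbers you assign; bump the tag when a record's
# payload layout changes, and keep reading the old tags.
PERSON_V1 = 0x0001
PERSON_V2 = 0x0002

def write_tlv(out: bytearray, type_tag: int, value: bytes) -> None:
    out += struct.pack("<II", type_tag, len(value))   # 32-bit type, 32-bit length
    out += value

def read_tlvs(data: bytes):
    """Yield (type, value) pairs; unknown types can simply be skipped."""
    offset = 0
    while offset < len(data):
        type_tag, length = struct.unpack_from("<II", data, offset)
        offset += 8
        yield type_tag, data[offset:offset + length]
        offset += length

buf = bytearray()
write_tlv(buf, PERSON_V2, b"alice\x00\x20")   # whatever layout v2 defines
for t, v in read_tlvs(bytes(buf)):
    print(hex(t), v)
```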
A major value of protobuf is in its ecosystem of tools (codegen, lint, etc); it's not only an encoding. And you don't generally have to build or maintain any of it yourself, since it already exists and has significant industry investment.
I prefer a little builtin backwards (and forwards!) compatibility (by always enforcing a length for each object, to be zero-padded or truncated as needed), but yes "don't fear adding new types" is an important lesson.
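A small sketch of that zero-pad/truncate approach in Python, assuming fixed-layout records behind a length prefix (the format and field count are made up):

```python
import struct

RECORD_FMT = "<IIH"                      # the fields *this* build knows about
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def read_record(buf: bytes, offset: int):
    """Read one length-prefixed record, tolerating longer or shorter payloads."""
    (stored_len,) = struct.unpack_from("<I", buf, offset)
    payload = buf[offset + 4 : offset + 4 + stored_len]
    # Newer writer appended fields we don't know about: truncate.
    # Older writer had fewer fields: zero-pad so the unpack still succeeds.
    payload = payload[:RECORD_SIZE].ljust(RECORD_SIZE, b"\x00")
    return struct.unpack(RECORD_FMT, payload), offset + 4 + stored_len

# Newer, longer record: the extra trailing field is ignored by this reader.
buf = struct.pack("<I", 12) + struct.pack("<IIHH", 7, 8, 9, 99)
fields, _ = read_record(buf, 0)
print(fields)   # (7, 8, 9)
```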
Protobufs aren't new. They're really just RPC over HTTPS. I used DCE RPC in 1997, which had an IDL. I believe CORBA used an IDL as well, although I personally did not use it. There have been other attempts like EJB, etc., which are pretty much the same paradigm.
The biggest plus with protobuf is the social/financial side and not the technology side. It’s open source and free from proprietary hacks like previous solutions.
Apart from that, distributed systems of which rpc is a sub topic are hard in general. So the expectation would be that it sucks.
Backwards compatibility is just not an issue in self-describing structures like JSON, Java serialization, and (dating myself) Hessian. You can add fields and you can remove fields. That's enough to allow seamless migrations.
It's only positional protocols that have this problem.
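That tolerance boils down to "read by name, with defaults"; a tiny Python sketch (field names are made up):

```python
import json

# v1 writer:  {"id": 1, "name": "alice"}
# v2 writer adds "email" and drops "name"; readers of either version cope.
payload = '{"id": 1, "email": "alice@example.com"}'

msg = json.loads(payload)
user_id = msg["id"]                 # field every version agrees on
name = msg.get("name", "")          # removed field: fall back to a default
email = msg.get("email", None)      # added field: old payloads simply lack it
print(user_id, name, email)
```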
You can remove JSON fields at the cost of breaking your clients at runtime that expect those fields. Of course the same can happen with any deserialization libraries, but protobufs at least make it more explicit - and you may also be more easily able to track down consumers using older versions.
> Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards compatible changes.
Just FYI: an obligatory comment from the protobuf v2 designer.
Yeah, protobuf has lots of design mistakes but this article is written by someone who does not understand the problem space. Most of the complexity of serialization comes from implementation compatibility between different timepoints. This significantly limits design space.
To clarify: Protobuf's simplest change is adding a field to a message, so wrapping maps of maps, maps of fields, and oneof fields into a message makes these play to its strengths. It feels like over-engineering to turn your Inventory map of items into an Inventory message, but you will be grateful for it when you need a capacity field later.
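The shape of that advice, sketched with Python dataclasses rather than .proto (names are hypothetical), just to show where the wrapper buys you room to grow:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Item:
    count: int = 0

# Instead of putting a bare map<string, Item> field directly on the parent
# message, give it its own Inventory wrapper...
@dataclass
class Inventory:
    items: Dict[str, Item] = field(default_factory=dict)
    capacity: int = 0   # ...so the field you need later has somewhere to live

@dataclass
class Player:
    name: str = ""
    inventory: Inventory = field(default_factory=Inventory)
```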
>Most of the complexity of serialization comes from implementation compatibility between different timepoints.
The author talks about compatibility a fair bit, specifically the importance of distinguishing a field that wasn't set from one that was intentionally set to a default, and how protobuffs punted on this.
If you see statements like the ones below on the serialization topic:
> Make all fields in a message required. This makes messages product types.
> One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?
> Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property.
Then it is fair to raise eyebrows at the author's expertise. And please don't ask if I'm attached to protobuf; I can roast the protocol buffer on its wrong designs for hours. It is just that the author makes a series of wrong claims, presumably due to their bias toward principled type systems and their inexperience working on large-scale systems.
> Granted, on paper it’s a cool feature. But I’ve never once seen an application that will actually preserve that property.
Chances are, the author literally used software that does it as he wrote these words. This feature is critical to how Chrome Sync works. You wouldn’t want to lose synced state if you use an older browser version on another device that doesn’t recognize the unknown fields and silently drops them. This is so important that at some point Chrome literally forked protobuf library so that unknown fields are preserved even if you are using protobuf lite mode.
I share the author's sentiment. I hate these things.
True story: trying to reverse engineer macOS Photos.app sqlite database format to extract human-readable location data from an image.
I eventually figured it out, but it was:
A base64 encoded
Binary Plist format
with one field containing a ProtoBuffer
which contained another protobuffer
which contained a unicode string
which contained improperly encoded data (for example, U+2013 EN DASH was encoded as \342\200\223)
I mean... you can nest-encode stuff in any serial format. You're not describing a problem either intrinsic or unique to Protobuf, you're just seeing the development org chart manifested into a data structure.
Good points. This wasn't entirely a protobuf-specific issue so much as a (likely hierarchical and historical) set of bad decisions to use it at all.
Using Protobuffers for a few KB of metadata, when the photo library otherwise is taking multiple GB of data, is just pennywise pound foolish.
Of course, even my preference for a simple JSON string would be problematic: data in a database really should be stored properly normalized to a separate table and fields.
My guess is that protobuffers did play a role here in causing this poor design. I imagine this scenario:
- Photos.app wants to look up location data
- the server returns structured data in a ProtoBuffer
- there's no easy or reasonable way to map a protobuf to database fields (one point of TFA)
- Surrender! just store the binary blob in SQLITE and let the next poor sod deal with it
The JSON version would have also had the wrong encoding - all formats are just a framing for data fed in from code written by a human. In mac's case, em dash will always be an issue because that's just what Mac decided on intentionally.
That's horrendous. For some reason I imagine Apple's software to be much cleaner, but I guess that's just the marketing getting to my head. Under the hood it's still the same spaghetti.
Yeah, the problem is Apple and all the other contemporary tech companies have engineers bounce around between them all the time, and they take their habits with them.
At some point there becomes a critical mass of xooglers in an org, and when a new use case happens no one bothers to ask “how is serialization typically done in Apple frameworks”, they just go with what they know. And then you get protobuf serialization inside a plist. (A plist being the vanilla “normal” serialization format at Apple. Protobuf inside a plist is a sign that somebody was shoehorning what they’re comfortable with into the code.)
I'm starting to wonder if some of those bad design decisions are symptoms of a larger "cultural bias" at Google. Specifically the "No Compositionality" point: It reminds me of similar bad designs in Go, CSS and the web platform at large.
The pattern seems to be that generalized, user-composable solutions are discouraged in favor of a myriad of special constructs that satisfy whatever concrete use cases seem relevant for the designers in the moment.
This works for a while and reduces the complexity of the language upfront, while delivering results - but over time, the designs devolve into a rat's nest of hyperspecific design features with awkward and unintuitive restrictions.
Eventually, the designers might give up and add more general constructs to the language - but those feel tacked on and have to coexist with specific features that can't be removed anymore.
It works both ways. General constructs tend to become overly abstract and you end up with sneaky errors in different places due to a minor change to an abstraction.
Like the old adage, this is just a matter of preference. Good software engineering requires, first and foremost, great discipline, regardless of the path or tool you choose.
If there are errors in implementation of general constructs, they tend to be visible at their every use, and get rapidly fixed.
Some general constructs are better than others because they have an algebraic theory behind them, and sometimes that theory has already been studied for a few hundred years.
For example, product/coproduct types mentioned in the article are quite close to addition and multiplication that we've all learned in school, and obey the same laws.
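Concretely, the algebra being referred to (a small sketch):

```latex
% Cardinalities of product and sum (coproduct) types behave like
% multiplication and addition, and satisfy the familiar laws:
\[
  |A \times B| = |A| \cdot |B|, \qquad |A + B| = |A| + |B|
\]
\[
  A \times (B + C) \;\cong\; (A \times B) + (A \times C)
  \qquad \text{(distributivity)}
\]
```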
So there are several levels where the choice of ad-hoc constructs is wrong, and in the end the only valid reason to choose them is time constraints.
If they had 24 years to figure out how to do it properly, but they didn't, the technology is just dead.
There are a lot of great comments on these old threads, and I don't think there's a lot of new science in this field since 2018, so the old threads might be a better read than today's.
I don't know if the author is right or wrong; I've never dealt with protobufs professionally. But I recently implemented them for a hobby project and it was kind of a game-changer.
At some stage with every ESP or Arduino project, I want to send and receive data, i.e. telemetry and control messages. A lot of people use ad-hoc protocols or HTTP/JSON, but I decided to try the nanopb library. I ended up with a relatively neat solution that just uses UDP packets. For my purposes a single packet has plenty of space, and I can easily extend this approach in the future. I know I'm not the first person to do this but I'll probably keep using protobufs until something better comes along, because the ecosystem exists and I can focus on the stuff I consider to be fun.
Embedded/constrained UDP is where protobuf wire format (but not google's libraries) rocks: IoT over cellular and such, where you need to fit everything into a single datagram (number of roundtrips is what determines power consumption). As to those who say "UDP is unreliable" - what you do is you implement ARQ on the application level. Just like TCP does it, except you don't have to waste roundtrips on SYN-SYN-ACK handshake nor waste bytes on sending data that are no longer relevant.
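A bare-bones sketch of that application-level ARQ (stop-and-wait with a sequence number and retries) in Python; the address and framing are made up:

```python
import socket
import struct

PEER = ("192.0.2.10", 9000)   # placeholder address (TEST-NET-1)

def send_reliably(payload: bytes, seq: int, retries: int = 5, timeout: float = 2.0) -> bool:
    """Stop-and-wait ARQ: prepend a sequence number, resend until the peer acks it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    datagram = struct.pack("<I", seq) + payload
    for _ in range(retries):
        sock.sendto(datagram, PEER)
        try:
            ack, _ = sock.recvfrom(16)
            if struct.unpack("<I", ack[:4])[0] == seq:
                return True                 # peer confirmed this sequence number
        except socket.timeout:
            continue                        # lost datagram or lost ack: resend
    return False    # give up; the caller decides what data is still worth sending
```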
Varints for the win. Send time series as columns of varint arrays - delta or RLE compression becomes quite straightforward. And as a bonus I can just implement new fields in the device and deploy right away - the server-side support can wait until we actually need it.
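A sketch of that column trick in Python: delta the samples, zigzag so small negative deltas stay small, then protobuf-style varints (7 bits per byte); the sample data is made up:

```python
def zigzag(n: int) -> int:
    # Maps small positive/negative ints to small unsigned ints (64-bit assumption).
    return (n << 1) ^ (n >> 63)

def write_varint(out: bytearray, n: int) -> None:
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # continuation bit: more bytes follow
        else:
            out.append(byte)
            return

def encode_column(samples: list[int]) -> bytes:
    out = bytearray()
    prev = 0
    for s in samples:                 # delta-encode, then varint each delta
        write_varint(out, zigzag(s - prev))
        prev = s
    return bytes(out)

temps = [2500, 2501, 2499, 2499, 2510]   # e.g. centi-degrees, one sample per minute
encoded = encode_column(temps)
print(encoded.hex(), len(encoded), "bytes")   # 6 bytes for 5 samples
```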
No, flatbuffers/cap'n'proto are unacceptably big because of fixed layout. No, CBOR is an absolute no go - why on earth would you waste precious bytes on schema every time? No, general-purpose compression like gzip wouldn't do much on such a small size, it will probably make things worse. Yes, ASN is supposed to be the right solution - but there is no full-featured implementation that doesn't cost $$$$ and the whole thing is just too damn bloated.
Kinda fun that it sucks for what it is supposed to do, but actually shines elsewhere.
> why on earth would you waste precious bytes on schema every time
CBOR doesn't prescribe sending a schema; in fact there is no schema, just like JSON.
I just switched from protobuf to CBOR because I needed better streaming support, and I find it quite delightful. Losing the protobuf schema hurts a bit, but the amount of boilerplate code is actually less than what I had before with nanopb (embedded context). On top of that, I am saving approx. 20% in message size compared to protobuf because I am using mostly arrays with fixed-position parameters.
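The positional-array savings are easy to see with the cbor2 package (assuming that dependency; with positional arrays the field meanings live in code, like a schema you keep out of band):

```python
import cbor2

reading = {"device_id": 42, "temp_c": 21.5, "humidity": 63, "battery_mv": 3710}

as_map = cbor2.dumps(reading)                    # self-describing: keys go on the wire
as_array = cbor2.dumps(list(reading.values()))   # positional: meaning fixed by position

print(len(as_map), len(as_array))                # the array form is much smaller
device_id, temp_c, humidity, battery_mv = cbor2.loads(as_array)
```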
> Yes, ASN is supposed to be the right solution - but there is no full-featured implementation that doesn't cost $$$$ and the whole thing is just too damn bloated.
Oh for crying out loud! PB had ZERO tooling available when it was created! It would have been much easier to create ASN.1 tooling w/ OER/PER for some suitable subset of ASN.1 in 2001 than it was to a) create an IDL, b) create an encoding, and c) write tooling for N programming languages.
In fact, one thing one could have done is write a transpiler from the IDL to an AST that does all linting, analysis, and linking, and which one can then use to drive codegen for N languages. Or even better: have the transpiler produce a byte-coded representation of the modules and then for each programming language you only need to codegen the types but not the codecs -- instead for each language you need only write the interpreter for the byte-coded modules. I know because I've extended and maintained an [open source] ASN.1 compiler that fucking does [some of] these things.
Stop spreading this idea that ASN.1 is bloated. It's not. You can cut it down for your purposes. There are only four specifications for the language itself, of which the base one (X.680) is enough for almost everything (the others, X.681, X.682, and X.683, are mainly for parameterized types and formal typed hole specifications [the ASN.1 "information object system"], which are awesome but you can live without). And these are some of the best-written and most-readable specifications ever written by any standards development organization -- they are a great gift from a few to all of mankind.
Other than ASN.1 PER, is there any other widely used encoding format that isn't self-describing? Using TLV certainly adds flexibility around schema evolution, but I feel like collectively we are wasting a fair amount of bytes because of it...
Using protobuf is practical enough in embedded. This person isn't the first and won't be the last. Way faster than JSON, way slower than C structs.
However protobuf is ridiculously interchangeable and there are serializers for every language. So you can get your interfaces fleshed out early in a project without having to worry that someone will have a hard time ingesting it later on.
Yes it's a pain how an empty array is a valid instance of every message type, but at least the fields that you remember to send are strongly typed. And field optionality gives you a fighting chance that your software can still speak to the unit that hasn't been updated in the field for the last five years.
On the embedded side, nanopb has worked well for us. I'm not missing having to hand maintain ad-hoc command parsers on the embedded side, nor working around quirks and bugs of those parsers on the desktop side
The reasons for that line get at a fundamental tension. As David Wheeler famously said, "All problems in computer science can be solved by another level of indirection, except for the problem of too many indirections."
Over time we accumulate cleverer and cleverer abstractions. And any abstraction that we've internalized, we stop seeing. It just becomes how we want to do things, and we have no sense of what cost we are imposing with others. Because all abstractions leak. And all abstractions pose a barrier for the maintenance programmer.
All of which leads to the problem that Brian Kernighan warned about with, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?" Except that the person who will have to debug it is probably a maintenance programmer who doesn't know your abstractions.
One of the key pieces of wisdom that show through Google's approaches is that our industry's tendency towards abstraction is toxic. As much as any particular abstraction is powerful, allowing too many becomes its own problem. This is why, for example, Go was designed to strongly discourage over-abstraction.
Protobufs do exactly what it says on the tin. As long as you are using them in the straightforward way which they are intended for, they work great. All of his complaints boil down to, "I tried to do some meta-manipulation to generate new abstractions, and the design said I couldn't."
That isn't the result of them being written by amateurs. That's the result of them being written to incorporate a piece of engineering wisdom that most programmers think that they are smart enough to ignore. (My past self was definitely one of those programmers.)
Can the technology be abused? Do people do stupid things with them? Are there things that you might want to do that you can't? Absolutely. But if you KISS, they work great. And the more you keep it simple, the better they work. I consider that an incentive towards creating better engineered designs.
I think you nailed it. So many complaints about Go for example basically come down to "it didn't let me create X abstraction" and that's basically the point.
Yeah, let's pretend that type algebra doesn't exist, and even if it does exist then it's not useful and definitely isn't practical in data protocols. Let's believe that the authors of protobuf considered everything, and since they aren't amateurs (by the virtue of having worked on protobuf at Google, presumably), every elaborated opinion that draws them as amateurs at applying type algebra in data protocol designs is a personal ad-hominem attack.
IMO it's a pretty reasonable claim about experience level, not intelligence, and isn't at all an ad hominem attack because it's referring directly to the fundamental design choices of protocol buffers and thus is not at all a fallacy of irrelevance.
It's a terrible attitude and I agree that sort of thing shouldn't be (and generally isn't) tolerated for long in a professional environment.
That said the article is full of technical detail and voices several serious shortcomings of protobuf that I've encountered myself, along with suggestions as to how it could be done better. It's a shame it comes packaged with unwarranted personal attacks.
Imagine calling google amateurs, and then the only code you write has a first year student error in failing to distinguish the assignment from the comparison operator.
There's a class of rant on the internet where programmers complain about increasingly foundational tech instead of admitting skill issues. If you go far deep into that hole, you end up rewriting the kernel in Rust.
https://github.com/stepchowfun/typical
> Typical offers a new solution ("asymmetric" fields) to the classic problem of how to safely add or remove fields in record types without breaking compatibility. The concept of asymmetric fields also solves the dual problem of how to preserve compatibility when adding or removing cases in sum types.
An asymmetric field in a struct is considered required for the writer, but optional for the reader.
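A rough illustration of that read/write contract in Python (not Typical's actual implementation, just the asymmetry it describes; names are made up):

```python
from typing import Optional

# Writer side: the field is required -- refusing to serialize without it
# means every *new* message carries it.
def write_user(user_id: int, email: Optional[str]) -> dict:
    if email is None:
        raise ValueError("email is required when writing")
    return {"user_id": user_id, "email": email}

# Reader side: the field is optional -- messages written before the field
# existed still parse, so readers may never assume it is present.
def read_user(msg: dict) -> tuple[int, Optional[str]]:
    return msg["user_id"], msg.get("email")
```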
> We include a version number with each release of the game. If we change a proto we add new fields and deprecate old ones and increment the version. We use the version number to run a series of steps on each proto to upgrade old fields to new ones.
It sounds like you've built your own back-compat functionality on top of protobuf?
The only functionality protobuf is giving you here is optional-by-default (and mandatory version numbers, but most wire formats require that)
> In Java, you can accomplish some of this by using Jackson JSON serialization of plain objects ...
Dragging your org away from using poorly specified json is often worth these papercuts IMO.
For client-facing protocols, Protobuf is a nightmare to use. For machine-to-machine services it is OK-ish, yet personally I still don't like it.
When I was at Spotify we ditched it for client-side APIs (server to mobile/web) and never looked back. No one liked working with it.
> TLV style binary formats are all you need.
https://protobuf.dev/programming-guides/encoding/
If you make any change, it's a new message type.
For compatibility you can coerce the new message to the old message and dual-publish.
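A sketch of that coerce-and-dual-publish step in Python; topic names, fields, and `publish` are stand-ins for whatever transport is actually in use:

```python
def publish(topic: str, payload: dict) -> None:
    print(f"-> {topic}: {payload}")     # stand-in for the real transport

def to_order_v1(order_v2: dict) -> dict:
    """Coerce the new message shape down to the old one for existing consumers."""
    return {
        "id": order_v2["id"],
        "total_cents": order_v2["total_cents"],
        # v2 split "address" into parts; rebuild the single field v1 expects.
        "address": f'{order_v2["street"]}, {order_v2["city"]}',
    }

order_v2 = {"id": 7, "total_cents": 1299, "street": "1 Main St", "city": "Springfield"}
publish("orders.v2", order_v2)
publish("orders.v1", to_order_v1(order_v2))   # dual-publish until v1 readers are gone
```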
ASCII text (tongue in cheek here)
> oneof fields can’t be repeated.
Wrap oneof field in message which can be repeated
> map fields cannot be repeated.
Wrap in message which can contain repeated fields
> map values cannot be other maps.
Wrap map in message which can be a value
Perhaps this is slightly inconvenient/un-ergonomic, but the author is positioning these things as "protos fundamentally can't do this".
> this article is written by someone who does not understand the problem space
What do you think they don't understand?
> A base64 encoded Binary Plist format with one field containing a ProtoBuffer which contained another protobuffer which contained a unicode string which contained improperly encoded data (for example, U+2013 EN DASH was encoded as \342\200\223)
This could have been a simple JSON string.
There's nothing "simple" about parsing JSON as a serialization format.
> If they had 24 years to figure out how to do it properly, but they didn't, the technology is just dead.
But that's true for almost anything, though.
https://news.ycombinator.com/item?id=18188519 (299 comments)
https://news.ycombinator.com/item?id=21871514 (215 comments)
https://news.ycombinator.com/item?id=35281561 (59 comments)
Here's a fun one:
https://news.ycombinator.com/item?id=21873926
> A lot of people use ad-hoc protocols or HTTP/JSON, but I decided to try the nanopb library.
https://github.com/sbidy/pywizlight?tab=readme-ov-file#examp...
To learn and play with, it's fine; otherwise, why complicate life?
This is rage bait, not worth the read.
You can kinda see how this author got bounced out of several major tech firms in one year or less, each, according to their linkedin.