It has the packet header, exactly the code part that directs the traffic. In reality, everything has a "code" part and a separation for understanding. In language, we have spaces and question marks in text. This is why it’s so important to see the person when communicating, Sound alone might not be enough to fully understand the other side.
What we call code, and what we call data, is just a question of convenience. For example, when editing or copying WMF files, it's convenient to think of them as data (mix of raster and vector graphics) - however, at least in the original implementation, what those files were was a list of API calls to Windows GDI module.
Or, more straightforwardly, a file with code for an interpreted language is data when you're writing it, but is code when you feed it to eval(). SQL injections and buffer overruns are a classic examples of what we thought was data being suddenly executed as code. And so on[0].
Most of the time, we roughly agree on the separation of what we treat as "data" and what we treat as "code"; we then end up building systems constrained in a way as to enforce the separation[1]. But it's always the case that this separation is artificial; it's an arbitrary set of constraints that make a system less general-purpose, and it only exists within domain of that system. Go one level of abstraction up, the distinction disappears.
There is no separation of code and data on the wire - everything is a stream of bytes. There isn't one in electronics either - everything is signals going down the wires.
Humans don't have this separation either. And systems designed to mimic human generality - such as LLMs - by their very nature also cannot have it. You can introduce such distinction (or "separate channels", which is the same thing), but that is a constraint that reduces generality.
Even worse, what people really want with LLMs isn't "separation of code vs. data" - what they want is for LLM to be able to divine which part of the input the user would have wanted - retroactively - to be treated as trusted. It's unsolvable in general, and in terms of humans, a solution would require superhuman intelligence.
--
[0] - One of these days I'll compile a list of go-to examples, so I don't have to think of them each time I write a comment like this. One example I still need to pick will be one that shows how "data" gradually becomes "code" with no obvious switch-over point. I'm sure everyone here can think of some.
[1] - The field of "langsec" can be described as a systematized approach of designing in a code/data separation, in a way that prevents accidental or malicious misinterpretation of one as the other.
It has the packet header, exactly the code part that directs the traffic. In reality, everything has a "code" part and a separation for understanding. In language, we have spaces and question marks in text. This is why it’s so important to see the person when communicating, Sound alone might not be enough to fully understand the other side.
K8s and Lambda serve different scopes and use cases. You can adopt a Lambda-style architecture using tools like Fargate. But if a company has already committed to a k8s, and this direction has been approved by engineering leadership, then pushing a completely different serverless model without alignment is a recipe for friction.
IHMO, the author seems to genuinely want to try something new and that’s great. But they may have overlooked how their company’s architecture and team dynamics were already structured. What comes across in the post isn’t just a technical argument — it reads like venting frustration after failing to get buy-in.
I’ve worked with “Lambda-style” architectures myself. And yes, while there are limitations (layer size, deployment package limits, cold starts), the real charm of serverless is how close it feels to the old CGI-bin days: write your code, upload it, and let it run. But of course, that comes with new challenges: observability, startup latency, vendor lock-in, etc...
On the other side, the engineer in this story could have been more constructive. There’s clearly a desire from the dev team to experiment with newer tools. Sometimes, the dev just wants to try out that “cool shiny thing” in a staging environment — and that should be welcomed, not immediately shut down.
The biggest problem I see here is culture. The author wanted to innovate, but did it by diminishing the current status quo. The engineer felt attacked, and the conversation devolved into ego clashes. When DevOps loses the trust of developers, it creates long-term instability and resentment within teams.
Interestingly, K8S itself was born from that very tension. If you read Beautiful Code or the original Borg paper (which inspired), you’ll see it was designed to abstract complexity away from developers — not dump it on their heads in YAML format.
At the end of the day, this shouldn’t be a religious debate. Good architecture comes from understanding context, constraints, and cooperation, not just cool tech.
What you want to do is offer resources that make you money when they're "exploited".
look how all companies have super system for crm/sales but when you go to backoffice all run in sheets and sometimes in real paper.
The pocket chip it been revolutionary but waiting 2 years to receive lost all the time to been completive with other plataforms