shcallaway commented on The Sazabi Manifesto   sazabi.com/manifesto... · Posted by u/shcallaway
shcallaway · 8 days ago
AI has changed what's possible in software systems. Observability hasn't caught up. This manifesto lays out a path forward.
shcallaway commented on Observability's past, present, and future   blog.sherwoodcallaway.com... · Posted by u/shcallaway
buchanae · a month ago
I share a lot of this sentiment, although I struggle more with the setup and maintenance than the diagnosis.

It's baffling to me that it can still take _so_much_work_ to set up a good baseline of observability (not to mention the time we spend on tweaking alerting). I recently spent an inordinate amount of time trying to make sense of our telemetry setup and fill in the gaps. It took weeks. We had data in many systems, many different instrumentation frameworks (all stepping on each other), noisy alerts, etc.

Part of my problem is that the ecosystem is big. There's too much to learn: OpenTelemetry, OpenTracing, Zipkin, Micrometer, eBPF, auto-instrumentation, OTel SDK vs Datadog Agent, and on and on. I don't know, maybe I'm biased by the JVM-heavy systems I've been working in.
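
For concreteness, here's roughly what a minimal tracing "baseline" looks like with just the OTel Python SDK (a hedged sketch, not a recommendation; the service and span names are hypothetical):

    # Minimal OpenTelemetry setup: provider -> processor -> exporter -> spans.
    # Requires the opentelemetry-sdk package.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))  # swap for an OTLP exporter in practice
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # hypothetical service name

    with tracer.start_as_current_span("charge-card") as span:  # hypothetical operation
        span.set_attribute("order.id", "ord-123")  # illustrative attribute

And that's just traces from one service in one language, before you get to sampling, metrics, logs, collectors, and whichever agent sits in front of the backend.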

I worked for New Relic for years, and even in an observability company, it was still a lot of work to maintain, and even then traces were not heavily used.

I can definitely imagine having Claude debug an issue faster than I can type and click around dashboards and query UIs. That sounds fun.

shcallaway · a month ago
I completely agree w/ your points about why observability sucks:

- Too much setup
- Too much maintenance
- Too steep of a learning curve

This isn't the whole picture, but it's a huge part of it. IMO, observability shouldn't be so complex that it warrants specialized experience; it should be something that any junior product engineer can do on their own.

> I can definitely imagine having Claude debug an issue faster than I can type and click around dashboards and query UIs. That sounds fun.

Working on it :)

shcallaway commented on Observability's past, present, and future   blog.sherwoodcallaway.com... · Posted by u/shcallaway
vrnvu · a month ago
First. Love that more tools like Honeycomb (amazing) are popping up in the space. I agree with the post.

But. IMO, statistics and probability can’t be replaced with tooling, just as software engineering can’t be replaced by no-code services for building applications…

If you need to profile a bug or troubleshoot complex systems (distributed systems, databases), you must do your math homework consistently as part of the job.

If you don’t comprehend the distribution of your data, the seasonality, noise vs. signal, how can you measure anything valuable? How can you ask the right questions?

shcallaway · a month ago
Vibe-coders don't comprehend how the code works, yet they can create impressive apps that are completely functional.

I don't see why the same can't be true for "vibe-fixers" and their data (telemetry).

shcallaway commented on Observability's past, present, and future   blog.sherwoodcallaway.com... · Posted by u/shcallaway
esafak · a month ago
We need more automation. Less data, more insight. We're at the firehose stage, and nobody's got time for that. ML-based anomaly detection is not widespread and automated RCA barely exists. We'll have solved the problem when AI detects the problem and submits the bug fix before the engineers wake up.
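
To make "ML-based anomaly detection" concrete, even the crudest non-ML version is a sketch like this (all numbers and the 3-sigma threshold are hypothetical; real series need seasonality-aware baselines):

    from statistics import mean, stdev

    # Hypothetical p99 latency samples (ms); the 480 ms spike is the "anomaly".
    latency_ms = [120, 118, 125, 119, 122, 121, 480, 123, 117]

    baseline = latency_ms[:6]                 # pretend the first six points are normal
    mu, sigma = mean(baseline), stdev(baseline)

    for i, x in enumerate(latency_ms[6:], start=6):
        z = (x - mu) / sigma                  # distance from baseline in standard deviations
        if abs(z) > 3:
            print(f"sample {i}: {x} ms looks anomalous (z={z:.1f})")

The hard part isn't this arithmetic; it's running something like it (or a real model) across thousands of noisy, seasonal series and then connecting the alert to a cause, which is exactly the automation gap.
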
shcallaway · a month ago
You're so right.

> We'll have solved the problem when AI detects the problem and submits the bug fix before the engineers wake up.

Working on it :)

shcallaway commented on Observability's past, present, and future   blog.sherwoodcallaway.com... · Posted by u/shcallaway
Veserv · a month ago
Of course that sucks. Just enable full time-travel recording in production, and then you can use a standard multi-program trace visualizer and time-travel debugger to identify the exact execution down to the instruction and precisely pinpoint root causes in the code.

Everything is then instrumented automatically and exhaustively analyzable using standard tools. At most you might need to add in some manual instrumentation to indicate semantic layers, but even that can frequently be done after the fact with automated search and annotation on the full (instruction-level) recording.

shcallaway · a month ago
You're not the first person I've met who has articulated an idea like this. It sounds amazing. Do you have a sense of why this approach isn't more broadly popular?
shcallaway commented on Observability's past, present, and future   blog.sherwoodcallaway.com... · Posted by u/shcallaway
camel_gopher · a month ago
2006 - Bryan Cantrill publishes this work on software observability https://queue.acm.org/detail.cfm?id=1117401

2015 - Ben Sigelman (one of the Dapper folks) cofounds Lightstep

shcallaway · a month ago
These are great. I should have included them in my timeline!

Huge fan of historical artifacts like Cantrill's ACM paper

shcallaway commented on Observability's past, present, and future   blog.sherwoodcallaway.com... · Posted by u/shcallaway
tech_ken · a month ago
> Observability made us very good at producing signals, but only slightly better at what comes after: interpreting them, generating insights, and translating those insights into reliability.

I'm a data professional who's kind of SRE-adjacent for a big corpo's infra arm, and wow does this post ring true for me. I'm tempted to just say "well duh, producing telemetry was always the low-hanging fruit, it's the 'generating insights' part that's truly hard", but I think that's too pithy. My more reflective take is that generating reliability from data lives in a weird hybrid space of domain knowledge and data management, and most orgs' headcount strategies don't account for this. SWEs pretend that data scientists are just SQL jockeys minutes from being replaced by an LLM agent; data scientists pretend that stats is the only "hard" thing and all domain knowledge can be learned with sufficient motivation and documentation. In reality I think both are equally hard, it's rare to find someone who can do both, and doing both is really what's required for true "observability".

At a high level I'd say there are three big areas where orgs (or at least my org) tend to fall short:

* extremely sound data engineering and org-wide normalization (to support correlating diverse signals from highly disparate sources during root-cause analysis; a sketch of what this can look like follows the list)

* telemetry that's truly capable of capturing the problem (i.e., it's not helpful to monitor disk usage if CPU is the bottleneck)

* true 'sleuths' who understand how to leverage the first two things to produce insights, and have the org-wide clout to get those insights turned into action
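
One hedged sketch of what the first bullet can look like in practice (field names and services are hypothetical): every team emits the same concept under a different shape, and "org-wide normalization" is the unglamorous work of mapping them onto one schema so signals can be correlated at all.

    # Hypothetical: three teams report latency under different field names and units.
    RAW_EVENTS = [
        {"svc": "payments-api", "env": "prod", "lat_ms": 480},        # team A's shape
        {"service": "cart", "environment": "prod", "latency": 0.12},  # team B's shape (seconds)
        {"app": "search", "stage": "prod", "duration_ms": 95},        # team C's shape
    ]

    def normalize(event: dict) -> dict:
        # Map disparate event shapes onto one org-wide schema.
        service = event.get("svc") or event.get("service") or event.get("app")
        env = event.get("env") or event.get("environment") or event.get("stage")
        if "latency" in event:
            latency_ms = event["latency"] * 1000      # convert seconds to milliseconds
        else:
            latency_ms = event.get("lat_ms") or event.get("duration_ms")
        return {"service.name": service, "deployment.environment": env, "latency_ms": latency_ms}

    for e in RAW_EVENTS:
        print(normalize(e))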

I think most orgs tend to pick two of these, and cheap out on the third, and the result is what you describe in your post. Maybe they have some rockstar engineers who understand how to overcome the data ecosystem shortcomings to produce a root-cause analysis, or maybe they pay through the nose for some telemetry/dashboard platform that they then hand over to contract workers who brute-force reliability through tons of work hours. Even when they do create dedicated reliability teams, it seems like they are more often than not hamstrung by not having any leverage with the people who actually build the product. And when everything is a distributed system it might actually be 5 or 6 teams who you have no leverage with, so even if you win over 1 or 2 critical POCs you're left with an incomplete patchwork of telemetry systems which meet the owning team's (teams') needs and nothing else.

All this to say that I think reliability is still ultimately an incentive problem. You can have the best observability tooling in the world, but if you don't have folks at every level of the org who (a) understand what 'reliable' concretely looks like for your product and (b) have the power to effect the necessary changes, then you're going to get a lot of churn with little benefit.

shcallaway · a month ago
This is a super insightful comment & there's a bunch I want to respond to, but I can't do it all neatly in one comment. Hahaha

I'll choose this point:

> reliability is still ultimately an incentive problem

This is a fascinating argument and it feels true.

Think about it. Why do companies give a shit about reliability at all? They only care b/c it impacts the bottom line. If the app is "reliable enough" such that customers aren't complaining and churning, it makes sense that the company would not make further investments in reliability.

This same logic is true at all levels of the organization, but the signal gets weaker as you go down the chain. A department cares about reliability b/c it impacts the bottom line of the org, but that signal (revenue) is not directly attributable to the department. This is even more true for a team, or an individual.

I think SLOs are, to some extent, a mechanism that is designed to mitigate this problem; they serve as stronger incentive signals for departments and teams.
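
For a concrete (and entirely hypothetical) version of that signal: an SLO plus an error budget turns "reliability matters" into a number a single team can burn down or defend.

    # Hypothetical SLO: 99.9% of requests succeed over a 30-day window.
    slo_target = 0.999
    window_days = 30
    total_requests = 10_000_000   # hypothetical traffic for the window
    failed_so_far = 6_500         # hypothetical failures observed so far

    error_budget = (1 - slo_target) * total_requests   # 10,000 allowed failures
    remaining = error_budget - failed_so_far

    print(f"Error budget for the {window_days}-day window: {error_budget:.0f} failed requests")
    print(f"Remaining: {remaining:.0f} ({remaining / error_budget:.0%} of the budget left)")

The usual convention is that a team with budget to spare keeps shipping, while a team that has burned through it trades feature work for reliability work; that's the stronger, more local incentive signal.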

shcallaway commented on Observability's past, present, and future   blog.sherwoodcallaway.com... · Posted by u/shcallaway
antsou · a month ago
Observability and APM are way, way older than depicted in this simplistic post!
shcallaway · a month ago
Hello! Yes, you are right - observability and APM have both been around for many decades, but the incarnations that most people are familiar with are the ones that emerged in the 2010s.

My intention wasn't for this post to be a comprehensive historical record. That would have taken many more words & would have put everyone to sleep. My goal was to unpack and analyze _modern observability_ - the version that we are all dealing w/ today.

Good point though!
