How do you solve the context propagation issue with eBPF based instrumentation?
E.g. if you get an RPC request coming in, and make an outgoing RPC request in order to serve it. The traced program needs to track some ID for that request from the time it comes in through to the point where the outgoing HTTP request goes out. And then that ID has to get injected into a header on the wire so the next program sees the same request ID.
IME that's where most of the overhead (and value) from a manual tracing library comes from.
100%. Context propagation is _the_ key to distributed tracing, otherwise you're only seeing one side of every transaction.
I was hoping odigos was language/runtime-agnostic since it's eBPF-based, but I see it's mentioned in the repo that it only supports:
> Java, Python, .NET, Node.js, and Go
Apart from Go (which is a WIP), these are the languages already supported by Otel's (non-eBPF-based) auto-instrumentation. Apart from a win on latency (which is nice, but could in theory be mitigated with sampling), why else go this route?
eBPF instrumentation does not require code changes, redeployment or restart to running applications.
We are constantly adding more language support for eBPF instrumentation and are aiming to cover the most popular programming languages soon.
Btw, I'm not sure that sampling is really the solution to combat overhead; after all, you probably do want that data. Trying to fix a production issue when the data you need is missing due to sampling is not fun.
It depends on the programming language being instrumented.
For Go we are assuming the context.Context object is passed around between different functions or goroutines.
For Java, we are using a combination of ThreadLocal tracing and Runnable tracing to support use cases like reactive and multithreaded applications.
The eBPF programs handle passing the context through requests by adding a field to the header, as you mentioned. The injected field follows the W3C Trace Context standard.
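For reference, the W3C Trace Context header the comment above refers to has a fixed, parseable shape. Here is a minimal sketch of building and parsing one in Go; the helper names are illustrative and are not Odigos or OpenTelemetry code.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"strings"
)

// buildTraceparent formats a W3C "traceparent" header: version "00",
// a 16-byte trace ID, an 8-byte span ID, and a flags byte.
// Helper names are illustrative, not the actual Odigos implementation.
func buildTraceparent(traceID [16]byte, spanID [8]byte, sampled bool) string {
	flags := "00"
	if sampled {
		flags = "01"
	}
	return fmt.Sprintf("00-%s-%s-%s",
		hex.EncodeToString(traceID[:]),
		hex.EncodeToString(spanID[:]),
		flags)
}

// parseTraceparent extracts the trace and span IDs so an outgoing
// request can reuse the incoming trace ID with a fresh span ID.
func parseTraceparent(h string) (traceID, spanID string, err error) {
	parts := strings.Split(h, "-")
	if len(parts) != 4 || len(parts[1]) != 32 || len(parts[2]) != 16 {
		return "", "", fmt.Errorf("malformed traceparent: %q", h)
	}
	return parts[1], parts[2], nil
}

func main() {
	var traceID [16]byte
	var spanID [8]byte
	rand.Read(traceID[:])
	rand.Read(spanID[:])

	h := buildTraceparent(traceID, spanID, true)
	fmt.Println(h) // e.g. 00-<32 hex chars>-<16 hex chars>-01

	tid, sid, _ := parseTraceparent(h)
	fmt.Println(tid == hex.EncodeToString(traceID[:]), sid == hex.EncodeToString(spanID[:]))
}
```

The point of the fixed format is exactly what the thread describes: the next service in the chain can read the trace ID out of the header without knowing anything about the caller's language or runtime.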
They don't really show any of the settings they used, but for traces, I imagine if you have a reasonable sampling rate, then you aren't going to be running any code for most requests, so it won't increase latency. (Looking at their chart, I guess they are sampling .1% of requests, since 99.9% is where latency starts increasing. I am not sure I would trace .1% of page loads to google.com, as their table implies. Rather, I'd pick something like 1 request per second, so that latency does not increase as load increases.)
A lot of Go metrics libraries, the Prometheus client in particular, introduce a lot of lock contention around incrementing metrics. This was unacceptably slow for our use case at work, and I ended up writing a metrics system that doesn't take any locks in most cases.
(There is the option to introduce a lock for metrics that are emitted on a timed basis; i.e. emit tx_bytes every 10s or 1MiB instead of at every Write() call. But this lock is not global to the program; it's unique to the metric and the key=value "fields" on the metric. So you can have a lot of metrics around and not contend on locks.)
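The hot-path idea described above can be sketched with `sync/atomic`: each metric owns its own counter, so incrementing is a single atomic add with no mutex shared across metrics. This is an illustration of the technique, not the actual Pachyderm code.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// counter is a lock-free metric: hot-path increments are one atomic
// add, and nothing is shared between different metrics. Illustrative
// only; not the actual Pachyderm implementation.
type counter struct {
	v atomic.Int64
}

func (c *counter) Add(n int64) { c.v.Add(n) }
func (c *counter) Load() int64 { return c.v.Load() }

func main() {
	var txBytes counter
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				txBytes.Add(1) // no lock taken on the hot path
			}
		}()
	}
	wg.Wait()
	fmt.Println(txBytes.Load()) // 8000
}
```

A timed emitter (the "every 10s or 1MiB" case) would then periodically `Load()` and log the value, which is where a per-metric lock could come in without ever becoming a global bottleneck.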
The metrics are then written to the log, which can be processed in real time to synthesize distributed traces and prometheus metrics, if you really want them: https://github.com/pachyderm/pachyderm/blob/master/src/inter... (Our software is self-hosted, and people don't have those systems set up, so we mostly consume metrics/traces in log form. When customers have problems, we prepare a debug bundle that is mostly just logs, and then we can further analyze the logs on our side to see event traces, metrics, etc.)
As for eBPF, that's something I've wanted to use to enrich logs with more system-level information, but most customers that run our software in production aren't allowed to run anything as root, and thus eBPF is unavailable to them. People will tolerate it for things like Cilium or whatever, but not for ordinary applications that users buy and request that their production team install for them. Production Linux at big companies is super locked down, it seems, much to my disappointment. (Personally, my threat model for Linux is that if you are running code on the machine, you probably have root through some yet-undiscovered kernel bug. Historically, I've been right. But that is not the big companies' security teams' mental model, it appears. They aren't paranoid enough to run each k8s pod in a hypervisor, but are paranoid enough to prevent using CAP_SYS_ADMIN or root.)
Thanks for the valuable feedback!
We used a constant throughput of 10,000 rps. The exact testing setup can be found under "how we tested".
I think the example you gave of the lock used by the Prometheus library is a great illustration of why generating traces/metrics is a good fit for offloading to a different process (an agent).
Pachyderm looks very interesting; however, I am not sure how you can generate distributed traces based on metrics. How do you fill in the missing context propagation?
Our way of dealing with eBPF's root requirements is to be as transparent as possible. This is why we donated the code to the CNCF and are developing it as part of the OpenTelemetry community. We hope that being open will earn users' trust. You can see the relevant code here: https://github.com/open-telemetry/opentelemetry-go-instrumen...
> I am not sure how you can generate distributed traces based on metrics
Every log line gets an x-request-id field, and when you combine the logs from the various components, you can see the propagation throughout our system. The request ID is a UUIDv4, but the mandatory '4' version nibble gets replaced with a digit that represents where the request came from: background task, web UI, CLI, etc. I didn't take the approach of creating a separate span ID to show sub-requests. Since you have all the logs, this extra piece of information isn't strictly necessary, though my coworkers have asked for it a few times because every other system has it.
Since metrics are also log lines, they get the request-id, so you can do really neat things like "show me when this particular download stalled" or "show me how much bandwidth we're using from the upstream S3 server". The aggregations can take place after the fact, since you have all the raw data in the logs.
If we were running this such that we tailed the logs and sent things to Jaeger/Prometheus, a lot of this data would have to go away for cardinality reasons. But squirreling the logs away safely, and then doing analysis after the fact when a problem is suspected ends up being pretty workable. (We still do have a Prometheus exporter not based on the logs, for customers that do want alerts. For log storage, we bundle Loki.)
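The request-ID scheme described above (UUIDv4 with the version nibble repurposed as an origin tag) can be sketched as follows; the specific origin digit codes are invented for illustration.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newRequestID generates a UUIDv4-shaped ID, then overwrites the
// version nibble (the mandatory '4') with a digit encoding the
// request's origin. The digit assignments here are hypothetical;
// the scheme, not the code, comes from the comment above.
func newRequestID(origin byte) string {
	var b [16]byte
	rand.Read(b[:])
	s := []byte(fmt.Sprintf("%s-%s-%s-%s-%s",
		hex.EncodeToString(b[0:4]),
		hex.EncodeToString(b[4:6]),
		hex.EncodeToString(b[6:8]),
		hex.EncodeToString(b[8:10]),
		hex.EncodeToString(b[10:16])))
	s[14] = origin // index 14 is the UUIDv4 version nibble
	return string(s)
}

func main() {
	// Hypothetical origin codes: '1' = web UI, '2' = CLI, '3' = background task.
	fmt.Println(newRequestID('2'))
}
```

Stamping the origin into the ID itself means every downstream log line carries it for free, with no extra field to propagate.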
In an age of supply-chain-attack wariness and generally skyrocketing risk, it is a bit funny seeing so many observability vendors wanting you to give them kernel-mode access. And it's sad that the apps most in need of automatic instrumentation are "frozen," rarely developed or updated apps at critical institutions like banks.
As for the original post, OpenTelemetry is forced to be relatively slow because of the huge number of semantic conventions that are meant to make the data more useful. I won't go into the legitimacy of that, but while I haven't been able to verify the data this solution records, it is very unlikely to be recording as much information. Manual instrumentation would never lose to eBPF in principle, at least in a compiled language like Go, but eBPF does have great potential to perform better than OTel while recording far less data. Then comes a blog post, users handing over the keys to their kernel, and data ending up in the hands of an enemy state. I doubt that's the case this time, but it's only a matter of time.
Banking apps if you see this, please just instrument your code. Thank you.
Somewhat related: I mainly code in Kotlin. Adding OpenTelemetry was just adding an agent to the command-line args (the usual Java/JVM magic most people don't like). Then I had a project in Go, got so tired of all the steps it took (setup and ensuring each context is instrumented), and just gave up. We still add manual instrumentation for customization, but auto-instrumentation made adoption much easier on day 0.
I think eBPF also has great potential to help JVM-based languages, especially on the performance side, even compared to the current Java agents, which use bytecode manipulation.
The article mentions avoiding GC pressure and separating recording from processing as big performance wins for runtimes like Java, but you could do the same inside Java by using a ring buffer, no?
Can we add manual spans (at the service level) as part of the automated traces created by this eBPF auto-instrumentation approach? I.e., can we access the context in a running application, or will there be some sort of "traceparent" header present in the incoming request?
The column in the table claiming the "number of page loads that would experience the 99th %ile" is mathematically suspect. It directly contradicts what a percentile is.
By definition, at 99th percentile, if I have 100 page loads, the one with the worst latency would be over the 99th percentile. That's not 85.2%, 87.1%, 67.6%, etc. The formula shown in that column makes no sense at all.
That's not what that column is supposed to mean afaict. The way I read it is it's showing that if the website requires hundreds of different parallel backend service calls to serve the page load, what's the probability a page load hits the p99 instrumentation latency?
We have a similar chart at my job to illustrate the point that high p99 latency on a backend service doesn't mean only 1% of end-user page loads are affected.
Ah, I see. So, for example, if one page request results in 190 different backend requests, then the probability that at least one of those subrequests exceeds the 99th percentile would be 85.2%. That makes a lot more sense.
How hard is it to use Odigos without k8s?
We mainly use docker compose for our deployments (because it's convenient, and we don't need scale), but I'm having trouble finding anything in the documentation that explains the mechanism for hooking into the container (and hence I have no clue how to repurpose it).
Could you please elaborate on a few more details about your benchmark?
- Did you measure the CPU usage of the eBPF agent?
- How does Odigos handle eBPF's perfmap overflow, and did you measure any lost events between the kernel and the agent?