For our team, it is very simple:
* we use a library to send traces, and traces only[0]. They bring the most value for observing applications and can contain all the data the other signal types can carry. Basically hash-maps vs strings and floats.
* we use manual instrumentation as opposed to automatic - we are deliberate about what we observe and have a great understanding of what emits the spans. We have naming conventions that match our code organization.
* we use two different backends - an affordable 3rd party service and an all-in-one Jaeger install (just run one executable or Docker container) that doesn't save the spans to disk, for local development. The second is mostly for the peace of mind of team members that they are not going to flood the third party service.
[0] We have a pre-existing setup to monitor infrastructure, and in our case we don't see a lot of value in ingesting all the infrastructure logs and metrics. I think it is early days for OTel metrics and logs, but the vendors won't tell you this.
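To make the "spans are basically hash-maps" point concrete, here is a toy sketch in plain Python - not the real OpenTelemetry API, and the `billing.invoices.render` name and its attributes are made up - of manual instrumentation where the span name follows code organization (package.module.function):

```python
import contextlib
import time
import uuid


class Span(dict):
    """A span is essentially a hash-map: it can carry the float a metric
    would hold and the message a log line would hold, plus timing."""


@contextlib.contextmanager
def span(name, **attributes):
    # Hypothetical helper; a real tracer would come from your tracing library.
    s = Span(name=name, span_id=uuid.uuid4().hex[:16], **attributes)
    start = time.monotonic()
    try:
        yield s
    finally:
        s["duration_ms"] = (time.monotonic() - start) * 1000.0
        print(s)  # a real exporter would batch and ship these to a backend


# Naming convention mirrors the code layout, so a span name immediately
# tells you which function emitted it.
with span("billing.invoices.render", invoice_id="inv-42") as s:
    s["line_items"] = 3           # what a metric would carry
    s["message"] = "rendered ok"  # what a log line would carry
```

Because a span is just a bag of attributes with a duration, anything you would have put in a counter or a log line fits inside it, which is the argument for traces-only.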
The regex is dead simple: /Authorization: Basic (.*)\ngrant_type=refresh_token/. Since "." does not match newline, I'm basically looking for two adjacent lines that conform to a template.
Specific cases can be handled with some grep/awk magic, but IMO the concept of pattern matching against a stream is interesting regardless.
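A minimal sketch of the two-line match in Python (the request body here is made-up sample data): in the default regex mode "." cannot cross "\n", so the capture group is confined to the first line and the pattern only fires when the two lines are adjacent.

```python
import re

# No re.DOTALL flag, so "." stops at "\n": the match is forced to span
# exactly two consecutive lines that fit the template.
pattern = re.compile(r"Authorization: Basic (.*)\ngrant_type=refresh_token")

stream = (
    "POST /token HTTP/1.1\n"
    "Authorization: Basic dXNlcjpwYXNz\n"
    "grant_type=refresh_token\n"
)

m = pattern.search(stream)
print(m.group(1))  # -> dXNlcjpwYXNz (the captured Basic credentials)
```

If anything sits between the two lines, the search returns nothing, which is exactly the "template" behavior described above.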
These Thelio desktops are in-house, and they are way better than the laptops.
1 37.4
2 73.3
3 107.3
4 141.4
5 171.6
6 199.5
7 226.0
8 251.1
9 235.4
10 243.0
11 264.5
12 281.9
13 303.7
14 323.0
15 339.6
16 354.4
17 299.0
18 286.3
19 300.9
20 310.6
21 325.7
22 339.2
23 352.6
24 364.3
25 305.8
26 309.0
27 319.6
28 326.5
29 335.4
30 345.5
31 356.7
32 364.9
And then it settles around there.