tstack · 4 months ago
Nice work! The TUI looks really sharp and I like the histogram on top. Going to play with this today.

TIL awk patterns can be more than just regexes and can be combined with boolean operators. I've written a bit of awk and never realized this.
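For anyone else who hadn't realized this either, a quick illustration. The data here is made up, but the pattern syntax is standard POSIX awk:

```shell
# A pattern can be any expression, and patterns compose with && / || / !.
# Select lines 2 through 4 by line number alone (no regex at all):
printf 'a\nb\nc\nd\ne\n' | awk 'NR >= 2 && NR <= 4'

# Combine a regex match with a numeric field test:
printf 'err 5\nerr 1\nok 9\n' | awk '/err/ && $2 > 3 { print $1 }'
```

The first prints b, c, d; the second prints only `err` for the line whose second field exceeds 3.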

philsnow · 4 months ago
There's a ton of overlap between "powerful stuff that's easy to write in (g)awk" and "powerful stuff that's trivial to write in perl", but I find awk to be succinct without losing readability.
piterrro · 4 months ago
This is a really cool project; I love the simplicity and the TUI approach, especially the timeline histogram and remote-first design. I had similar pain points on my end when dealing with logs across many hosts, which led me to build Logdy[1], a tool with a slightly different philosophy.

Logdy is more web-based and focuses on live tailing, structured log search, and quick filtering across multiple sources, also without requiring a centralized server. Not trying to compare directly, but if you're exploring this space, you might find it useful as a complementary approach or for different scenarios. Although we still need to work on adding the ability to query multiple hosts.

Anyway, kudos on nerdlog—always great to see lean tools in the logging space that don’t require spinning up half a dozen services.

[1] https://logdy.dev

Zopieux · 4 months ago
journalctl is mentioned once on the landing page, and it seems to imply that journalctl is not supported per se, as logs need to be stored in plaintext via legacy syslog (?).

I do not want to store plaintext logs and use ancient workarounds like logrotate. journald itself has the built-in ability to receive logs from remote hosts (journald remote & gateway) and search them using --merge.
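For reference, the journald-native setup I mean looks roughly like this. This is a sketch, not a tested config: binary paths and package names vary by distro, and `collector.example.com` is a placeholder:

```shell
# On the collector: accept journal entries pushed over HTTPS.
/usr/lib/systemd/systemd-journal-remote --listen-https=0.0.0.0:19532 \
    --output=/var/log/journal/remote/

# On each sender: upload the local journal to the collector.
/usr/lib/systemd/systemd-journal-upload --url=https://collector.example.com:19532

# Then query local and remote journals together:
journalctl --merge --since "1 hour ago" --grep "some pattern"
```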

dimonomid · 4 months ago
That's true: as of today, nerdlog doesn't use journalctl and needs plain log files. There were a few reasons for that, primarily related to the sheer amount of logs we were dealing with.

As mentioned in the article, my original use case was having a fleet of hosts, each printing a pretty sizeable amount of logs; e.g. having a log file of more than 1-2GB per host on a single day was pretty common. My biggest problem with journalctl is that, during intensive spikes of logs, it might drop messages: we were definitely observing that some messages were clearly missing from the journalctl output, but when we checked the plain log files, the messages were there. I don't remember the details now, but I've read about some kind of rate limiting / buffer overflow going on there (and somehow the part which writes to the files enjoys not having these limits, or at least has more permissive ones). So that's the primary reason; I definitely didn't want to deal with missing logs. Somehow, old-school technology like plain log files keeps being more reliable.
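If I remember correctly, the relevant knobs live in `/etc/systemd/journald.conf`; the values below are just illustrative, not recommendations:

```ini
# /etc/systemd/journald.conf (illustrative values)
# Messages beyond RateLimitBurst within one RateLimitIntervalSec window
# are dropped for that service.
[Journal]
RateLimitIntervalSec=30s
RateLimitBurst=10000
```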

Second, at least back then, journalctl was noticeably slower than simply using tail+head hacks to "select" the requested time range.
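Roughly the kind of trick I mean: an illustration assuming sortable (ISO-8601-style) timestamps at the start of each line, with a hypothetical file path; this is not nerdlog's actual code:

```shell
# Select a time range from a plain log file; exit as soon as we pass the
# end of the range, so the rest of the file is never read.
awk -v from="2024-05-04T10:00" -v to="2024-05-04T11:00" '
    $1 >= to   { exit }
    $1 >= from { print }
' /var/log/app.log
```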

Third, a dependency like journalctl is just harder to test against than plain log files.

Lastly, I wanted to be able to use any log files, not necessarily controlled by journalctl.

I think adding support for journalctl should be possible, but I still have doubts about whether it's worth it. You mention that you don't want to store plaintext logs and use logrotate, but is it painful to simply install rsyslog? I think it takes care of all this without us having to worry about it.

lenova · 4 months ago
I appreciate this response, and want to say I really like your tool's UI over something like lazyjournal. But like the commenter above, I would love to see journald support as well, just because it's the default these days on the distros I use, and seems to be the direction the Linux ecosystem has headed.
whalesalad · 4 months ago
can't you just read from stdin?

i use lnav in this way all the time: journalctl -f -u service | lnav

this is the ethos of unix tooling

tstack · 4 months ago
The article makes it sound like it uses various command-line tools (bash/awk/head/tail) to process the logs. So, I imagine it's not a huge leap to extend support to using journalctl to do that work instead.
mamcx · 4 months ago
One small hitch I found is that these kinds of tools are fixed in what they process, so for example I can't use them for structured logging. If there were an escape hatch where I could supply my own pipe (for example `process = 'vector ....'`), that would be enough.
nodesocket · 4 months ago
Very nice work. Any way to specify a group of log files in the config that is shared across many hosts? For example:

  log_files:
    mygroup:
      - /var/log/syslog
      - /var/log/foo
      - /var/log/bar
  log_streams:
    myhost-01:
      hostname: actualhost1.com
      port: 1234
      user: myuser
      log_files: mygroup
    myhost-02:
      hostname: actualhost2.com
      port: 7890
      user: myuser
      log_files: mygroup
    myhost-03:
      hostname: actualhost3.com
      port: 8888
      user: myuser
      log_files: mygroup

dimonomid · 4 months ago
Thanks. And no, as of today, there's no way to define a group like that. Might be a viable idea though.

However, before we go there, I want to double check that we're on the same page: this `log_files` field specifies only files _in the same logstream_; meaning, these files need to have consecutive logs. So for example, it can be ["/var/log/syslog", "/var/log/syslog.1"], or it can be ["/var/log/auth.log", "/var/log/auth.log.1"], but it can NOT be something like ["/var/log/syslog", "/var/log/auth.log"].

mdaniel · 4 months ago
At the very grave risk of scope creep, I'll point out that the GP's yaml is very close to an Ansible inventory file, so rather than making up a new structure one could leverage existing muscle memory (and create helpful defaults for folks who have not yet seen Ansible but have seen your tool).

https://docs.ansible.com/ansible/11/collections/ansible/buil...

e.g.

  all:
    children:
      mygroup:
        hosts:
          myhost-01:
            hostname: actualhost1.com
            port: 1234
            user: myuser
          myhost-02:
            hostname: actualhost2.com
            port: 7890
            user: myuser
          myhost-03:
            hostname: actualhost3.com
            port: 8888
            user: myuser
        vars:
          files:
          - /var/log/syslog
          - /var/log/foo
          - /var/log/bar
That first "children" key is because in ansible's world one can have "vars" and "hosts" that exist at the very top, too; the top-level "vars" would propagate down to all hosts which one can view as "not necessary" in the GP's example, or "useful" if those files are always the same for every single host in the whole collection. Same-same for the "user:" but I wasn't trying to get bogged down in the DRY for this exercise

ryanhecht · 4 months ago
I definitely intend on playing around with this later! I see that [gzipped log archives aren't supported](https://dmitryfrank.com/projects/nerdlog/article#depends_on_...), minimizing the use case for me personally. You've at least thought enough about that to bring it up as a limitation you think people will call attention to -- any plans to eventually support it?
dimonomid · 4 months ago
Thanks for the feedback!

Yeah, it would be great, and I do want to support it, especially if there's enough demand. In fact, even if you ungzip them manually, as of today nerdlog doesn't support more than 2 files in a logstream, which needs to be fixed first.

Specifically about supporting gzipped logs, though, the UX I'm thinking of is like this: if the requested time range goes beyond the earliest available ungzipped file, then warn the user that we'll have to ungzip the next file (that warning can be turned off in the options, but by default I don't want to just ungzip silently, because it can consume a significant amount of disk space). So if the user agrees, nerdlog ungzips it and places it somewhere under tmp. It'll never delete it itself though, relying on the regular OS means of cleaning up /tmp, and it will keep using the file for as long as it's available.
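In shell terms, the flow would be something like this (paths are hypothetical; I'm just sketching the idea):

```shell
src=/var/log/syslog.2.gz
dst=/tmp/nerdlog-cache/syslog.2

# Decompress once, after the user confirms; reuse the cached copy on
# later queries for as long as the OS hasn't cleaned /tmp yet.
mkdir -p "$(dirname "$dst")"
[ -e "$dst" ] || gzip -dc "$src" > "$dst"
```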

Does it make sense?

ryanhecht · 4 months ago
Definitely makes sense!

> In fact, even if you ungzip them manually, as of today nerdlog doesn't support more than 2 files in a logstream

Ah, interesting! I read the limitation as "we don't support zipped files," not "we only support two files!"

Best of luck, this is neat!

dloss · 4 months ago
Very nice! Added to my little list of log viewers at https://github.com/dloss/klp#alternative-tools
dimonomid · 4 months ago
Thanks, appreciate that!
esafak · 4 months ago
Looks nice. You might want to get help from the community to get it packaged for major Linux distros if you want more users.
dimonomid · 4 months ago
Thanks, and yeah getting distro packages would be dope. Hopefully, some day.
knowitnone · 4 months ago
Nice. I needed this a few years ago. No license file?
dimonomid · 4 months ago
Yeah right, my bad, and thanks for reminding me. Just added one (the BSD 2-clause).