qlog uses an inverted index (like search engines) to search millions of log lines in milliseconds. It's 10-100x faster than grep and way simpler than setting up Elasticsearch.
Features:
- Lightning-fast indexing (1M+ lines/sec using mmap)
- Sub-millisecond searches on indexed data
- Beautiful terminal output with context lines
- Auto-detects JSON, syslog, nginx, and apache formats
- Zero configuration
- Works offline
- Pure Python
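To make the "inverted index" idea concrete, here is a minimal sketch of the technique: map each token to the line numbers it appears on, so repeated searches are set lookups instead of full rescans. This is an illustration of the general approach, not qlog's actual implementation (names like `build_index` and `search` are made up for this example).

```python
# Toy inverted index over log lines: token -> line numbers.
# Repeated queries hit the index; the raw text is never rescanned.
import re
from collections import defaultdict

TOKEN = re.compile(r"[A-Za-z0-9_]+")

def build_index(lines):
    """Map each lowercase token to a sorted list of line numbers."""
    index = defaultdict(set)
    for lineno, line in enumerate(lines):
        for tok in TOKEN.findall(line.lower()):
            index[tok].add(lineno)
    return {tok: sorted(nums) for tok, nums in index.items()}

def search(index, lines, query):
    """Return lines containing every token in the query (AND semantics)."""
    tokens = TOKEN.findall(query.lower())
    if not tokens:
        return []
    hits = set(index.get(tokens[0], []))
    for tok in tokens[1:]:
        hits &= set(index.get(tok, []))
    return [lines[n] for n in sorted(hits)]

logs = [
    "2024-05-01 12:00:01 INFO request ok",
    "2024-05-01 12:00:02 ERROR connection refused",
    "2024-05-01 12:00:03 ERROR timeout talking to db",
]
idx = build_index(logs)
print(search(idx, logs, "error timeout"))
```

Building the index costs one pass over the data; after that, every query is an intersection of small posting lists, which is why indexed search beats re-grepping for repeated queries.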
Example:

    qlog index './logs/*/*.log'
    qlog search "error" --context 3
In my tests on 10GB of logs, an indexed search came back roughly 3750x faster than re-running grep, since the index is stored locally and repeated searches never rescan the raw files.
Demo: Run `bash examples/demo.sh` to see it in action.
GitHub: https://github.com/Cosm00/qlog
Perfect for developers/DevOps folks who search logs daily.
Happy to answer questions!
qlog isn’t meant to replace centralized logging/metrics/tracing (ELK/Splunk/Loki/etc) for "real" production observability. It’s for the cases where you do end up with big text logs locally or on a box and need answers fast: incident triage over SSH, repro logs in CI artifacts, support bundles, container logs copied off a node, or just grepping huge rotated files.
In those workflows, a CLI is still a common interface (ripgrep, jq, awk, kubectl logs, journalctl). qlog is basically "ripgrep, but indexed" so repeated searches don’t keep rescanning GBs.
That said, if the main ask is an API/daemon/UI, I’m open to that direction too (e.g. emit JSON for piping, or a small HTTP wrapper around the index/search). Curious what tooling you do reach for in your day-to-day?
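As a sketch of what that HTTP wrapper could look like, here is a tiny stdlib-only JSON search endpoint. This is purely hypothetical: `run_search` is a stand-in that substring-matches a toy list, where a real version would query qlog's on-disk index.

```python
# Hypothetical HTTP wrapper: GET /search?q=... returns JSON hits.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

LOGS = [
    "ERROR connection refused",
    "INFO request ok",
    "ERROR timeout",
]

def run_search(query):
    # Stand-in for an indexed search; just substring-matches here.
    return [line for line in LOGS if query.lower() in line.lower()]

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        query = params.get("q", [""])[0]
        body = json.dumps({"query": query, "hits": run_search(query)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Start on an ephemeral port, make one request, shut down.
server = HTTPServer(("127.0.0.1", 0), SearchHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/search?q=error") as resp:
    hits = json.load(resp)["hits"]
server.shutdown()
print(hits)
```

The same shape works for the "emit JSON for piping" idea: the handler's response body is exactly what a `--json` output mode would print to stdout.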
In both of those cases, setting up qlog is simpler than standing up Elasticsearch or another remote search index.
(That is, modern o11y as typically viewed through Grafana, where in practice you need more than just logs.)
Access logs were one of the main motivations (lots of repeated queries like IP/user-agent/path/status). If you try it, two tips:
1) Index once, then iterate on searches:

    qlog index './access*.log'
    qlog search 'status=403'
2) If you’re hunting patterns (e.g. suspicious UAs or a specific path), qlog really shines because it doesn’t have to rescan the whole file on each query.
If you run into anything weird with common log formats (nginx/apache variants), feel free to paste a few sample lines and I’ll make the parser more robust.
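For reference, parsing the standard nginx/apache "combined" access log format is mostly one regex. Below is a minimal sketch (my own illustration, not qlog's parser) that pulls out the fields people query most, like IP, path, status, and user-agent:

```python
# Parse a line in the common "combined" access log format:
# ip - user [time] "method path proto" status size "referer" "user-agent"
import re

COMBINED = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
    r'(?: "(?P<referer>[^"]*)" "(?P<agent>[^"]*)")?'
)

def parse_access_line(line):
    """Return a dict of fields, or None if the line doesn't match."""
    m = COMBINED.match(line)
    return m.groupdict() if m else None

line = ('203.0.113.9 - - [01/May/2024:12:00:00 +0000] '
        '"GET /admin HTTP/1.1" 403 162 "-" "curl/8.4.0"')
rec = parse_access_line(line)
print(rec["status"], rec["path"], rec["agent"])
```

Real-world variants (custom `log_format` directives, missing referer/UA fields, IPv6 addresses) are exactly where parsers get tripped up, hence the request for sample lines.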
Right now qlog is a Python CLI, so the cleanest “npm” story is probably a small wrapper package that installs qlog (pipx/uv/pip) and shells out to it, so Node projects can do `npx qlog ...` / `import { search } from 'qlog'` without reimplementing the indexer.
A native JS/TS port is possible, but I wanted to keep v0.x focused on correctness + format parsing + index compatibility first.
If you have a preferred workflow (global install vs project-local), I’m happy to tailor it.