Just curious, what is the most challenging thing in your opinion when building such log viewer?
For me, the most challenging parts are still ahead - live tailing and a plugin system to support different storage backends beyond just ClickHouse. Those will be interesting problems to solve! What was the biggest challenge for you?
You can think of it as just one part of a logging platform, where a full platform might consist of multiple components like a UI, ingestion engine, logging agent, and storage. In this setup, Telescope is only the UI.
From my perspective, a ClickHouse-based setup can be cheaper and possibly faster under certain conditions – here’s a comparison by ClickHouse Inc.: https://clickhouse.com/blog/clickhouse_vs_elasticsearch_the_...
My motto is "Know your data". I’m not a big fan of schemaless setups - I believe in designing a proper database schema rather than just pushing data into a black hole and hoping the system will handle it.
At the moment, I have no plans to support arbitrary data visualization in Telescope, as I believe there are better BI-like tools for that scenario.
Also, what service did you use to make the video, if you don't mind my asking?
I haven't tested the new JSON format in ClickHouse yet, but even if something doesn't work at the moment, fixing it should be trivial.
As for the video service, it wasn’t actually a service but rather a set of local tools:
- Video capture/screenshots - macOS default tools
- Screenshot editing - GIMP
- Voice generation - https://elevenlabs.io/app/speech-synthesis/text-to-speech
- Montage - DaVinci Resolve 19
How do I get my logs (e.g. local text files from disk like nginx logs, or files that need transformation like systemd journal logs) into ClickHouse in a way that's useful for Telescope?
What kind of indices do I have to configure so that queries are fast? Ideally with some examples.
How can I make sure that full-text substring search queries (e.g. "unexpected error 123") are fast? When I filter with a regex, is that still fast / does it still use indices?
From the docs it isn't quite clear to me how to configure the system so that I can just put a couple TB of logs into it and have queries be fast.
Thanks!
I will consider providing a how-to guide on setting up log storage in ClickHouse, but I’m afraid I won’t be able to cover all possible scenarios. This is a highly specific topic that depends on the infrastructure and needs of each organization.
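As a rough starting point, though, here is a minimal sketch of the kind of table you could create for nginx-style access logs. All table, column, and index names are illustrative (not from the Telescope docs), and the index parameters are just reasonable defaults to tune for your data:

```sql
-- Hypothetical schema for web access logs in ClickHouse.
CREATE TABLE logs.nginx_access
(
    timestamp DateTime64(3),
    level     LowCardinality(String),
    host      LowCardinality(String),
    message   String,
    -- ngram bloom filter skip index: lets ClickHouse skip granules for
    -- substring searches like message LIKE '%unexpected error 123%'
    INDEX message_ngram message TYPE ngrambf_v1(4, 1024, 1, 0) GRANULARITY 4
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY (host, timestamp)
TTL toDateTime(timestamp) + INTERVAL 30 DAY;
```

The ORDER BY key is what makes time-range queries per host fast; the ngram bloom filter only helps LIKE-style substring filters prune data parts, while arbitrary regex filters generally still scan the rows that survive the primary key and skip indexes. Test on your own data before committing to index parameters.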
If you’re looking for an all-in-one solution that can both collect and visualize logs, you might want to check out https://www.highlight.io or https://signoz.io or other similar projects.
By the way, I’m not trying to create a "Grafana Loki killer" or a "killer" of any other tool. This is just an open source project - I simply want to build a great log viewer without worrying about how to attract users from Grafana Loki, Elastic, or any other tool/product.
Regarding the name, "Telescope" is also the name of a Neovim fuzzy finder[0] that dominates that ecosystem; other results with the same name show up when searching "telescope github".