We are launching OpenObserve, an open-source Elasticsearch/Splunk/Datadog alternative written in Rust and Vue that is super easy to get started with and has 140x lower storage cost compared to Elasticsearch. It offers logs, metrics, traces, dashboards, alerts, and functions (run AWS Lambda-like functions during ingestion and query to enrich, redact, transform, normalize, and whatever else you want to do; think redacting email IDs from logs, adding geolocation based on IP address, etc.). You can do all of this from the UI, with no messing with configuration files.
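To make the functions idea concrete, here is a rough Python sketch of the kind of transform such a function performs; this is illustrative only (the field names and the hook shape are made up), not OpenObserve's actual function syntax:

```python
import re

# Matches most email addresses appearing in free-form log text.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_emails(record: dict) -> dict:
    """Hypothetical ingestion-time hook: scrub email IDs from every
    string field of a log record before it is stored."""
    for key, value in record.items():
        if isinstance(value, str):
            record[key] = EMAIL_RE.sub("[REDACTED]", value)
    return record

log = {"level": "info", "message": "login by alice@example.com succeeded"}
print(redact_emails(log)["message"])  # login by [REDACTED] succeeded
```

The same shape covers the other examples mentioned above (geolocation enrichment, normalization): a small function applied per record at ingestion or query time.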
OpenObserve can use local disk for storage in single-node mode, or S3/GCS/MinIO/Azure Blob (or any S3-compatible store) in HA mode.
We found that setting up observability often involved setting up four different tools (Grafana for dashboarding, Elasticsearch/Loki/etc. for logs, Jaeger for tracing, Thanos/Cortex/etc. for metrics), and it's not simple to wire these together.
Here is a blog on why we built OpenObserve - https://openobserve.ai/blog/launching-openobserve.
We are in early days and would love to get feedback and suggestions.
As someone running a homelab who hadn't set up logging yet, this was a great find. I didn't have to learn and combine 3+ log technologies; it's a single all-in-one monitoring server with a web UI, dashboards, log filtering/search, etc. RAM usage of the Docker container was under 100MB.
Then they started sending me emails from their personal gmail accounts…
https://www.linkedin.com/posts/adrianmacneil_sdrs-are-the-fr...
We need something new and better.
We made sure to build a solution that is easy to set up and maintain: a single binary/container for a single-node setup, and one stack to set up/upgrade/maintain for an HA setup.
We also made sure that it is easy to use and learn. We allow standard SQL to be used for querying logs, metrics, and traces, so there is nothing extra to learn. Metrics querying is supported using PromQL too. Drag and drop for creating panels and dashboards is there as well :-).
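For example, a log search can be an ordinary SQL query; the stream and field names below are hypothetical:

```sql
-- Recent error logs from a hypothetical "k8s" stream
SELECT _timestamp, kubernetes_pod_name, log
FROM k8s
WHERE log LIKE '%timeout%'
ORDER BY _timestamp DESC
LIMIT 50;
```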
Dashboarding is supported within the same stack, so there is no need to set up Grafana or anything else separately.
OpenObserve also offers functions, which the Grafana stack does not. Give it a shot; you are going to love it.
Data is stored in the open Parquet format, allowing it to be used outside of OpenObserve if so desired.
Hope this helps.
Having a simple language/YAML to define those panels, plus an easy way to preview them (e.g. paste in your file to preview the page, or paste in a panel source to try it), in addition to editing via the GUI and being able to copy changes back to revision control, would be great.
In Elasticsearch, tuning these indices is one of the bigger operational burdens. Loki sidesteps this by asking users not to put data in the indexed fields and instead brute-forcing through it all.
Does OpenObserve have any other approach to this?
OpenObserve looks very promising at first sight! It prioritizes the same properties as VictoriaMetrics products:
- Easy setup and operation
- Low resource usage (CPU, RAM, storage)
- Fast performance
It would be interesting to compare the operation complexity, the performance and resource usage of OpenObserve for metrics vs VictoriaMetrics.
P.S. VictoriaMetrics is going to present its own log storage and log analytics solution, VictoriaLogs, at the upcoming Monitorama conference in Portland [1]. It will provide a much smoother experience for ELK and Grafana Loki users, while requiring less RAM, disk space, and CPU [2].
[1] https://monitorama.com/2023/pdx.html
[2] https://youtu.be/Gu96Fj2l7ls
It's easy to overlook these aspects, but they will make all the difference to the team implementing the solution, so if you don't want to gamble, their products are a solid choice.
PS: I have no personal relation or connection with them. I am a user of VictoriaMetrics. Just want to point out things that matter but get ignored when choosing your software stack.
Will you compare it with qryn? Self-hosted Sentry?
https://qryn.metrico.in/
https://develop.sentry.dev/self-hosted/
Additionally, VictoriaLogs will provide a more user-friendly query language, LogsQL, which is easier to use than SQL in ClickHouse or the query language provided by Grafana Loki.
At the bottom of your GitHub project home page, you say the best way to join the project is to join a WeChat group (in Chinese text). Likely only a small minority of us outside China use WeChat, so that may be a stumbling block if you are trying to encourage people outside Asia to contribute to the project.
Per https://openobserve.ai/about , the address at the bottom says San Francisco, California, but the same page says the company is "headquartered in Bangalore, India". So where are you based?
Also curious what the relationship is between OpenObserve the open-source project and Zinc Labs, which is referenced in the website (but not in the GitHub project).
> headquartered in Bangalore, India

This is embarrassing; just fixed it. We are a Delaware-based company, headquartered in San Francisco. Pure copy-paste error.
Zinc Labs (Legal name) is the company behind the product OpenObserve.
Copy and paste from where? Did you even build this product or just hire cheap labor out of India to build it?
Is my math right? Or do you use something different for compression?
2 Orders of magnitude of storage saving is pretty impressive.
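One intuition for why the savings can be that large: structured logs are extremely repetitive, and columnar formats like Parquet plus compression exploit that redundancy. A small illustration with plain gzip (this is not OpenObserve's actual storage path, just a demonstration of the redundancy in typical log data):

```python
import gzip
import json

# 10,000 near-identical JSON log lines, differing only in a counter.
lines = b"".join(
    json.dumps({"level": "info", "msg": "request ok", "status": 200, "seq": i}).encode() + b"\n"
    for i in range(10_000)
)
compressed = gzip.compress(lines)
print(f"raw={len(lines)}B gzipped={len(compressed)}B "
      f"ratio={len(lines) / len(compressed):.0f}x")
```

Columnar storage does even better than this, since values of the same field are stored together and compress against each other.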
> Currently OpenObserve supports streams of type logs; in the future, stream types metrics & traces will be supported.
- are metrics and traces queryable yet? I admit I feel a little misled; if only logs are supported for now, that should be made clearer.
- do (or will) metric and trace queries use a similar SQL syntax as log search?
Finally... is there a demo? I would love to be able to try out the product without actually putting in the effort to set it up.
[0]: https://openobserve.ai/docs/user-guide/streams/
Metrics support is still in its infancy, though.
For me, though, setting up a system is not the primary pain point today. FWIW, signing up for a cloud service is not hard.
The problem starts at the ingestion point. I am writing my apps according to the 12 factors and running them in Docker containers. What I want is a companion agent that will collect logs and metrics from these apps and containers and forward them to the system. Datadog and Grafana have that; does OpenObserve?
Also, it's interesting that in your quickstart tutorial you have a step of unzipping a log file before sending it to the system. I would suggest supporting ingestion in (g)zipped format, as that is a natural format for keeping logs.
We did not want to reinvent the wheel, so we recommend that you use one of these.
The quickstart tutorial does not reflect what you will do in a real-life scenario.
You will want to learn how to use log forwarders like Fluent Bit and Vector; these are the ones you will generally use in real life. These log forwarders read the log files and send them to OpenObserve. Here is a blog on how to use Fluent Bit to capture Kubernetes logs and send them to OpenObserve - https://openobserve.ai/blog/how-to-send-kubernetes-logs-usin...
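For reference, a minimal Fluent Bit output section for shipping to an OpenObserve instance might look something like the sketch below; the host, port, credentials, and ingestion path are placeholders, so check the OpenObserve docs for the exact endpoint:

```ini
[OUTPUT]
    Name          http
    Match         *
    Host          openobserve.example.com
    Port          5080
    URI           /api/default/default/_json
    Format        json
    Json_date_key _timestamp
    HTTP_User     user@example.com
    HTTP_Passwd   <your-token>
    tls           Off
```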
We will add more tutorials and examples soon.
Hope this helps.