"Sell the sizzle, not the steak" is a real thing for a reason.
Down to the style of the webpages and the details of the pricing?
Here's the pricing calculator, for example:
https://share.cleanshot.com/JddPvNj3
https://share.cleanshot.com/9zqx5ypp
As a happy turbopuffer user, not sure why I'd want to use Chroma.
I understand your point. Chroma Cloud has been quietly live in production for a year, and we have been discussing this architecture publicly for almost two years now. You can see this talk I gave at the CMU databases group - https://youtu.be/E4ot5d79jdA?si=i64ouoyFMevEgm3U. Some details have changed since then. But the core ideas remain the same.
The business model similarities mostly fall out of our architecture being similar, which mostly falls out of our constraints with respect to the workload being the same. There are only so many ways you can deliver a usage based billing model that is fair, understandable, and predictable. We aimed for a billing model that was all three, and this is what we arrived at.
On aesthetics, that's always been our look. I think a lot of developer tools are leaning into the nostalgia of the early PC boom during this AI boom (fun fact: all the icons on our homepage are drawn by hand!).
On differences, we support optimized regexes rather than full scans, which yields better performance. We also support trigram-based full-text search, which is often useful for scenarios that need substring matches. We also support forking, which allows for cheap copy-on-write clones of your data, great for dataset versioning and tracking git repos with minimal cost. We've built support for generic sparse vectors (in beta), which enables techniques like SPLADE to be used, rather than just BM25. You can also run Chroma locally, enabling low-latency local workflows. This is great for AI apps where you need to iterate on a dataset until it passes evals, and then push it up to the cloud.
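To make the trigram point concrete, here's a toy sketch of how trigram indexing supports substring search: index every 3-character window of each document, intersect the posting lists for the query's trigrams to get candidates, then verify. This is an illustrative sketch only, not Chroma's implementation; the document set and function names are made up.

```python
def trigrams(s: str) -> set[str]:
    """All 3-character windows of s."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

# Hypothetical corpus for illustration.
docs = {1: "chroma cloud", 2: "turbopuffer", 3: "vector search"}

# Inverted index: trigram -> set of doc ids containing it.
index: dict[str, set[int]] = {}
for doc_id, text in docs.items():
    for g in trigrams(text):
        index.setdefault(g, set()).add(doc_id)

def substring_search(query: str) -> set[int]:
    grams = trigrams(query)
    if not grams:
        # Queries shorter than 3 chars fall back to a scan.
        return {d for d, t in docs.items() if query in t}
    # A doc containing the query must contain every trigram of it.
    candidates = set.intersection(*(index.get(g, set()) for g in grams))
    # Verify: the trigram filter can admit false positives.
    return {d for d in candidates if query in docs[d]}

print(substring_search("puffer"))  # → {2}
```

The win is that the index narrows the candidate set before any per-document scan, so most documents are never touched.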
Chroma is Apache 2.0 open source - https://github.com/chroma-core/chroma and has a massive developer community behind it. Customers can run embedded, single-node and distributed Chroma themselves. We've suffered from depending on closed-source database startups and wanted to give developers using Chroma confidence in the longevity of their choice.
Lastly, we are building with AI workloads front and center, and this changes what you build, how you build it, and who you build for in the long term. We think search is changing, and that the primary consumer of the search API for AI applications is shifting from human engineers to language models. We are building some exciting things in this direction, more on that soon.
This binary is an utter waste of time.
Instead focus on the gradient of intelligence - the set of cognitive skills any given system has and to what degree it has them.
This engineering approach is more likely to lead to practical utility and progress.
The view of intelligence as binary is incredibly corrosive to this field.
this is very McLuhan/systemantics of you! all abstractions are leaky, but some abstractions let you look at the leaks.
TIL about setsums - one wonders: if `fn setsum([String]) -> Digest` works, then "nested setsums" must also work at very large scales.
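Nesting does fall out naturally if the combine step is associative and commutative. A minimal sketch under that assumption (per-item hashes summed modulo a fixed modulus; names and the choice of SHA-256 are mine, not from any particular setsum implementation):

```python
import hashlib

M = 2 ** 256  # digest space; addition mod M is commutative and associative

def item_digest(s: str) -> int:
    """Hash one element to an integer digest."""
    return int.from_bytes(hashlib.sha256(s.encode()).digest(), "big")

def setsum(items) -> int:
    """Order-independent digest of a collection."""
    return sum(item_digest(s) for s in items) % M

def combine(*digests: int) -> int:
    """Merge already-computed setsums of disjoint chunks."""
    return sum(digests) % M

data = ["a", "b", "c", "d"]
whole = setsum(data)
nested = combine(setsum(data[:2]), setsum(data[2:]))
assert whole == nested  # chunked ("nested") setsums agree with the full one
```

Because addition mod M is associative, you can setsum arbitrarily sized chunks in parallel and combine the chunk digests in any grouping, which is exactly what makes the scheme attractive at very large scales.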
one thing i missed from this post, which otherwise would score perfect marks for a technology introduction, is benchmarks against the systems you compare yourselves to, warpstream and friends.