Readit News
bhavnicksm commented on Show HN: Catsu: A unified Python client for embedding APIs   catsu.dev... · Posted by u/bhavnicksm
zerodayz · 3 days ago
this is huge, been wrangling with embedding libraries lately so will def try this out
bhavnicksm · 3 days ago
thanks! this is still pretty early, please let us know if you face any issues with the library, database or anything else :)
bhavnicksm commented on Show HN: Catsu: A unified Python client for embedding APIs   catsu.dev... · Posted by u/bhavnicksm
lennertjansen · 3 days ago
nice, this is an annoying problem. does it also provide fallback to switch providers when one isn't available?
bhavnicksm · 3 days ago
it doesn't right now, but a fallback feature is planned for a future release. mostly because there's no simple way to handle the classic fallbacks like AWS, GCP and Azure, and we wanted to spend some time thinking about their DX.
bhavnicksm commented on Show HN: Pbnj – A minimal, self-hosted pastebin you can deploy in 60 seconds   pbnj.sh/... · Posted by u/bhavnicksm
davidcollantes · 15 days ago
Is it possible without Cloudflare? A HOWTO would be great.
bhavnicksm · 15 days ago
Hey!

Right now, some things are somewhat hard-coded for Cloudflare. If you're willing to dig into the code a little, you can deploy this without Cloudflare.

In future releases, I'll make it possible to host it on VPCs and release a Dockerfile along with it, which should help a little.

Thanks for checking the project out!

bhavnicksm commented on Show HN: Pbnj – A minimal, self-hosted pastebin you can deploy in 60 seconds   pbnj.sh/... · Posted by u/bhavnicksm
Tt6000 · 15 days ago
Hey there, first of all congratulations, it's really nice and minimal and I love it!

But Cloudflare is not self hosting!

bhavnicksm · 15 days ago
Yes, that's quite fair re: Cloudflare!

I couldn't find the right words to describe this in comparison to something like GitHub Gist. I suppose "own-your-data" fits, since the D1 database it generates is completely yours.

Happy to change the branding to be more reflective of this!

bhavnicksm commented on Show HN: Chonkie – A Fast, Lightweight Text Chunking Library for RAG   github.com/bhavnicksm/cho... · Posted by u/bhavnicksm
bhavnicksm · a year ago
Thank you so much for giving Chonkie a chance! Just to note, Chonkie is still in beta (v0.1.2 at the moment), with a bunch of things planned for it. It's an initial working version that seemed promising enough to present.

I hope that you will stick with Chonkie for the journey of making the 'perfect' chunking library!

Thanks again!

bhavnicksm commented on Show HN: Chonkie – A Fast, Lightweight Text Chunking Library for RAG   github.com/bhavnicksm/cho... · Posted by u/bhavnicksm
bravura · a year ago
One thing I've been looking for, and which was a bit tricky to implement myself in a way that's very fast, is this:

I have a particular max token length in mind, and I have a tokenizer like tiktoken. I have a string and I want to quickly find the maximum length truncation of the string that is <= target max token length.

Does chonkie handle this?

bhavnicksm · a year ago
I don't fully understand what you mean by "maximum length truncation of the string" -- but if you mean splitting the text into 'chunks' whose token counts are less than a pre-specified max_token length, then yes!

Is that what you meant?
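Not speaking for Chonkie's API, but the truncation bravura describes can be sketched generically as encode, slice the token list, decode. The `encode`/`decode` callables below are a hypothetical stand-in for a real tokenizer such as tiktoken's `Encoding.encode`/`Encoding.decode`; note that with BPE tokenizers, re-encoding the decoded prefix can occasionally give a slightly different token count, so an exact answer may need an extra check or a binary search.

```python
def truncate_to_token_limit(text, encode, decode, max_tokens):
    """Return a prefix of `text` whose token count is <= max_tokens."""
    tokens = encode(text)
    if len(tokens) <= max_tokens:
        return text  # already within budget
    return decode(tokens[:max_tokens])

# Toy word-level tokenizer standing in for a real one (hypothetical):
encode = lambda s: s.split()
decode = lambda toks: " ".join(toks)
print(truncate_to_token_limit("one two three four five", encode, decode, 3))
# -> "one two three"
```

With tiktoken you would pass `enc.encode` and `enc.decode` from `tiktoken.encoding_for_model(...)` instead of the toy lambdas.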

bhavnicksm commented on Show HN: Chonkie – A Fast, Lightweight Text Chunking Library for RAG   github.com/bhavnicksm/cho... · Posted by u/bhavnicksm
rkharsan64 · a year ago
There are only 3 competitors in that particular benchmark, and the speedup compared to the 2nd is only 1.06x.

Edit: Also, from the same table, it seems that only this library was run after warming up, while the others were not. https://github.com/bhavnicksm/chonkie/blob/main/benchmarks/R...

bhavnicksm · a year ago
TokenChunking is limited more by the tokenizer than by the chunking algorithm. Tiktoken tokenizers seem to do better with warm-up, which Chonkie defaults to -- and which is also what the 2nd one uses.

Algorithmically, there's not much difference in TokenChunking between Chonkie, LangChain, or any other TokenChunking implementation you might use. (Except LlamaIndex; I don't know what mess they made to end up with a 33x slower algo.)

If you only want TokenChunking (which I don't fully recommend), then rather than Chonkie or LangChain, just write your own for production :) At the very least, don't install 80 MiB packages just for TokenChunking; Chonkie is 4x smaller than they are.

That's just my honest response... And these benchmarks are just the beginning; future optimizations to SemanticChunking should push the speed-up over the current 2nd place (2.5x right now) even higher.
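As a rough illustration of the "write your own TokenChunking" suggestion above, here is a minimal sketch. The `encode`/`decode` callables and the `chunk_size`/`overlap` parameter names are illustrative assumptions, not Chonkie's or LangChain's actual API; a toy word-level tokenizer stands in for a real BPE tokenizer.

```python
def token_chunks(text, encode, decode, chunk_size, overlap=0):
    """Split text into chunks of at most chunk_size tokens,
    with `overlap` tokens shared between neighbouring chunks."""
    assert 0 <= overlap < chunk_size
    tokens = encode(text)
    step = chunk_size - overlap
    return [decode(tokens[i:i + chunk_size])
            for i in range(0, len(tokens), step)]

# Toy word-level tokenizer (hypothetical stand-in for a real tokenizer):
encode = lambda s: s.split()
decode = lambda toks: " ".join(toks)
print(token_chunks("a b c d e", encode, decode, chunk_size=2, overlap=1))
# -> ['a b', 'b c', 'c d', 'd e', 'e']
```

Swapping in a real tokenizer's encode/decode is the only change needed for production-grade token counts.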

bhavnicksm commented on Show HN: Chonkie – A Fast, Lightweight Text Chunking Library for RAG   github.com/bhavnicksm/cho... · Posted by u/bhavnicksm
petesergeant · a year ago
> What other chunking strategies would be useful for RAG applications?

I’m using o1-preview for chunking, creating summary subdocuments.

bhavnicksm · a year ago
That's pretty cool! I believe a recent research paper called LumberChunker evaluated that approach and found it to work pretty well too.

Thanks for responding, I'll try to make it easier to use something like that in Chonkie in the future!

u/bhavnicksm

Karma: 113 · Cake day: May 2, 2024
About
(YC X25)