tadkar commented on Supercharge vector search with ColBERT rerank in PostgreSQL   blog.vectorchord.ai/super... · Posted by u/gaocegege
upghost · a year ago
If this is true, my understanding of vanilla token vector embeddings is wrong. My understanding was that the vector embedding was the geometric coordinates of the token in the latent space with respect to the prior distribution. So adding another dimension to make it a "multivector" doesn't (in my mind) seem like it would add much. What am I missing?
tadkar · a year ago
I think the important thing is that the first approach to converting complete sentences to an embedding was to average the embeddings of all the tokens in the sentence. What ColBERT does is store the embeddings of all the tokens, then use dot products to identify the tokens most relevant to the query. Another comment in this thread says the same thing in a different way. Feels funny to post a Stack Exchange reference, but this is a great answer!

[1] https://stackoverflow.com/questions/57960995/how-are-the-tok...
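A minimal sketch of that difference, using made-up 4-dimensional embeddings (all numbers here are illustrative, not output from any real model):

```python
import numpy as np

# Hypothetical per-token embeddings for "fine weather today" (toy values)
token_embeddings = np.array([
    [0.9, 0.1, 0.0, 0.2],   # "fine"
    [0.1, 0.8, 0.1, 0.0],   # "weather"
    [0.0, 0.2, 0.9, 0.1],   # "today"
])

# Classic approach: one sentence vector via mean pooling
sentence_embedding = token_embeddings.mean(axis=0)

# ColBERT-style approach: keep every token vector for later scoring
multivector = token_embeddings  # shape (num_tokens, dim)

print(sentence_embedding.shape)  # (4,)
print(multivector.shape)         # (3, 4)
```

The pooled vector blends all three tokens into one point; the multivector keeps each token's position so a query can later match against individual tokens.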

tadkar commented on Supercharge vector search with ColBERT rerank in PostgreSQL   blog.vectorchord.ai/super... · Posted by u/gaocegege
simonw · a year ago
> However, generating sentence embeddings through pooling token embeddings can potentially sacrifice fine-grained details present at the token level. ColBERT overcomes this by representing text as token-level multi-vectors rather than a single, aggregated vector. This approach, leveraging contextual late interaction at the token level, allows ColBERT to retain more nuanced information and improve search accuracy compared to methods relying solely on sentence embeddings.

I don't know what it is about ColBERT that affords such opaque descriptions, but this is sadly common. I find the above explanation incredibly difficult to parse.

I have my own explanation of ColBERT here but I'm not particularly happy with that either: https://til.simonwillison.net/llms/colbert-ragatouille

If anyone wants to try explaining ColBERT without using jargon like "token-level multi-vectors" or "contextual late interaction" I'd love to see a clear description of it!

tadkar · a year ago
Here’s my understanding. It is intimidating to write a response to you, because you have an exceptionally clear writing style. I hope the more knowledgeable HN crowd will correct any errors in fact or presentation below.

Old school word embedding models (like Word2Vec) come up with embeddings by using masked word predictions. You can embed a whole sentence by taking the average of all the word embeddings in the sentence.

There are many scenarios where this average fails to distinguish between multiple meanings of a word. For example “fine weather” and “fine hair” both contain “fine” but mean different things.

Transformers are great at producing better embeddings by considering context, and using the words in the rest of the sentence to produce a better representation of each word. BERT is a great model to do this.

The problem is that if you want to use BERT by itself to compute relevance, you need a lot of compute per query, because you have to concatenate the query and the document to produce one long sequence that BERT then embeds. See Figure 2c in the ColBERT paper [1].

What ColBERT does is use the fact that BERT can draw on context from the entire sentence, via its attention heads, to produce a more nuanced representation of each token in its input. It does this once for all documents in its index. So, for example (assuming “fine” were a single token), it would embed the “fine” in “we’re having fine weather today” to a different vector than the “fine” in “Sarah has fine blond hair”. In ColBERT the output embeddings are also usually much smaller (around 128 dimensions) than the 768 or 1024 hidden size of the underlying BERT model.

Now, if you have a query, you can do the same and produce token level embeddings for all the tokens in the query.

Once you have these two sets of contextualised embeddings, you can check for the presence of the particular meaning of a word in the document using the dot product. For example, the query “which children have fine hair” matches the document “Sarah has fine blond hair” because the token “fine” is used in the same sense in both the query and the document, and should be picked up by the MaxSim operation.

[1] https://arxiv.org/pdf/2004.12832
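A rough NumPy sketch of that MaxSim scoring step (toy random vectors, not real BERT output): for each query token, take the maximum dot product against all document tokens, then sum those maxima to score the document.

```python
import numpy as np

def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction: sum over query tokens of the
    best (maximum) dot product with any document token."""
    sims = query_vecs @ doc_vecs.T   # (n_query, n_doc) similarity matrix
    return sims.max(axis=1).sum()    # best doc match per query token, summed

rng = np.random.default_rng(0)
query = rng.normal(size=(4, 8))    # 4 query tokens, 8-dim embeddings
doc_a = rng.normal(size=(10, 8))   # 10 document tokens
doc_b = rng.normal(size=(7, 8))

# Higher score = better match; rank documents by MaxSim
print(maxsim_score(query, doc_a), maxsim_score(query, doc_b))
```

Because the document-side vectors are computed once at index time, only this cheap matrix multiply happens per query, instead of a full BERT forward pass per query-document pair.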

tadkar commented on Advantages of incompetent management   yosefk.com/blog/advantage... · Posted by u/zdw
yosefk · 2 years ago
Author here - I'm simultaneously quite pessimistic about, and very interested in heterodox organizational structures and especially real-world stories about them. I feel that failure or regression to the mean are quite likely and scaling or replicating success stories is very hard, but I am almost certain things will evolve beyond the current status quo eventually, just really not sure when and how.

So thank you very much for sharing, and recommendations for further reading will be much appreciated!

tadkar · 2 years ago
I have a theory that organizations that grow fast and scale well all have this “cellular model” at their core.

Investment bank trading desks in the pre-2008 era, partnerships at the big strategy consulting firms, and even “multi-strategy hedge funds” today are all really collections of tightly incentive-aligned businesses. They share the Creo quality of minting lots of millionaires, with people looking back on their time there as one of great freedom and achievement.

In all these places, employees are paid according to the revenue they generate, with seemingly no ceiling on what they can take home. It is true that any one cell doesn’t grow beyond a small number of people, but all the organizations mentioned above scale by having many units tackling small pieces of vast markets.

The main lesson I took away from reading “Barbarians at the Gate” is that big companies suffer hugely from the principal-agent problem, where management is mostly out to enrich themselves at the expense of shareholders and, sometimes, employees. This looting is only possible at a company established by a founder with a deep vision and passion for the product, who set up systems and a culture that generate enough cash for the professional management to leech off.

What I have not read yet is a systematic study of these “cellular organizations” and what the common features are that make them successful. My guess is that the key is that each “unit” or “cell” has measurable economics that makes it possible to share the economic value over a sustained period of time. A bit like why sales people get paid a lot.

tadkar commented on Ask HN: How to Learn Performance Engineering?    · Posted by u/overrun11
tadkar · 2 years ago
This is a great blog to get you started: https://easyperf.net/

As with all things, practice is an essential part of improving!

Then there's learning from some real achievements: the fast inverse square root, or the 55 GB/s FizzBuzz answer: https://codegolf.stackexchange.com/questions/215216/high-thr...
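For reference, here is the fast inverse square root trick (made famous by Quake III) transcribed into Python with struct bit-casts. This is a study sketch only; in real Python you'd just use math.sqrt:

```python
import struct

def fast_inverse_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) using the famous 0x5f3759df bit hack."""
    # Reinterpret the float's bits as a 32-bit integer
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The magic constant gives a good first guess for 1/sqrt(x)
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson iteration refines the estimate
    return y * (1.5 - 0.5 * x * y * y)

print(fast_inverse_sqrt(4.0))  # approximately 0.5
```

The point of the original trick was avoiding a division and a square root on 1990s hardware; after one Newton step the result is within about 0.2% of the true value.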

tadkar commented on Implementing Interactive Languages   scattered-thoughts.net/wr... · Posted by u/luu
eatonphil · 3 years ago
While he mentions the slowness of LLVM, it would have been cool to see Jamie's thoughts on tinycc and qbe as well. I've been looking into the fastest options for generating and executing machine code (without me doing it all myself; generating and compiling C feels like a happy medium).
tadkar · 3 years ago
I came here to say exactly the same thing. There are also a couple of other options: the MIR project from Red Hat [1], libjit [2], GNU lightning [3] and DynASM [4].

[1] https://github.com/vnmakarov/mir
[2] https://www.gnu.org/software/libjit/
[3] https://www.gnu.org/software/lightning/manual/lightning.html
[4] https://corsix.github.io/dynasm-doc/tutorial.html

But in general it seems very hard to beat the bang for buck of generating C and compiling that, even with something as simple as tcc.

tadkar commented on Invertible Bloom Lookup Tables with Less Randomness and Memory   arxiv.org/abs/2306.07583... · Posted by u/keepamovin
gabesullice · 3 years ago
I don't know the answer, so I'm purely speculating for the fun of it.

I guess that hash function outputs aren't perfectly uniformly distributed. E.g. if a toy hash function (a) produces a 2-bit output for a gajillion random inputs, you wouldn't get exactly a quarter of the values in each bucket. Maybe you'd get 30% in 00, 20% in 01, and 25% in each of buckets 10 and 11. Salting the inputs wouldn't help with that; it would only make similar inputs less likely to collide, but collisions would still be more likely in the worst case.

By combining it with a different hash function (b) that has different "lumps", I suppose the lumps would even out so that you'd approach a probability of .25 in each bucket.

        00   01   10   11
  a    .30  .20  .25  .25
  b    .22  .22  .25  .31  
  a+b  .26  .21  .25  .28
Therefore, if you're going to spend time hashing something more than once, you might as well use different hash functions for each cycle.

tadkar · 3 years ago
I suspect that for most Bloom filters, the most commonly used hash functions are “good enough”. There’s also some literature suggesting that using just 2 hash functions and recombining the results is plenty; see Kirsch and Mitzenmacher [1] and [2].

[1] https://www.eecs.harvard.edu/%7Emichaelm/postscripts/tr-02-0...
[2] https://stackoverflow.com/questions/70963247/bloom-filters-w...
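The Kirsch-Mitzenmacher trick can be sketched like this: derive all k Bloom filter index functions from just two base hashes, as g_i(x) = h1(x) + i * h2(x) mod m, instead of computing k independent hashes. (The choice of SHA-256 halves as the two base hashes below is illustrative, not from the paper.)

```python
import hashlib

def bloom_indices(item: str, k: int, m: int):
    """Derive k Bloom filter bit positions from two base hashes
    (Kirsch-Mitzenmacher double hashing: g_i = h1 + i*h2 mod m)."""
    digest = hashlib.sha256(item.encode()).digest()
    h1 = int.from_bytes(digest[:8], "big")
    h2 = int.from_bytes(digest[8:16], "big") | 1  # keep h2 odd so strides vary
    return [(h1 + i * h2) % m for i in range(k)]

print(bloom_indices("hello", k=4, m=1024))
```

The paper's result is that this scheme gives asymptotically the same false-positive rate as k independent hashes, while hashing each item only once.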
