Late Chunking (https://arxiv.org/pdf/2409.04701) #32618

Open
oskrim opened this issue Oct 20, 2024 · 3 comments


oskrim commented Oct 20, 2024

Is your feature request related to a problem? Please describe.
Practitioners often split text documents into smaller chunks and embed them separately. However, chunk embeddings created this way can lose contextual information from surrounding chunks, resulting in suboptimal representations.

Describe the solution you'd like
Most likely a new embedder, or an option on the Huggingface embedder, would need to be implemented to support this.
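
For reference, the core idea from the paper can be sketched in a few lines of Python. This is a minimal sketch, not a proposed Vespa API: the model name is just an example of a long-context embedding model, and late_chunk is an illustrative helper.

import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "jinaai/jina-embeddings-v2-base-en"  # example long-context model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, trust_remote_code=True)

def late_chunk(text: str, chunk_token_count: int = 256) -> list[torch.Tensor]:
    # One forward pass over the whole document, so every token embedding
    # carries context from all surrounding chunks.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=8192)
    with torch.no_grad():
        token_embeddings = model(**inputs).last_hidden_state[0]  # (num_tokens, dim)
    # Chunking happens *after* encoding: mean-pool fixed-size token windows.
    return [window.mean(dim=0) for window in token_embeddings.split(chunk_token_count)]

The difference from naive chunking is only the order of operations: encode first, then pool per chunk, instead of splitting first and encoding each chunk in isolation.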

oskrim commented Oct 20, 2024

I could try to implement this if it sounds interesting to you.

bratseth (Member) commented

That would be great!

hmusum added this to the soon milestone Oct 23, 2024
jobergum commented

One challenge is modeling the chunking strategy and determining whether mapping from a chunk embedding back to a span in the original text should be possible. The paper uses different chunk-splitting methods, but even with a fixed number of tokens per chunk (e.g., 256), the user needs to implement the mapping between a chunk and its span in the longer text if we represent late chunking in the schema like other embedders (a sketch of that mapping follows the schema below):

schema doc {
  document doc {
    field longtext type string { .. }
  }
  field chunk_embeddings type tensor<float>(chunk{}, v[1024]) {
    indexing: input longtext | embed late-chunker-id | attribute | index
  }
}
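
For illustration, that user-side mapping could look roughly like the Python sketch below. This is a hypothetical helper, assuming the embedder splits on fixed 256-token windows; chunk_spans and the bert-base-uncased tokenizer are illustrative stand-ins, not part of any Vespa API.

from transformers import AutoTokenizer

# Hypothetical helper: "bert-base-uncased" stands in for the tokenizer
# of whatever model the late-chunking embedder would actually use.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def chunk_spans(text: str, chunk_token_count: int = 256) -> list[tuple[int, int]]:
    # Fast tokenizers expose an offset mapping of (start_char, end_char)
    # per token, which lets fixed-size token chunks be translated back
    # into character spans of the original longtext field.
    offsets = tokenizer(text, return_offsets_mapping=True,
                        add_special_tokens=False)["offset_mapping"]
    return [(offsets[i][0], offsets[min(i + chunk_token_count, len(offsets)) - 1][1])
            for i in range(0, len(offsets), chunk_token_count)]

Chunk i in chunk_embeddings would then correspond to text[start:end] for the i-th span, so highlighting or snippet extraction stays possible on the user side.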

We have similar problems with the Colbert embedder, which implements a related concept, except that there is no pooling operation and each token becomes a vector.

A nice overview of the method from the paper:

[figure: late chunking overview from the paper]
