Is your feature request related to a problem? Please describe.
Practitioners often split text documents into smaller chunks and embed them separately. However, chunk embeddings created in this way can lose contextual information from surrounding chunks, resulting in sub-optimal representations.
Describe the solution you'd like
Most likely either a new embedder, or an option on the Huggingface embedder, would need to be implemented to support this.
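For context, the core idea of late chunking is to encode the whole document once so every token embedding sees the full context, and only then mean-pool each chunk's token range, instead of embedding chunks independently. A minimal sketch (not Vespa code; random vectors stand in for a real transformer's contextualized token embeddings):

```python
import numpy as np

def late_chunk(token_embeddings, chunk_bounds):
    """Mean-pool contextualized token embeddings over each chunk's token span.

    token_embeddings: (num_tokens, dim) array from a single forward pass over
    the *entire* document, so every token embedding carries full context.
    chunk_bounds: [(start, end), ...] token index ranges, end exclusive.
    Returns a (num_chunks, dim) array of chunk embeddings.
    """
    return np.stack([token_embeddings[s:e].mean(axis=0) for s, e in chunk_bounds])

# Toy example: 10 "tokens" of dimension 4, split into two chunks of 5.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(10, 4)).astype(np.float32)
chunks = late_chunk(tokens, [(0, 5), (5, 10)])
print(chunks.shape)  # (2, 4)
```

The only difference from naive chunking is where the pooling happens: after a single long-context forward pass rather than after per-chunk forward passes.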
One challenge is modeling the chunking strategy and deciding whether mapping from a chunk embedding back to a span in the original text should be possible. The paper uses different chunk-splitting methods, but even with a fixed number of tokens per chunk (e.g., 256), the user would need to implement the mapping between a chunk and its span in the longer text if we represent late chunking in the schema like other embedders:
schema doc {
    document doc {
        field longtext type string { ... }
    }
    field chunk_embeddings type tensor<float>(chunk{}, v[1024]) {
        indexing: input longtext | embed late-chunker-id | attribute | index
    }
}
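The chunk-to-span mapping a user would need could look like the sketch below, assuming fixed-size token windows and a tokenizer that exposes per-token character offsets (as HuggingFace tokenizers do with `return_offsets_mapping=True`); the toy whitespace tokenizer here is purely illustrative:

```python
def chunk_spans(offsets, chunk_size=256):
    """Map fixed-size token chunks back to character spans in the source text.

    offsets: [(char_start, char_end), ...] for each token, in document order.
    Returns [(char_start, char_end), ...], one span per chunk of
    `chunk_size` tokens (the last chunk may be shorter).
    """
    spans = []
    for i in range(0, len(offsets), chunk_size):
        window = offsets[i:i + chunk_size]
        spans.append((window[0][0], window[-1][1]))
    return spans

# Toy example with a whitespace "tokenizer" and chunk_size=2:
text = "late chunking keeps context"
offsets, pos = [], 0
for word in text.split():
    start = text.index(word, pos)
    offsets.append((start, start + len(word)))
    pos = start + len(word)
print(chunk_spans(offsets, chunk_size=2))  # [(0, 13), (14, 27)]
```

With a schema like the one above, chunk index i in chunk_embeddings would correspond to spans[i], but nothing in the schema itself records that mapping, which is the gap being pointed out.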
We have a similar problem with the ColBERT embedder, which is a related concept, but with no pooling operation: each token becomes a vector.