Releases · qdrant/fastembed
v0.4.1
v0.4.0
Changelog
Features 📢
- #355 - Add rerankers support by @celinehoang177 @joein @I8dNLo
- #358 - Add multi-GPU support by @joein @generall @hh-space-invader (usage sketch below)
- #364 - Add license info to model descriptions by @hh-space-invader
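A minimal usage sketch for the multi-GPU support, assuming it is exposed through `cuda`/`device_ids`/`lazy_load` keyword arguments plus the `parallel` option of `embed()`; the exact argument names are assumptions, not confirmed by these notes:

```python
from fastembed import TextEmbedding

# Assumed multi-GPU configuration: the cuda/device_ids/lazy_load keywords and
# the parallel= option may differ in your fastembed version.
model = TextEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    cuda=True,           # run ONNX inference on GPU
    device_ids=[0, 1],   # spread workers across two GPUs
    lazy_load=True,      # load the model inside each worker, not in the parent
)

documents = ["FastEmbed is a lightweight embedding library.", "It runs on ONNX Runtime."]
# parallel > 1 starts one worker per listed device
embeddings = list(model.embed(documents, batch_size=64, parallel=2))
print(len(embeddings), embeddings[0].shape)
```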
Models 🧠
- #355 - Rerankers: ms-marco-MiniLM-L-6-v2, ms-marco-MiniLM-L-12-v2, and bge-reranker-base by @celinehoang177
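A minimal usage sketch for the new rerankers, assuming they are exposed as a `TextCrossEncoder` with a `rerank(query, documents)` method that yields one relevance score per document (class name, import path, and model id are assumptions):

```python
from fastembed.rerank.cross_encoder import TextCrossEncoder  # assumed import path

query = "What is vector search?"
documents = [
    "Vector search finds nearest neighbours in an embedding space.",
    "BM25 is a classical lexical ranking function.",
]

# Assumed model id; check the supported-models listing of your version.
reranker = TextCrossEncoder(model_name="Xenova/ms-marco-MiniLM-L-6-v2")

# rerank() is assumed to return scores in the same order as the input documents.
scores = list(reranker.rerank(query, documents))
for doc, score in sorted(zip(documents, scores), key=lambda p: p[1], reverse=True):
    print(f"{score:.3f}  {doc}")
```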
Fixes 🐛
- #337 - Lowercase words in BM25 by @n0x29a
- #339 - Fix BM25 preprocessing by @I8dNLo
- #340 - Fix the main process hanging when child processes are unexpectedly killed by @hh-space-invader
Thanks to everyone who contributed to this release:
@celinehoang177 @I8dNLo @generall @hh-space-invader @n0x29a @joein
v0.3.5
v0.3.4
v0.3.1
What's Changed
Features
- Add support for jinaai/jina-embeddings-v2-base-de by @deichrenner in #270
- Add BM25 by @joein in #274
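BM25 plugs into the sparse embedding interface; a minimal sketch, assuming the `SparseTextEmbedding` class and a `Qdrant/bm25` model id (the model id is an assumption, list the supported models to confirm):

```python
from fastembed import SparseTextEmbedding

# "Qdrant/bm25" is an assumed model id; use
# SparseTextEmbedding.list_supported_models() to see what your version ships.
bm25 = SparseTextEmbedding(model_name="Qdrant/bm25")

documents = [
    "FastEmbed now ships a BM25 implementation.",
    "Sparse vectors pair well with dense ones in hybrid search.",
]
for sparse in bm25.embed(documents):
    # Each result is a sparse vector: parallel arrays of token indices and weights.
    print(sparse.indices[:5], sparse.values[:5])
```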
Fixes
- Fix None cache directory in parallel mode by @joein in #277
- Fix hybrid search example for pydantic v1 by @joein in #263
- Fix MiniLM by @I8dNLo in #275
- Fix parameter propagation in parallel mode and BM42 parallelism by @joein in #274
- Pin NumPy <2 by @Anush008 in #278
Docs
- Replace Data Source by @NirantK in #206
- Add examples with supported model types to the readme by @generall in #271
New Contributors
- @deichrenner made their first contribution in #270. Thank you.
Full Changelog: v0.3.0...v0.3.1
v0.3.0
v0.2.7
Changelog
Features 🪄
- #214 Add ONNX providers setter by @joein
- #224 Add GPU support by @joein (usage sketch below)
- #230 Update tokenizers by @joein
- #201 Speed up model downloading with fine-grained file selection by @Anush008 @joein
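A minimal sketch of the providers setter and GPU support, assuming the `providers` list is forwarded to ONNX Runtime as-is (a GPU-enabled onnxruntime build is required for CUDAExecutionProvider):

```python
from fastembed import TextEmbedding

# providers= is assumed to be passed straight through to ONNX Runtime;
# CUDAExecutionProvider needs a GPU-enabled onnxruntime installed.
model = TextEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

embeddings = list(model.embed(["Run the encoder on GPU when available."]))
print(embeddings[0].shape)
```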
Fixes 🪛
- #179 Fix model cache invalidation when a GCS download was interrupted by @joein
- #223 Allow using fastembed behind a firewall by utilising models cached on disk by @Thiru-GVT @joein
New models 🏆
Various documentation, workflow, and notebook improvements by @NirantK @generall @arunppsg
Full Changelog: v0.2.6...v0.2.7
v0.2.6
What's Changed
- feat: support mixedbread-ai/mxbai-embed-large-v1 by @yuvraj-wale in #158 (usage sketch after this list)
- fix: case-insensitive check model_management.py by @Anush008 in #160
- Add misspelled version of SPLADE++ model for English by @NirantK in #161
- Hybrid Search Tutorial by @NirantK in #165
- fix: unify existing patterns, remove redundant ones by @joein in #168
- Fix SPLADE++ parallelism by @joein in #169
- Update ruff by @joein in #172
- new: simplify imports by @joein in #171
- fix: model sizes in the supported models list by @joein in #167
- Update size_in_GB for BAAI/bge-small-en-v1.5 model by @NirantK in #176
- refactoring: update imports in notebooks by @joein in #173
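A minimal sketch tying a few of the items above together: inspecting the corrected model-size metadata and embedding with the newly supported mixedbread-ai/mxbai-embed-large-v1 (the metadata keys returned by `list_supported_models()` are assumptions):

```python
from fastembed import TextEmbedding

# Inspect model metadata, including size information; the exact dictionary
# keys ("model", "size_in_GB") are assumptions here.
for entry in TextEmbedding.list_supported_models():
    if entry["model"] in ("mixedbread-ai/mxbai-embed-large-v1", "BAAI/bge-small-en-v1.5"):
        print(entry["model"], entry.get("size_in_GB"))

# Embed with the newly added model.
model = TextEmbedding(model_name="mixedbread-ai/mxbai-embed-large-v1")
vectors = list(model.embed(["mxbai-embed-large-v1 is now supported in fastembed."]))
print(vectors[0].shape)  # expected (1024,) for this model
```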
New Contributors
- @yuvraj-wale made their first contribution in #158
- @joein made their first contribution in #168
Full Changelog: v0.2.5...v0.2.6
v0.2.5
What's Changed
- Make debugging easier: Add import statement for version debugging by @NirantK in #151
- Fix model name typo + Add SPLADE notebook by @NirantK in #155
- Case-insensitive model name checks by @Anush008 in #157
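A minimal sketch combining the SPLADE notebook material with the case-insensitive name checks, assuming the SPLADE++ English model id is prithivida/Splade_PP_en_v1 (the id is an assumption; per #157, differently cased spellings should resolve to the same model):

```python
from fastembed import SparseTextEmbedding

# Assumed SPLADE++ model id; thanks to the case-insensitive checks in #157,
# a differently cased spelling such as "Prithivida/Splade_PP_en_v1" should
# resolve to the same model.
splade = SparseTextEmbedding(model_name="prithivida/Splade_PP_en_v1")

texts = ["sparse neural retrieval with SPLADE", "fastembed sparse vectors"]
for sparse in splade.embed(texts):
    print(len(sparse.indices), "non-zero terms")
```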
Contributors
- Add CONTRIBUTING.md file with guidelines for contributing to FastEmbed by @NirantK in #150
- Fix Issue Template forms by @NirantK in #152
- Move CONTRIBUTING.md + Add Test for Adding New Models by @NirantK in #154
Full Changelog: v0.2.4...v0.2.5