Commit
v0.10.68 (#15557)
logan-markewich authored Aug 21, 2024
1 parent 41bf201 commit ef9a21c
Showing 11 changed files with 287 additions and 112 deletions.
72 changes: 72 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,77 @@
# ChangeLog

## [2024-08-21]

### `llama-index-core` [0.10.68]

- remove nested progress bars in base element node parser (#15550)
- Adding exhaustive docs for workflows (#15556)
- Adding multi-strategy workflow with reflection notebook example (#15445)
- remove openai dep from core (#15527)
- Improve token counter to handle more response types (#15501)
- feat: Allow using step decorator without parentheses (#15540)
- feat: workflow services (aka nested workflows) (#15325)
- Remove requirement to specify "allowed_query_fields" parameter when using "cypher_validator" in TextToCypher retriever (#15506)

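The "step decorator without parentheses" entry (#15540) follows a common Python pattern for decorators that work both bare (`@step`) and parameterized (`@step(...)`). A minimal sketch of that pattern — the names and the `retries` parameter here are illustrative, not LlamaIndex's actual implementation:

```python
import functools


def step(func=None, *, retries=0):
    """Decorator usable as @step or @step(retries=2).

    Called bare, ``func`` is the decorated function; called with
    keyword arguments, ``func`` is None and we return a decorator.
    """
    def decorate(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            return f(*args, **kwargs)
        wrapper.retries = retries  # illustrative metadata
        return wrapper

    if func is None:
        return decorate        # used as @step(retries=2)
    return decorate(func)      # used as @step


@step
def plain():
    return "plain"


@step(retries=2)
def with_args():
    return "with_args"
```

The key is the `func=None` sentinel plus keyword-only options: bare usage passes the function positionally, while parenthesized usage passes nothing positional.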
### `llama-index-embeddings-mistralai` [0.1.6]

- fix mistral embeddings usage (#15508)

### `llama-index-embeddings-ollama` [0.2.0]

- use ollama client for embeddings (#15478)

### `llama-index-embeddings-openvino` [0.2.1]

- support static input shape for openvino embedding and reranker (#15521)

### `llama-index-graph-stores-neptune` [0.1.8]

- Added code to expose structured schema for Neptune (#15507)

### `llama-index-llms-ai21` [0.3.2]

- Integration: AI21 Tools support (#15518)

### `llama-index-llms-bedrock` [0.1.13]

- Support token counting for llama-index integration with bedrock (#15491)

### `llama-index-llms-cohere` [0.2.2]

- feat: add tool calling support for achat cohere (#15539)

### `llama-index-llms-gigachat` [0.1.0]

- Adding gigachat LLM support (#15313)

### `llama-index-llms-openai` [0.1.31]

- Fix incorrect type in OpenAI token usage report (#15524)
- allow streaming token counts for openai (#15548)

### `llama-index-postprocessor-nvidia-rerank` [0.2.1]

- add truncate support (#15490)
- Update to 0.2.0, remove old code (#15533)
- update default model to nvidia/nv-rerankqa-mistral-4b-v3 (#15543)

### `llama-index-readers-bitbucket` [0.1.4]

- Fixing the issues in loading file paths from bitbucket (#15311)

### `llama-index-readers-google` [0.3.1]

- enhance google drive reader for improved functionality and usability (#15512)

### `llama-index-readers-remote` [0.1.6]

- check and sanitize remote reader urls (#15494)

### `llama-index-vector-stores-qdrant` [0.2.17]

- fix: setting IDF modifier in QdrantVectorStore for sparse vectors (#15538)

## [2024-08-18]

### `llama-index-core` [0.10.67]
72 changes: 72 additions & 0 deletions docs/docs/CHANGELOG.md
(identical diff to CHANGELOG.md above)
4 changes: 4 additions & 0 deletions docs/docs/api_reference/embeddings/gigachat.md
@@ -0,0 +1,4 @@
::: llama_index.embeddings.gigachat
    options:
      members:
        - GigaChatEmbedding
4 changes: 4 additions & 0 deletions docs/docs/api_reference/llms/gigachat.md
@@ -0,0 +1,4 @@
::: llama_index.llms.gigachat
    options:
      members:
        - GigaChatLLM
7 changes: 7 additions & 0 deletions docs/mkdocs.yml
@@ -169,6 +169,7 @@ nav:
- ./examples/data_connectors/ObsidianReaderDemo.ipynb
- ./examples/data_connectors/PathwayReaderDemo.ipynb
- ./examples/data_connectors/PineconeDemo.ipynb
- ./examples/data_connectors/PreprocessReaderDemo.ipynb
- ./examples/data_connectors/PsychicDemo.ipynb
- ./examples/data_connectors/QdrantDemo.ipynb
- ./examples/data_connectors/SlackDemo.ipynb
@@ -206,6 +207,7 @@ nav:
- ./examples/embeddings/fastembed.ipynb
- ./examples/embeddings/fireworks.ipynb
- ./examples/embeddings/gemini.ipynb
- ./examples/embeddings/gigachat.ipynb
- ./examples/embeddings/google_palm.ipynb
- ./examples/embeddings/gradient.ipynb
- ./examples/embeddings/huggingface.ipynb
@@ -648,6 +650,7 @@ nav:
- ./examples/workflow/corrective_rag_pack.ipynb
- ./examples/workflow/function_calling_agent.ipynb
- ./examples/workflow/long_rag_pack.ipynb
- ./examples/workflow/multi_strategy_workflow.ipynb
- ./examples/workflow/parallel_execution.ipynb
- ./examples/workflow/rag.ipynb
- ./examples/workflow/react_agent.ipynb
@@ -823,6 +826,7 @@ nav:
- ./api_reference/embeddings/fastembed.md
- ./api_reference/embeddings/fireworks.md
- ./api_reference/embeddings/gemini.md
- ./api_reference/embeddings/gigachat.md
- ./api_reference/embeddings/google.md
- ./api_reference/embeddings/gradient.md
- ./api_reference/embeddings/huggingface.md
@@ -920,6 +924,7 @@ nav:
- ./api_reference/llms/fireworks.md
- ./api_reference/llms/friendli.md
- ./api_reference/llms/gemini.md
- ./api_reference/llms/gigachat.md
- ./api_reference/llms/gradient.md
- ./api_reference/llms/groq.md
- ./api_reference/llms/huggingface.md
@@ -2122,6 +2127,8 @@ plugins:
- ../llama-index-integrations/indices/llama-index-indices-managed-bge-m3
- ../llama-index-integrations/tools/llama-index-tools-box
- ../llama-index-integrations/llms/llama-index-llms-sambanova
- ../llama-index-integrations/embeddings/llama-index-embeddings-gigachat
- ../llama-index-integrations/llms/llama-index-llms-gigachat
- redirects:
redirect_maps:
./api/llama_index.vector_stores.MongoDBAtlasVectorSearch.html: api_reference/storage/vector_store/mongodb.md
2 changes: 1 addition & 1 deletion llama-index-core/llama_index/core/__init__.py
@@ -1,6 +1,6 @@
"""Init file of LlamaIndex."""

__version__ = "0.10.67"
__version__ = "0.10.68.post1"

import logging
from logging import NullHandler
4 changes: 2 additions & 2 deletions llama-index-core/llama_index/core/postprocessor/optimizer.py
@@ -79,9 +79,9 @@ def __init__(
)

if tokenizer_fn is None:
import nltk.data
import nltk

tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")
tokenizer = nltk.tokenize.PunktSentenceTokenizer()
tokenizer_fn = tokenizer.tokenize
self._tokenizer_fn = tokenizer_fn

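The hunk above swaps `nltk.data.load("tokenizers/punkt/english.pickle")` for `nltk.tokenize.PunktSentenceTokenizer()` while keeping the injectable `tokenizer_fn` shape intact. A stdlib-only sketch of that default-injection pattern — the class name is illustrative and a naive regex splitter stands in for NLTK's Punkt tokenizer:

```python
import re
from typing import Callable, List, Optional


class SentenceOptimizer:
    """Illustrative: accept a tokenizer_fn, else build a default one."""

    def __init__(
        self, tokenizer_fn: Optional[Callable[[str], List[str]]] = None
    ) -> None:
        if tokenizer_fn is None:
            # Stand-in for nltk.tokenize.PunktSentenceTokenizer().tokenize:
            # split after sentence-final punctuation followed by whitespace.
            def default_tokenize(text: str) -> List[str]:
                return [
                    s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s
                ]

            tokenizer_fn = default_tokenize
        self._tokenizer_fn = tokenizer_fn

    def sentences(self, text: str) -> List[str]:
        return self._tokenizer_fn(text)
```

Because callers only ever see `tokenizer_fn`, the default can be swapped (as in this commit) without touching the public interface.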
4 changes: 2 additions & 2 deletions llama-index-core/llama_index/core/utils.py
@@ -62,9 +62,9 @@ def __init__(self) -> None:
nltk.download("stopwords", download_dir=self._nltk_data_dir)

try:
nltk.data.find("tokenizers/punkt")
nltk.data.find("tokenizers/punkt_tab")
except LookupError:
nltk.download("punkt", download_dir=self._nltk_data_dir)
nltk.download("punkt_tab", download_dir=self._nltk_data_dir)

@property
def stopwords(self) -> List[str]:
3 changes: 2 additions & 1 deletion llama-index-core/pyproject.toml
@@ -43,7 +43,7 @@ name = "llama-index-core"
packages = [{include = "llama_index"}]
readme = "README.md"
repository = "https://github.com/run-llama/llama_index"
version = "0.10.67"
version = "0.10.68.post1"

[tool.poetry.dependencies]
SQLAlchemy = {extras = ["asyncio"], version = ">=1.4.49"}
@@ -55,6 +55,7 @@ nest-asyncio = "^1.5.8"
nltk = ">=3.8.1,!=3.9" # Should be >= 3.8.2 but nltk removed 3.8.2 from pypi, 3.9 is broken
numpy = "<2.0.0" # Pin until we adapt to Numpy v2
pandas = "*"
pydantic = "<3.0"
python = ">=3.8.1,<4.0"
tenacity = ">=8.2.0,!=8.4.0,<9.0.0" # Avoid 8.4.0 which lacks tenacity.asyncio
tiktoken = ">=0.3.3"