Commit: v0.10.35 (#13345)

logan-markewich authored May 7, 2024
1 parent 42deb58 commit 38eb6db
Showing 20 changed files with 383 additions and 240 deletions.
53 changes: 53 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,58 @@
# ChangeLog

## [2024-05-07]

### `llama-index-agent-introspective` [0.1.0]

- Add CRITIC and reflection agent integrations (#13108)

### `llama-index-core` [0.10.35]

- Fix `from_defaults()` erasing summary memory buffer history (#13325)
- Use the existing async event loop instead of `asyncio.run()` in core (#13309) (see the sketch below)
- Fix async streaming from the query engine in the condense-question chat engine (#13306)
- Handle `ValueError` in `extract_table_summaries` in element node parsers (#13318)
- Handle the LLM properly for `QASummaryQueryEngineBuilder` and `RouterQueryEngine` (#13281)
- Expand instrumentation payloads (#13302)
- Fix bug in SQL join statement missing the schema (#13277)
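
For the `asyncio.run()` change, the pattern is roughly the following. This is a hand-written sketch of the idea, not the exact helper in `llama-index-core`; the function name is made up:

```python
import asyncio


def run_coroutine(coro):
    """Run `coro` from synchronous code without breaking a live event loop."""
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running (plain script), so starting one is safe.
        return asyncio.run(coro)
    # A loop is already running (e.g. in Jupyter or a server callback);
    # asyncio.run() would raise here, so schedule on the existing loop instead.
    return loop.create_task(coro)
```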

### `llama-index-embeddings-jinaai` [0.1.5]

- Add `encoding_type` parameter to the `JinaEmbedding` class (#13172)
- Fix encoding type access in `JinaEmbedding` (#13315)

### `llama-index-embeddings-nvidia` [0.1.0]

- Add NVIDIA NIM embeddings support (#13177)
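
A minimal usage sketch; the `NVIDIA` class name comes from the API reference page added in this commit, while the `api_key` parameter is an assumption:

```python
from llama_index.embeddings.nvidia import NVIDIA

# api_key is an assumed parameter; self-hosted NIM deployments may differ.
embed_model = NVIDIA(api_key="nvapi-...")
vector = embed_model.get_text_embedding("Hello, NIM!")
print(len(vector))
```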

### `llama-index-llms-mistralai` [0.1.12]

- Fix async issue when streaming with Mistral AI (#13292)
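
A short sketch of the code path this fix covers (async streaming), assuming a valid API key; the model name is illustrative:

```python
import asyncio

from llama_index.llms.mistralai import MistralAI


async def main() -> None:
    llm = MistralAI(api_key="...", model="mistral-small-latest")
    # astream_complete yields response deltas as they arrive.
    async for chunk in await llm.astream_complete("Write one sentence about caves."):
        print(chunk.delta, end="", flush=True)


asyncio.run(main())
```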

### `llama-index-llms-nvidia` [0.1.0]

- Add NVIDIA NIM LLM support (#13176)
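
A minimal sketch; the `NVIDIA` class name is taken from the new API reference page below, and `api_key` is an assumption:

```python
from llama_index.llms.nvidia import NVIDIA

llm = NVIDIA(api_key="nvapi-...")  # assumed constructor argument
response = llm.complete("In one sentence, what is NVIDIA NIM?")
print(response.text)
```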

### `llama-index-postprocessor-nvidia-rerank` [0.1.0]

- Add NVIDIA NIM rerank support (#13178)
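
A usage sketch as a node postprocessor; the `NVIDIA` class name comes from the new API reference page, while `api_key` and `top_n` are assumed parameter names:

```python
from llama_index.core.schema import NodeWithScore, TextNode
from llama_index.postprocessor.nvidia_rerank import NVIDIA

reranker = NVIDIA(api_key="nvapi-...", top_n=2)  # assumed constructor arguments
nodes = [
    NodeWithScore(node=TextNode(text=text), score=0.5)
    for text in (
        "NIM serves optimized model microservices.",
        "Unrelated sentence about cooking.",
        "Rerankers reorder retrieved nodes by relevance.",
    )
]
ranked = reranker.postprocess_nodes(nodes, query_str="What does NIM do?")
for node in ranked:
    print(node.score, node.get_content())
```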

### `llama-index-readers-file` [0.1.21]

- Update `MarkdownReader` to parse text before the first header (#13327)
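
A quick way to see the behavior change, assuming a local markdown file:

```python
from pathlib import Path

from llama_index.readers.file import MarkdownReader

Path("example.md").write_text("Preamble before any header.\n\n# Title\n\nBody text.\n")
docs = MarkdownReader().load_data(Path("example.md"))
print(docs[0].text)  # with #13327 the preamble is kept instead of dropped
```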

### `llama-index-readers-web` [0.1.13]

- Add Spider Web Loader (#13200)
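
A hedged sketch using the `SpiderReader` name added to the web readers API reference in this commit; the constructor parameters are assumptions based on Spider's hosted API:

```python
from llama_index.readers.web import SpiderReader

# api_key, mode, and url are assumed parameter names.
reader = SpiderReader(api_key="spider-...", mode="scrape")
docs = reader.load_data(url="https://example.com")
print(docs[0].text[:200])
```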

### `llama-index-vector-stores-vespa` [0.1.0]

- Add VectorStore integration for Vespa (#13213)
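
A minimal sketch using the `VespaVectorStore` class documented in this commit; the no-argument constructor is an assumption, and a running Vespa application plus a configured embedding model are required:

```python
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.vespa import VespaVectorStore

vector_store = VespaVectorStore()  # assumed default constructor
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="Vespa is a platform for serving and ranking big data.")],
    storage_context=storage_context,
)
print(index.as_query_engine().query("What is Vespa?"))
```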

### `llama-index-vector-stores-vertexaivectorsearch` [0.1.0]

- Add support for Vertex AI Vector Search as a vector store (#13186)
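
A construction sketch using the `VertexAIVectorStore` class documented in this commit; all parameter names below are assumptions, and a deployed Vector Search index and endpoint are required:

```python
from llama_index.vector_stores.vertexaivectorsearch import VertexAIVectorStore

vector_store = VertexAIVectorStore(
    project_id="my-gcp-project",   # assumed parameter names throughout
    region="us-central1",
    index_id="my-index-id",
    endpoint_id="my-endpoint-id",
)
```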

## [2024-05-02]

### `llama-index-core` [0.10.34]
53 changes: 53 additions & 0 deletions docs/docs/CHANGELOG.md
@@ -1,5 +1,58 @@
# ChangeLog

## [2024-05-07]

### `llama-index-agent-introspective` [0.1.0]

- Add CRITIC and reflection agent integrations (#13108)

### `llama-index-core` [0.10.35]

- Fix `from_defaults()` erasing summary memory buffer history (#13325)
- Use the existing async event loop instead of `asyncio.run()` in core (#13309)
- Fix async streaming from the query engine in the condense-question chat engine (#13306)
- Handle `ValueError` in `extract_table_summaries` in element node parsers (#13318)
- Handle the LLM properly for `QASummaryQueryEngineBuilder` and `RouterQueryEngine` (#13281)
- Expand instrumentation payloads (#13302)
- Fix bug in SQL join statement missing the schema (#13277)

### `llama-index-embeddings-jinaai` [0.1.5]

- Add `encoding_type` parameter to the `JinaEmbedding` class (#13172)
- Fix encoding type access in `JinaEmbedding` (#13315)

### `llama-index-embeddings-nvidia` [0.1.0]

- Add NVIDIA NIM embeddings support (#13177)

### `llama-index-llms-mistralai` [0.1.12]

- Fix async issue when streaming with Mistral AI (#13292)

### `llama-index-llms-nvidia` [0.1.0]

- Add NVIDIA NIM LLM support (#13176)

### `llama-index-postprocessor-nvidia-rerank` [0.1.0]

- Add NVIDIA NIM rerank support (#13178)

### `llama-index-readers-file` [0.1.21]

- Update `MarkdownReader` to parse text before the first header (#13327)

### `llama-index-readers-web` [0.1.13]

- Add Spider Web Loader (#13200)

### `llama-index-vector-stores-vespa` [0.1.0]

- Add VectorStore integration for Vespa (#13213)

### `llama-index-vector-stores-vertexaivectorsearch` [0.1.0]

- Add support for Vertex AI Vector Search as a vector store (#13186)

## [2024-05-02]

### `llama-index-core` [0.10.34]
4 changes: 4 additions & 0 deletions docs/docs/api_reference/embeddings/nvidia.md
@@ -0,0 +1,4 @@
::: llama_index.embeddings.nvidia
    options:
      members:
        - NVIDIA
4 changes: 4 additions & 0 deletions docs/docs/api_reference/llms/nvidia.md
@@ -0,0 +1,4 @@
::: llama_index.llms.nvidia
    options:
      members:
        - NVIDIA
4 changes: 4 additions & 0 deletions docs/docs/api_reference/postprocessor/nvidia_rerank.md
@@ -0,0 +1,4 @@
::: llama_index.postprocessor.nvidia_rerank
    options:
      members:
        - NVIDIA
1 change: 1 addition & 0 deletions docs/docs/api_reference/readers/web.md
@@ -11,6 +11,7 @@
- RssReader
- SimpleWebPageReader
- SitemapReader
- SpiderReader
- TrafilaturaWebReader
- UnstructuredURLLoader
- WholeSiteReader
4 changes: 4 additions & 0 deletions docs/docs/api_reference/storage/vector_store/vertexaivectorsearch.md
@@ -0,0 +1,4 @@
::: llama_index.vector_stores.vertexaivectorsearch
    options:
      members:
        - VertexAIVectorStore
4 changes: 4 additions & 0 deletions docs/docs/api_reference/storage/vector_store/vespa.md
@@ -0,0 +1,4 @@
::: llama_index.vector_stores.vespa
    options:
      members:
        - VespaVectorStore
16 changes: 16 additions & 0 deletions docs/mkdocs.yml
@@ -197,6 +197,7 @@ nav:
- ./examples/embeddings/ipex_llm.ipynb
- ./examples/embeddings/octoai.ipynb
- ./examples/embeddings/ipex_llm_gpu.ipynb
- ./examples/embeddings/nvidia.ipynb
- Evaluation:
- ./examples/evaluation/TonicValidateEvaluators.ipynb
- ./examples/evaluation/semantic_similarity_eval.ipynb
@@ -311,6 +312,7 @@ nav:
- ./examples/llm/openvino.ipynb
- ./examples/llm/octoai.ipynb
- ./examples/llm/mistral_rs.ipynb
- ./examples/llm/nvidia.ipynb
- Low Level:
- ./examples/low_level/oss_ingestion_retrieval.ipynb
- ./examples/low_level/fusion_retriever.ipynb
@@ -373,6 +375,7 @@ nav:
- ./examples/node_postprocessor/JinaRerank.ipynb
- ./examples/node_postprocessor/rankLLM.ipynb
- ./examples/node_postprocessor/openvino_rerank.ipynb
- ./examples/node_postprocessor/NVIDIARerank.ipynb
- Object Stores:
- ./examples/objects/object_index.ipynb
- Output Parsers:
@@ -461,6 +464,7 @@ nav:
- Tools:
- ./examples/tools/OnDemandLoaderTool.ipynb
- ./examples/tools/eval_query_engine_tool.ipynb
- ./examples/tools/azure_dynamic_sessions.ipynb
- Transforms:
- ./examples/transforms/TransformsEval.ipynb
- Use Cases:
@@ -547,6 +551,8 @@ nav:
- ./examples/vector_stores/AWSDocDBDemo.ipynb
- ./examples/vector_stores/MilvusHybridIndexDemo.ipynb
- ./examples/vector_stores/FirestoreVectorStore.ipynb
- ./examples/vector_stores/VespaIndexDemo.ipynb
- ./examples/vector_stores/VertexAIVectorSearchDemo.ipynb
- Component Guides:
- ./module_guides/index.md
- Models:
@@ -722,6 +728,7 @@ nav:
- ./api_reference/embeddings/llm_rails.md
- ./api_reference/embeddings/mistralai.md
- ./api_reference/embeddings/nomic.md
- ./api_reference/embeddings/nvidia.md
- ./api_reference/embeddings/octoai.md
- ./api_reference/embeddings/ollama.md
- ./api_reference/embeddings/openai.md
@@ -804,6 +811,7 @@ nav:
- ./api_reference/llms/monsterapi.md
- ./api_reference/llms/mymagic.md
- ./api_reference/llms/neutrino.md
- ./api_reference/llms/nvidia.md
- ./api_reference/llms/nvidia_tensorrt.md
- ./api_reference/llms/nvidia_triton.md
- ./api_reference/llms/octoai.md
@@ -946,6 +954,7 @@ nav:
- ./api_reference/postprocessor/long_context_reorder.md
- ./api_reference/postprocessor/longllmlingua.md
- ./api_reference/postprocessor/metadata_replacement.md
- ./api_reference/postprocessor/nvidia_rerank.md
- ./api_reference/postprocessor/openvino_rerank.md
- ./api_reference/postprocessor/presidio.md
- ./api_reference/postprocessor/prev_next.md
@@ -1289,6 +1298,8 @@ nav:
- ./api_reference/storage/vector_store/typesense.md
- ./api_reference/storage/vector_store/upstash.md
- ./api_reference/storage/vector_store/vearch.md
- ./api_reference/storage/vector_store/vertexaivectorsearch.md
- ./api_reference/storage/vector_store/vespa.md
- ./api_reference/storage/vector_store/weaviate.md
- ./api_reference/storage/vector_store/zep.md
- Tools:
@@ -1850,6 +1861,11 @@ plugins:
- ../llama-index-integrations/readers/llama-index-readers-youtube-metadata
- ../llama-index-integrations/llms/llama-index-llms-mistral-rs
- ../llama-index-integrations/agent/llama-index-agent-introspective
- ../llama-index-integrations/vector_stores/llama-index-vector-stores-vertexaivectorsearch
- ../llama-index-integrations/vector_stores/llama-index-vector-stores-vespa
- ../llama-index-integrations/embeddings/llama-index-embeddings-nvidia
- ../llama-index-integrations/postprocessor/llama-index-postprocessor-nvidia-rerank
- ../llama-index-integrations/llms/llama-index-llms-nvidia
- redirects:
redirect_maps:
./api/llama_index.vector_stores.MongoDBAtlasVectorSearch.html: api_reference/storage/vector_store/mongodb.md
2 changes: 1 addition & 1 deletion llama-index-core/llama_index/core/__init__.py
@@ -1,6 +1,6 @@
"""Init file of LlamaIndex."""

__version__ = "0.10.34"
__version__ = "0.10.35"

import logging
from logging import NullHandler
2 changes: 1 addition & 1 deletion llama-index-core/pyproject.toml
@@ -43,7 +43,7 @@ name = "llama-index-core"
packages = [{include = "llama_index"}]
readme = "README.md"
repository = "https://github.com/run-llama/llama_index"
version = "0.10.34"
version = "0.10.35"

[tool.poetry.dependencies]
SQLAlchemy = {extras = ["asyncio"], version = ">=1.4.49"}
(changed file in llama-index-agent-openai-legacy; full path not shown in this view)
@@ -228,9 +228,9 @@ def _get_stream_ai_response(
)
thread.start()
# Wait for the event to be set
chat_stream_response._is_function_not_none_thread_event.wait()
chat_stream_response.is_function_not_none_thread_event.wait()
# If it is executing an openAI function, wait for the thread to finish
if chat_stream_response._is_function:
if chat_stream_response.is_function:
thread.join()

# if it's false, return the answer (to stream)
@@ -249,7 +249,7 @@ async def _get_async_stream_ai_response(
)
# wait until openAI functions stop executing
chat_stream_response._ensure_async_setup()
await chat_stream_response._is_function_false_event.wait()
await chat_stream_response.is_function_false_event.wait()

# return response stream
return chat_stream_response
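
These hunks (and the matching ones in the next file) rename `_is_function` and the related events to public names (`is_function`, `is_function_not_none_thread_event`, `is_function_false_event`), since agent code outside the response class waits on them. A simplified, self-contained sketch of the thread handoff; the chunk handling is made up:

```python
import threading


class StreamHandoff:
    """Simplified stand-in for the stream-response object above."""

    def __init__(self) -> None:
        self.is_function = None
        self.is_function_not_none_thread_event = threading.Event()

    def write_response(self, chunks) -> None:
        for chunk in chunks:
            self.is_function = chunk.startswith("function:")
            # Signal waiters once is_function holds a real value.
            self.is_function_not_none_thread_event.set()


handoff = StreamHandoff()
writer = threading.Thread(
    target=handoff.write_response, args=(["function: add", "..."],)
)
writer.start()
handoff.is_function_not_none_thread_event.wait()
if handoff.is_function:
    writer.join()  # let the tool call finish before returning the stream
```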
2 changes: 2 additions & 2 deletions llama-index-integrations/agent/llama-index-agent-openai-legacy/pyproject.toml
@@ -28,11 +28,11 @@ exclude = ["**/BUILD"]
license = "MIT"
name = "llama-index-agent-openai-legacy"
readme = "README.md"
version = "0.1.3"
version = "0.1.4"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
llama-index-core = "^0.10.1"
llama-index-core = "^0.10.35"
llama-index-llms-openai = "^0.1.1"

[tool.poetry.group.dev.dependencies]
(changed file in llama-index-agent-openai; full path not shown in this view)
@@ -389,9 +389,9 @@ def _get_stream_ai_response(
)
thread.start()
# Wait for the event to be set
chat_stream_response._is_function_not_none_thread_event.wait()
chat_stream_response.is_function_not_none_thread_event.wait()
# If it is executing an openAI function, wait for the thread to finish
if chat_stream_response._is_function:
if chat_stream_response.is_function:
thread.join()

# if it's false, return the answer (to stream)
@@ -414,7 +414,7 @@ async def _get_async_stream_ai_response(
chat_stream_response._ensure_async_setup()

# wait until openAI functions stop executing
await chat_stream_response._is_function_false_event.wait()
await chat_stream_response.is_function_false_event.wait()

# return response stream
return chat_stream_response
2 changes: 2 additions & 2 deletions llama-index-integrations/agent/llama-index-agent-openai/pyproject.toml
@@ -28,11 +28,11 @@ exclude = ["**/BUILD"]
license = "MIT"
name = "llama-index-agent-openai"
readme = "README.md"
version = "0.2.3"
version = "0.2.4"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
llama-index-core = "^0.10.30"
llama-index-core = "^0.10.35"
llama-index-llms-openai = "^0.1.5"
openai = ">=1.14.0"

(changed file; path not shown in this view)
@@ -72,7 +72,7 @@ def write_response_to_history(
final_text = ""
for chat in self.chat_stream:
# LLM response queue
self._is_function = is_function(chat.message)
self.is_function = is_function(chat.message)
self.put_in_queue(chat.delta)
final_text += chat.delta or ""
if chat.raw is not None:
@@ -92,7 +92,7 @@ def write_response_to_history(
self.documents += convert_chat_response_to_documents(
chat, self.citations_settings
)
if self._is_function is not None: # if loop has gone through iteration
if self.is_function is not None: # if loop has gone through iteration
# NOTE: this is to handle the special case where we consume some of the
# chat stream, but not all of it (e.g. in react agent)
chat.message.content = final_text.strip() # final message
@@ -105,10 +105,10 @@
else:
raise

self._is_done = True
self.is_done = True

# This act as is_done events for any consumers waiting
self._is_function_not_none_thread_event.set()
self.is_function_not_none_thread_event.set()

async def awrite_response_to_history(
self,
@@ -124,11 +124,11 @@ async def awrite_response_to_history(
final_text = ""
async for chat in self.achat_stream:
# Chat response queue
self._is_function = is_function(chat.message)
self.is_function = is_function(chat.message)
self.aput_in_queue(chat.delta)
final_text += chat.delta or ""
if self._is_function is False:
self._is_function_false_event.set()
if self.is_function is False:
self.is_function_false_event.set()
if chat.raw is not None:
# Citations stream event
if (
@@ -146,15 +146,15 @@
self.documents += convert_chat_response_to_documents(
chat, self.citations_settings
)
self._new_item_event.set()
if self._is_function is not None: # if loop has gone through iteration
self.new_item_event.set()
if self.is_function is not None: # if loop has gone through iteration
# NOTE: this is to handle the special case where we consume some of the
# chat stream, but not all of it (e.g. in react agent)
chat.message.content = final_text.strip() # final message
memory.put(chat.message)
except Exception as e:
logger.warning(f"Encountered exception writing response to history: {e}")
self._is_done = True
self.is_done = True


class ChatModeCitations(str, Enum):
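
The async path above does the same handoff with `asyncio.Event`: the producer sets `is_function_false_event` as soon as it sees a plain-text (non-function) response, and `_get_async_stream_ai_response` awaits it before returning the stream. A standalone sketch of that pattern:

```python
import asyncio


async def produce(state: dict, chunks) -> None:
    for chunk in chunks:
        state["is_function"] = chunk.startswith("function:")
        if state["is_function"] is False:
            # Plain text is streaming; unblock the awaiting caller.
            state["is_function_false_event"].set()
        await asyncio.sleep(0)  # yield control to the consumer


async def main() -> None:
    state = {"is_function": None, "is_function_false_event": asyncio.Event()}
    producer = asyncio.create_task(produce(state, ["Hello", " world"]))
    await state["is_function_false_event"].wait()
    print("streaming plain text, not a function call")
    await producer


asyncio.run(main())
```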
