
v0.11.4 (#15788)
logan-markewich authored Sep 2, 2024
1 parent b16434c commit 1c6b064
Showing 14 changed files with 268 additions and 135 deletions.
60 changes: 60 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,65 @@
# ChangeLog

## [2024-09-02]

### `llama-index-core` [0.11.4]

- Add mypy to core (#14883)
- Fix incorrect instrumentation fields/types (#15752)
- FunctionCallingAgent sources bug + light wrapper to create agent (#15783)
- Add text to sql advanced workflow nb (#15775)
- fix: remove context after streaming workflow to enable streaming again (#15776)
- Fix chat memory persisting and loading methods to use correct JSON format (#15545)
- Fix `_example_type` class var being read as private attr with Pydantic V2 (#15758)

### `llama-index-embeddings-litellm` [0.2.1]

- add dimensions param to LiteLLMEmbedding, fix a bug that prevents reading vars from env (#15770)

### `llama-index-embeddings-upstage` [0.2.1]

- Bugfix upstage embedding when initializing the UpstageEmbedding class (#15767)

### `llama-index-embeddings-sagemaker-endpoint` [0.2.2]

- Fix Sagemaker Field required issue (#15778)

### `llama-index-graph-stores-falkordb` [0.2.1]

- fix relations upsert with special chars (#15769)

### `llama-index-graph-stores-neo4j` [0.3.1]

- Add native vector index support for neo4j lpg and fix vector filters (#15759)

### `llama-index-llms-azure-inference` [0.2.2]

- fix: GitHub Models metadata retrieval (#15747)

### `llama-index-llms-bedrock` [0.2.1]

- Update `base.py` to fix `self` issues (#15729)

### `llama-index-llms-ollama` [0.3.1]

- add ollama response usage (#15773)

### `llama-index-llms-sagemaker-endpoint` [0.2.2]

- Fix Sagemaker Field required issue (#15778)

### `llama-index-multi-modal-llms-anthropic` [0.2.1]

- Support image type detection without knowing the file name (#15763)

### `llama-index-vector-stores-milvus` [0.2.2]

- feat: implement get_nodes for MilvusVectorStore (#15696)

### `llama-index-vector-stores-tencentvectordb` [0.2.1]

- fix: tencentvectordb inconsistent attribute name (#15733)

## [2024-08-29]

### `llama-index-core` [0.11.3]
60 changes: 60 additions & 0 deletions docs/docs/CHANGELOG.md
4 changes: 4 additions & 0 deletions docs/docs/api_reference/selectors/notdiamond.md
@@ -0,0 +1,4 @@
+::: llama_index.selectors.notdiamond
+    options:
+      members:
+        - NotDiamondSelector
2 changes: 2 additions & 0 deletions docs/mkdocs.yml
@@ -475,6 +475,7 @@ nav:
 - ./examples/output_parsing/llm_program.ipynb
 - ./examples/output_parsing/lmformatenforcer_pydantic_program.ipynb
 - ./examples/output_parsing/lmformatenforcer_regular_expressions.ipynb
+- ./examples/output_parsing/nvidia_output_parsing.ipynb
 - ./examples/output_parsing/openai_pydantic_program.ipynb
 - ./examples/output_parsing/openai_sub_question.ipynb
 - Param Optimizer:
@@ -655,6 +656,7 @@ nav:
 - ./examples/vector_stores/qdrant_hybrid.ipynb
 - Workflow:
 - ./examples/workflow/JSONalyze_query_engine.ipynb
+- ./examples/workflow/advanced_text_to_sql.ipynb
 - ./examples/workflow/citation_query_engine.ipynb
 - ./examples/workflow/corrective_rag_pack.ipynb
 - ./examples/workflow/function_calling_agent.ipynb
2 changes: 1 addition & 1 deletion llama-index-core/llama_index/core/__init__.py
@@ -1,6 +1,6 @@
 """Init file of LlamaIndex."""

-__version__ = "0.11.3"
+__version__ = "0.11.4"

 import logging
 from logging import NullHandler
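The bump above is the only change to `llama-index-core/llama_index/core/__init__.py`. As a side note (not part of this commit), a minimal sketch of how a caller might gate behavior on a version floor like this one, assuming plain `major.minor.patch` strings; `at_least` is a hypothetical helper, not a LlamaIndex API:

```python
def at_least(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, part by part."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) >= to_tuple(required)


# Numeric comparison matters: "0.11.4" >= "0.2.0" is False as plain strings.
print(at_least("0.11.4", "0.11.3"))  # True
print(at_least("0.11.4", "0.2.0"))   # True
```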
@@ -375,9 +375,7 @@ def run_step(self, step: TaskStep, task: Task, **kwargs: Any) -> TaskStepOutput:
         except AttributeError:
             response_str = str(response)

-        agent_response = AgentChatResponse(
-            response=response_str, sources=task.extra_state["sources"]
-        )
+        agent_response = AgentChatResponse(response=response_str, sources=tool_outputs)

         return TaskStepOutput(
             output=agent_response,
@@ -470,9 +468,7 @@ async def arun_step(
         except AttributeError:
             response_str = str(response)

-        agent_response = AgentChatResponse(
-            response=response_str, sources=task.extra_state["sources"]
-        )
+        agent_response = AgentChatResponse(response=response_str, sources=tool_outputs)

         return TaskStepOutput(
             output=agent_response,
1 change: 1 addition & 0 deletions llama-index-core/llama_index/core/agent/runner/base.py
@@ -555,6 +555,7 @@ def finalize_response(
             step_output.output.sources = self.get_task(task_id).extra_state.get(
                 "sources", []
             )
+            step_output.output.set_source_nodes()

         return cast(AGENT_CHAT_RESPONSE_TYPE, step_output.output)
10 changes: 8 additions & 2 deletions llama-index-core/llama_index/core/chat_engine/types.py
@@ -54,12 +54,15 @@ class AgentChatResponse:
     is_dummy_stream: bool = False
     metadata: Optional[Dict[str, Any]] = None

-    def __post_init__(self) -> None:
+    def set_source_nodes(self) -> None:
         if self.sources and not self.source_nodes:
             for tool_output in self.sources:
                 if isinstance(tool_output.raw_output, (Response, StreamingResponse)):
                     self.source_nodes.extend(tool_output.raw_output.source_nodes)

+    def __post_init__(self) -> None:
+        self.set_source_nodes()
+
     def __str__(self) -> str:
         return self.response
@@ -116,12 +119,15 @@ class StreamingAgentChatResponse:
     # Track if an exception occurred
     exception: Optional[Exception] = None

-    def __post_init__(self) -> None:
+    def set_source_nodes(self) -> None:
         if self.sources and not self.source_nodes:
             for tool_output in self.sources:
                 if isinstance(tool_output.raw_output, (Response, StreamingResponse)):
                     self.source_nodes.extend(tool_output.raw_output.source_nodes)

+    def __post_init__(self) -> None:
+        self.set_source_nodes()
+
     def __str__(self) -> str:
         if self.is_done and not self.queue.empty() and not self.is_function:
             while self.queue.queue:
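The diff above extracts the `__post_init__` logic into a reusable `set_source_nodes` method, so source nodes can also be collected after `sources` is assigned post-construction. A minimal sketch of that pattern, using hypothetical stand-in classes rather than the real LlamaIndex types:

```python
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class ToolOutput:
    # Hypothetical stand-in: carries a tool call's raw result.
    raw_output: Any


@dataclass
class Response:
    # Hypothetical stand-in: carries retrieved source nodes.
    source_nodes: List[str] = field(default_factory=list)


@dataclass
class ChatResponseSketch:
    response: str = ""
    sources: List[ToolOutput] = field(default_factory=list)
    source_nodes: List[str] = field(default_factory=list)

    def set_source_nodes(self) -> None:
        # Collect source nodes from tool outputs; safe to call repeatedly.
        if self.sources and not self.source_nodes:
            for tool_output in self.sources:
                if isinstance(tool_output.raw_output, Response):
                    self.source_nodes.extend(tool_output.raw_output.source_nodes)

    def __post_init__(self) -> None:
        # Construction still populates source_nodes, preserving old behavior.
        self.set_source_nodes()


# Sources assigned after construction can now be folded in explicitly,
# which logic living only in __post_init__ could not do.
resp = ChatResponseSketch(response="hi")
resp.sources = [ToolOutput(raw_output=Response(source_nodes=["node-1"]))]
resp.set_source_nodes()
print(resp.source_nodes)
```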
2 changes: 1 addition & 1 deletion llama-index-core/pyproject.toml
@@ -46,7 +46,7 @@ name = "llama-index-core"
 packages = [{include = "llama_index"}]
 readme = "README.md"
 repository = "https://github.com/run-llama/llama_index"
-version = "0.11.3"
+version = "0.11.4"

 [tool.poetry.dependencies]
 SQLAlchemy = {extras = ["asyncio"], version = ">=1.4.49"}
@@ -27,7 +27,7 @@ exclude = ["**/BUILD"]
 license = "MIT"
 name = "llama-index-embeddings-openai"
 readme = "README.md"
-version = "0.2.3"
+version = "0.2.4"

 [tool.poetry.dependencies]
 python = ">=3.8.1,<4.0"
@@ -13,6 +13,7 @@ import_path = "llama_index.graph_stores.falkordb"

 [tool.llamahub.class_authors]
 FalkorDBGraphStore = "llama-index"
+FalkorDBPropertyGraphStore = "llama-index"

 [tool.mypy]
 disallow_untyped_defs = true
@@ -28,11 +28,11 @@ keywords = ["PDF", "llama", "llama-parse", "parse"]
 license = "MIT"
 name = "llama-index-readers-llama-parse"
 readme = "README.md"
-version = "0.2.0"
+version = "0.3.0"

 [tool.poetry.dependencies]
 python = ">=3.8.1,<4.0"
-llama-parse = ">=0.4.0"
+llama-parse = ">=0.5.0"
 llama-index-core = "^0.11.0"

 [tool.poetry.group.dev.dependencies]
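The `llama-parse` floor moves to `>=0.5.0` while `llama-index-core` keeps its caret constraint. A small sketch of what Poetry's caret operator permits for `^0.11.0`, assuming standard caret rules (for a `0.x` floor, the minor number is the breaking boundary); `satisfies_caret` is a hypothetical helper, not a Poetry API:

```python
def satisfies_caret(version: str, floor: str) -> bool:
    """Check version against a caret constraint ^floor.

    ^1.2.3 allows >=1.2.3,<2.0.0; ^0.11.0 allows >=0.11.0,<0.12.0,
    since for 0.x releases the minor number marks breaking changes.
    Assumes plain "major.minor.patch" strings.
    """
    v = tuple(int(p) for p in version.split("."))
    f = tuple(int(p) for p in floor.split("."))
    upper = (f[0] + 1, 0, 0) if f[0] > 0 else (0, f[1] + 1, 0)
    return f <= v < upper


print(satisfies_caret("0.11.4", "0.11.0"))  # True: within ^0.11.0
print(satisfies_caret("0.12.0", "0.11.0"))  # False: next breaking minor
```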