LlamaIndex Workflows and human intervention or human-in-the-loop #15960
Replies: 1 comment 10 replies
-
I found a similar open issue that might be relevant to your question: [Feature Request]: Human-In-The-Loop Multistep Query, which is closed and was last updated on February 06, 2024 [1]. Currently, LlamaIndex Workflows support dynamic intervention through event handling: events can be sent to specific steps or broadcast to all steps, enabling dynamic adjustments and interventions during workflow execution. For implementing human-in-the-loop mechanisms, there isn't specific documentation or an example available in the repository. However, you can create custom event handlers to manage dynamic interventions. Here is an example of how to create a custom event handler:

```python
import json

from llama_index.core.instrumentation.event_handlers import BaseEventHandler
from llama_index.packs.diff_private_simple_dataset.events import (
    SyntheticExampleStartEvent,
    SyntheticExampleEndEvent,
    LLMEmptyResponseEvent,
    EmptyIntersectionEvent,
)


class TooManyProblemsEncounteredError(Exception):
    pass


class DiffPrivacyEventHandler(BaseEventHandler):
    num_splits: int
    t_max: int
    synthetic_example_starts: int = 0
    synthetic_example_ends: int = 0
    llm_empty_responses: int = 0
    empty_intersections: int = 0
    critical_threshold: float = 0.025  # ~2.5% error rate with OpenAI API calls

    @classmethod
    def class_name(cls) -> str:
        """Class name."""
        return "DiffPrivacyEventHandler"

    def compute_approximate_error_rate(self) -> float:
        """Returns an approximate error rate."""
        return (self.llm_empty_responses + self.empty_intersections) / (
            self.synthetic_example_starts * self.t_max * (self.num_splits + 1)
        )

    def handle(self, event) -> None:
        """Logic for handling event."""
        if isinstance(event, SyntheticExampleStartEvent):
            self.synthetic_example_starts += 1
        elif isinstance(event, SyntheticExampleEndEvent):
            self.synthetic_example_ends += 1
        elif isinstance(event, LLMEmptyResponseEvent):
            self.llm_empty_responses += 1
            with open("error_report.json", "w") as f:
                json.dump(self.dict(), f)
            if self.compute_approximate_error_rate() > self.critical_threshold:
                raise TooManyProblemsEncounteredError(
                    "There were too many errors encountered."
                )
        elif isinstance(event, EmptyIntersectionEvent):
            self.empty_intersections += 1
            with open("error_report.json", "w") as f:
                json.dump(self.dict(), f)
            if self.compute_approximate_error_rate() > self.critical_threshold:
                raise TooManyProblemsEncounteredError(
                    "There were too many errors encountered."
                )
```

For more detailed examples and documentation, you can refer to the LlamaIndex documentation on workflows. In particular, the Reflection Workflow example provides a detailed walkthrough of implementing a workflow that dynamically validates and retries structured outputs [4][5][6][7]. Regarding future support for human-in-the-loop features similar to LangGraph's, I couldn't find specific information in the repository. It might be worth keeping an eye on the repository for any updates or new feature announcements.
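To make the validate-and-retry idea mentioned above concrete, here is a minimal, framework-free sketch of that pattern. The `fake_llm` function, the retry limit, and the JSON-parsing validation are all assumptions for illustration; in the actual Reflection Workflow, the LLM call and the validation would live in separate workflow steps connected by events.

```python
# Framework-free sketch of a validate-and-retry loop for structured output.
# `fake_llm` is a hypothetical stand-in for a real LLM call: it returns
# invalid JSON on the first attempt and valid JSON afterwards.
import json


def fake_llm(prompt: str, attempt: int) -> str:
    if attempt == 0:
        return "not json"
    return json.dumps({"answer": 42})


def run_with_retries(prompt: str, max_retries: int = 3) -> dict:
    """Call the model, validate the structured output, retry on failure."""
    for attempt in range(max_retries):
        raw = fake_llm(prompt, attempt)
        try:
            return json.loads(raw)  # validation step: parse structured output
        except json.JSONDecodeError:
            # In a real reflection workflow, the parse error would be fed
            # back into the next prompt so the model can correct itself.
            continue
    raise RuntimeError("validation failed after retries")


result = run_with_retries("Return JSON")
print(result)  # {'answer': 42}
```

The key design point is that validation failures loop back into generation rather than aborting the run, which is the same shape the Reflection Workflow expresses with events.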
-
Is there a recommended or standard way of implementing human-in-the-loop with LlamaIndex Workflows? I suppose one approach is stepping through the workflow with a mechanism to change the context that the workflow (including nested workflows) sees. However, stepping through the workflow is not the same as planned (or perhaps even dynamic) intervention.
In comparison, LangGraph has built-in human-in-the-loop support. Is this a feature that LlamaIndex Workflows will support in the future?
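The "step through and edit context" approach described above can be sketched without any framework at all: each step reads and writes a shared context, and a human callback gets a chance to inspect and modify that context between steps. All names here are hypothetical illustrations; LlamaIndex Workflows would express the steps as `@step` methods sharing a `Context` object rather than plain functions sharing a dict.

```python
# Framework-free sketch of pausing between steps for human intervention.
# Each step mutates a shared context dict; `intervene` is called after
# every step and may edit the context before the next step runs.
from typing import Callable


def step_draft(ctx: dict) -> None:
    ctx["draft"] = f"summary of {ctx['topic']}"


def step_publish(ctx: dict) -> None:
    ctx["published"] = ctx["draft"].upper()


def run_stepwise(steps, ctx: dict, intervene: Callable[[dict], None]) -> dict:
    for s in steps:
        s(ctx)
        intervene(ctx)  # pause point: the human may edit ctx here
    return ctx


# Simulated human intervention: overwrite the draft before publishing.
def human(ctx: dict) -> None:
    if "draft" in ctx and "published" not in ctx:
        ctx["draft"] = "human-edited draft"


ctx = run_stepwise([step_draft, step_publish], {"topic": "workflows"}, human)
print(ctx["published"])  # HUMAN-EDITED DRAFT
```

This captures the distinction raised in the question: the pause points here are planned into the runner, whereas truly dynamic intervention would require the workflow engine itself to expose an interruptible step boundary.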