
Async memory management for OpenAIAgentWorker #17375

Merged
merged 15 commits on Jan 10, 2025

Conversation

mathematisse
Contributor

@mathematisse mathematisse commented Dec 26, 2024

Description

Following the recent changes to async memory (#16127, among other minor PRs), this implements the async aput for the OpenAI agent worker.

  • Added afinalize_task in the OpenAI agent package
  • Added aput_messages to the base memory, allowing custom implementations to gather puts concurrently (if/when possible)
  • Biggest change: added a detection mechanism using inspect in the chat engine to ensure on_stream_end_fn is called correctly. That field is currently used only by the OpenAI agent and the ReAct agent, but I implemented it this way to keep the change non-breaking (it handles detection of partial functions).
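The detection described in the last bullet can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the helper names are hypothetical, and the only assumption taken from the description is that `on_stream_end_fn` may be a sync function, an async function, or a `functools.partial` wrapping either.

```python
import inspect
from functools import partial

def is_async_callable(fn) -> bool:
    # functools.partial hides the wrapped target; unwrap before inspecting,
    # since older inspect.iscoroutinefunction versions do not look inside it
    while isinstance(fn, partial):
        fn = fn.func
    return inspect.iscoroutinefunction(fn)

async def call_on_stream_end(on_stream_end_fn, *args):
    # dispatch: await async callbacks, call sync ones directly
    if is_async_callable(on_stream_end_fn):
        await on_stream_end_fn(*args)
    else:
        on_stream_end_fn(*args)
```

Unwrapping the partial first is the key point: without it, a `partial(async_fn, ...)` would be dispatched down the sync path and the coroutine would never be awaited.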

Version Bump?

  • Yes, for llama-index-agent-openai

Type of Change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • I believe this change is already covered by existing unit tests

Suggested Checklist:

  • I have performed a self-review of my own code
  • My changes generate no new warnings
  • New and existing unit tests pass locally with my changes
  • I ran make format; make lint to appease the lint gods

@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Dec 26, 2024
@mathematisse
Contributor Author

mathematisse commented Dec 27, 2024

I've added another await aput that was missing; my mistake this time.

I've intentionally left the non-async put for the memory held in extra_state, as that memory is meant to be temporary from what I understand.

Merry Christmas, and a happy new year to y'all in advance!

EDIT: Mistake on my side: that aput would have applied to the extra-state memory. Double mistake, just corrected.

If I have your approval @logan-markewich, I can also apply the await logic to the extra_state memory, which would standardize async use in the OpenAI agent, even if BaseMemory doesn't really implement any await logic.
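For context, the `aput_messages` addition mentioned in the description could look something like this sketch. The class shape and fields are illustrative, not llama-index's real `BaseMemory`; only the method names `put`/`aput`/`aput_messages` come from the PR discussion.

```python
import asyncio
from typing import List

class ChatMessage:
    # stand-in for the real message type
    def __init__(self, content: str) -> None:
        self.content = content

class BaseMemorySketch:
    def __init__(self) -> None:
        self.messages: List[ChatMessage] = []

    def put(self, message: ChatMessage) -> None:
        self.messages.append(message)

    async def aput(self, message: ChatMessage) -> None:
        # default async wrapper around the sync put
        self.put(message)

    async def aput_messages(self, messages: List[ChatMessage]) -> None:
        # sequential by default to preserve ordering; subclasses backed by
        # stores that tolerate concurrent writes could use asyncio.gather
        for m in messages:
            await self.aput(m)

async def main() -> List[str]:
    mem = BaseMemorySketch()
    await mem.aput_messages([ChatMessage("hi"), ChatMessage("there")])
    return [m.content for m in mem.messages]

if __name__ == "__main__":
    asyncio.run(main())
```

The sequential default is the conservative choice: chat history is order-sensitive, so gathering is only safe when a custom backend guarantees ordering itself.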

@mathematisse
Contributor Author

@logan-markewich reverted vbump, merged main, ready to merge hopefully!

@mathematisse
Contributor Author

I've added additional migrations for afinalize_tasks and afinalize_response of the BaseAgentRunner, ensuring the correct functions are called
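The migration point being described can be sketched like this: the async finalizer on the runner must delegate to the worker's async finalizer rather than the sync one. Everything here is a toy shape; only the method names `finalize_task`/`afinalize_task`/`afinalize_response` come from the comment above.

```python
import asyncio

class WorkerSketch:
    def __init__(self) -> None:
        self.finalized = []

    def finalize_task(self, task_id: str) -> None:
        # sync path (pre-existing behavior)
        self.finalized.append(("sync", task_id))

    async def afinalize_task(self, task_id: str) -> None:
        # async counterpart; a real worker would await memory writes here
        self.finalized.append(("async", task_id))

class RunnerSketch:
    def __init__(self, worker: WorkerSketch) -> None:
        self.worker = worker

    def finalize_response(self, task_id: str) -> None:
        self.worker.finalize_task(task_id)

    async def afinalize_response(self, task_id: str) -> None:
        # the migration point: await the async finalizer instead of
        # silently falling back to the sync one
        await self.worker.afinalize_task(task_id)
```

Without the delegation fix, an async runner would call the sync `finalize_task` and any `await aput` inside the worker's async path would never run.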

@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:S This PR changes 10-29 lines, ignoring generated files. labels Jan 10, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Jan 10, 2025
@logan-markewich logan-markewich merged commit 5d7b5d2 into run-llama:main Jan 10, 2025
11 checks passed