Async memory management for OpenAIAgentWorker #17375
Conversation
I've intentionally left the non-async put for the memory kept in extra_state, since that memory is meant to be temporary, from what I understand. Merry Christmas, and in advance, a good new year to y'all!
EDIT: Mistake on my side: this aput would have applied to the extra_state memory; double mistake, just corrected. If I have your approval @logan-markewich, I can also apply await logic to the extra_state memory, which would standardize async use in the OpenAI agent, even though BaseMemory doesn't really implement any await logic.
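For context, here is a minimal sketch of the split being discussed, assuming a standard ChatMemoryBuffer for both stores (the finalize function and its names are illustrative, not the PR's actual code): the persistent memory is written through the async aput, while the temporary extra_state-style memory keeps the sync put.

```python
# Illustrative sketch only, not the PR's diff: persistent memory goes
# through the async `aput`; the temporary scratch memory keeps sync `put`.
import asyncio

from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core.memory import ChatMemoryBuffer


async def finalize(main_memory: ChatMemoryBuffer, scratch_memory: ChatMemoryBuffer) -> None:
    msg = ChatMessage(role=MessageRole.ASSISTANT, content="final answer")
    # Persistent memory: await aput so a BaseMemory backed by real I/O
    # does not block the event loop (per the discussion above, the base
    # implementation currently just wraps the sync put).
    await main_memory.aput(msg)
    # Temporary scratch memory: sync put is fine, it lives only in-process.
    scratch_memory.put(msg)


asyncio.run(
    finalize(ChatMemoryBuffer.from_defaults(), ChatMemoryBuffer.from_defaults())
)
```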
Review thread on llama-index-integrations/agent/llama-index-agent-openai/pyproject.toml (outdated, resolved)
@logan-markewich reverted the version bump and merged main; ready to merge, hopefully!
I've added additional migrations for afinalize_tasks and afinalize_response on the BaseAgentRunner, ensuring the correct functions are called.
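A rough sketch of what such a migration looks like, assuming the common worker layout where per-task scratch messages live in task.extra_state["new_memory"] (the class and method body below are hypothetical, not taken from the PR):

```python
# Hypothetical async finalize, mirroring the pattern described above
# (not taken from the PR): the async variant awaits aput so the whole
# finalize path stays non-blocking.
class SketchAgentWorker:
    """Stripped-down stand-in for an agent worker, for illustration only."""

    async def afinalize_task(self, task) -> None:
        # Move the task's temporary messages into the persistent memory
        # through the async API instead of the sync put.
        for msg in task.extra_state["new_memory"].get_all():
            await task.memory.aput(msg)
        # The temporary buffer is in-process only; sync reset is fine.
        task.extra_state["new_memory"].reset()
```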
Description
Following recent changes to async memory (#16127, among other minor ones), this implements the async aput for the OpenAI agent worker.
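As a usage note, this path is exercised through the agent's async entry points; a minimal sketch, assuming an OpenAI API key is configured (the model name and prompt are placeholders):

```python
# Minimal usage sketch: the async entry points now persist chat history
# via the memory's aput rather than the sync put.
import asyncio

from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI


async def main() -> None:
    agent = OpenAIAgent.from_tools([], llm=OpenAI(model="gpt-4o-mini"))
    response = await agent.achat("Hello!")
    print(response)


asyncio.run(main())
```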
Version Bump?
Type of Change
How Has This Been Tested?
Suggested Checklist:
Ran make format; make lint to appease the lint gods