Add back history and reset subcommand in magics #997
Conversation
This time it's not specific to OpenAI.
Thank you for opening this PR and for re-implementing message history in the magics! I've left suggestions about 1) bounding the chat history to 2 exchanges at most, and 2) avoiding the pseudo-XML syntax being used for non-chat providers. This is a good stopgap solution for users who want to use history in AI magics as soon as possible.
There are better ways to pass message history in LangChain, however. In the future, we will definitely want to rework this logic to use the new LCEL syntax and the RunnableWithMessageHistory class from langchain_core.runnables.history; see #392.
def _append_exchange(self, prompt: str, output: str):
    """Appends a conversational exchange between user and an OpenAI Chat
    model to a transcript that will be included in future exchanges."""
    self.transcript.append(HumanMessage(prompt))
    self.transcript.append(AIMessage(output))
There should be bounds on the length of the transcript passed to the LLM, since some LLMs have shorter token limits. Can you mimic the implementation of the history in chat, where we only allow up to 2 Human-AI exchanges? That would involve modifying this method to remove all but the last Human-AI exchange before appending the new Human-AI exchange to self.transcript.

If you wish to implement longer history as well, you can make the size of the history configurable via %ai config, while defaulting to 2 exchanges. See this PR for a reference: #962
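A minimal sketch of the bounded-history idea described above. Messages are modeled here as plain (role, content) tuples in place of LangChain's HumanMessage/AIMessage objects, and the helper name is illustrative, not the actual method in this PR:

```python
# Sketch: bound the transcript to the most recent Human-AI exchanges.
# (role, content) tuples stand in for LangChain message objects.
MAX_EXCHANGES = 2  # assumed default, mirroring the chat implementation


def append_exchange(transcript, prompt, output, max_exchanges=MAX_EXCHANGES):
    """Append a new exchange, then trim to the last `max_exchanges` exchanges."""
    transcript.append(("human", prompt))
    transcript.append(("ai", output))
    # Each exchange is one human + one AI message, so keep 2 * max_exchanges.
    del transcript[: -2 * max_exchanges]


transcript = []
for i in range(4):
    append_exchange(transcript, f"q{i}", f"a{i}")
# After four exchanges, only the last two (four messages) remain.
```

The same trimming could instead run before appending, as the comment suggests; either way the invariant is that the transcript never exceeds 2 * max_exchanges messages.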
(
    f"<HUMAN>{message.content}</HUMAN>"
    if message.type == "human"
    else message.content
)
for message in self.transcript + [HumanMessage(content=prompt)]
I'm not familiar with this pseudo-XML syntax, and it may confuse LLMs into returning their responses in pseudo-XML as well, e.g. <AI>The square root of pi is...</AI>.

Can more conventional string formatting be used here instead? LangChain simply prepends Human: to human messages and AI: to AI messages.
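A sketch of the conventional prefix formatting suggested above, again using (role, content) tuples in place of LangChain message objects; the function name is illustrative only:

```python
# Sketch: render a transcript with conventional "Human:"/"AI:" prefixes
# instead of pseudo-XML tags. (role, content) tuples stand in for
# LangChain's HumanMessage/AIMessage objects.
def format_transcript(transcript, prompt):
    lines = [
        f"Human: {content}" if role == "human" else f"AI: {content}"
        for role, content in transcript + [("human", prompt)]
    ]
    return "\n".join(lines)


history = [("human", "What is 2 + 2?"), ("ai", "4")]
print(format_transcript(history, "And doubled?"))
# Human: What is 2 + 2?
# AI: 4
# Human: And doubled?
```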
#551 removed the history associated uniquely with the openai-chat provider in magic commands. It also removed the "reset" command to delete said history. Docs were updated to remove mention of the history and the reset command.

This PR adds back the history in magic commands. It also adds the %ai reset subcommand to delete said history. A mention of the history and the reset command is added to the docs.

The history transcript maintains the distinction between human and AI messages by wrapping the prompts and responses in HumanMessage and AIMessage objects.

For non-chat providers, human messages are wrapped in pseudo-XML <HUMAN>...</HUMAN> tags unless the transcript contains nothing but the first prompt. This is as yet untested.