
[RPG GAME]FunctionAgent Version #236

Open · wants to merge 18 commits into develop
Conversation

Contributor

@Southpika Southpika commented Dec 27, 2023

1. Adds a version of the RPG game implemented with `FunctionAgent`.

Blocked on `tool_choice` support and the text-to-image tool being ready.
TODO:

  1. The text-to-image tool currently returns a mocked image from a fixed local path.
  2. `FunctionAgent` triggering is still unstable and shows self-generation artifacts; in practice, only the first turn reliably triggers text-to-image.
[demo screenshots omitted]

@codecov-commenter

codecov-commenter commented Dec 27, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (79eae0c) 69.61% vs. head (62ce209) 69.61% (no change).

Additional details and impacted files
@@           Coverage Diff            @@
##           develop     #236   +/-   ##
========================================
  Coverage    69.61%   69.61%           
========================================
  Files           62       62           
  Lines         3100     3100           
========================================
  Hits          2158     2158           
  Misses         942      942           

☔ View full report in Codecov by Sentry.

Comment on lines +66 to +92
async def _chat(history):
    prompt = history[-1][0]
    response = await self.run(prompt)
    # Overwrite the last memory message with the story text returned by the tool.
    self.memory.msg_manager.messages[-1] = AIMessage(
        eval(response.chat_history[2].content)["return_story"]
    )
    raw_messages.extend(response.chat_history)
    if len(response.chat_history) >= 3:
        output_result = eval(response.chat_history[2].content)["return_story"]
    else:
        output_result = response.text
    if response.steps and response.steps[-1].output_files:
        # If the last round produced a file output, we need to show it.
        output_file = response.steps[-1].output_files[-1]
        file_content = await output_file.read_contents()
        if get_file_type(output_file.filename) == "image":
            # If it is an image, display it inline on the same chat page.
            base64_encoded = base64.b64encode(file_content).decode("utf-8")
            output_result = eval(response.chat_history[2].content)[
                "return_story"
            ] + IMAGE_HTML.format(BASE64_ENCODED=base64_encoded)
    history[-1][1] = output_result
    return (
        history,
        _messages_to_dicts(raw_messages),
        _messages_to_dicts(self.memory.get_messages()),
    )
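One fragile point in the snippet above is that the tool payload in `chat_history` is parsed with `eval()`, which is unsafe and breaks if the content is not a Python expression. A minimal alternative, assuming the payload is either JSON or a Python dict literal (`parse_tool_content` is a hypothetical helper name, not part of the PR):

```python
import ast
import json


def parse_tool_content(content: str) -> dict:
    """Parse a tool-call payload stored as message content.

    Tries strict JSON first, then falls back to ast.literal_eval,
    which safely handles Python-style dicts with single quotes
    without executing arbitrary code the way eval() would.
    """
    try:
        return json.loads(content)
    except (json.JSONDecodeError, ValueError):
        return ast.literal_eval(content)


# Usage, mirroring the snippet above:
# story = parse_tool_content(response.chat_history[2].content)["return_story"]
```

This keeps the happy path identical while rejecting content that is neither JSON nor a literal, instead of silently executing it.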
Southpika (Contributor Author) commented:

Because the `_chat` function body itself was modified, the `launch_gradio_demo` method cannot be inherited directly.
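For context on the image branch in `_chat`: the generated picture is shown inline by base64-encoding the file bytes into an `<img>` data URI. A minimal sketch; the exact `IMAGE_HTML` template used in the PR is an assumption, and `image_bytes_to_html` is a hypothetical helper:

```python
import base64

# Hypothetical template mirroring the IMAGE_HTML name used in the snippet;
# the actual markup in the PR may differ.
IMAGE_HTML = '<img src="data:image/png;base64,{BASE64_ENCODED}" alt="generated image"/>'


def image_bytes_to_html(file_content: bytes) -> str:
    """Encode raw image bytes as a base64 data-URI <img> tag."""
    base64_encoded = base64.b64encode(file_content).decode("utf-8")
    return IMAGE_HTML.format(BASE64_ENCODED=base64_encoded)
```

Appending this HTML to the story text lets a Gradio chat component render the image in the same message bubble as the narration.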

@CLAassistant

CLAassistant commented Oct 14, 2024

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
0 out of 2 committers have signed the CLA.

❌ shiyutang
❌ Southpika
You have signed the CLA already but the status is still pending? Let us recheck it.

4 participants