Problem adding an API endpoint for 无问芯穹 (Infini-AI) #3692
Replies: 2 comments
-
Hey @PangDingPing! Good to see you diving into new adventures with Langchain-Chatchat. Let's crack this nut together. […]
By following these steps, you should be able to pinpoint the cause.
-
Changing "provider" in model_config to "XinghuoWorker" somehow made it work; I still need to look into what actually caused this… (configs/model_config.py)
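For reference, a minimal sketch of the working entry this reply implies, assuming the online models live in an ONLINE_LLM_MODEL dict in configs/model_config.py (the dict name is an assumption; the endpoint details are the poster's own from the question in this thread):

```python
# Hedged sketch of the working configs/model_config.py entry; only the
# "provider" change comes from the reply above, the rest is the poster's.
ONLINE_LLM_MODEL = {
    "xinqiong-api": {
        "model_name": "xinqiong-api",
        "api_base_url": "https://cloud.infini-ai.com/maas/qwen-7b-chat/nvidia/",
        "api_key": "*******",
        # Changed from "XinqiongWorker" to the name of a built-in worker:
        "provider": "XinghuoWorker",
    },
}
```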
-
xinghuo-api (Running) works fine.
Asking for help: adding the 无问芯穹 (Infini-AI) endpoint fails.
After adding the code below, the model list in the web UI already shows "xinqiong-api (Running)",
but submitting a chat message fails with the following error output:
==============================Langchain-Chatchat Configuration==============================
Operating system: Windows-10-10.0.22631-SP0
Python version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
Project version: v0.2.10
langchain version: 0.0.354; fastchat version: 0.2.35
Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['xinqiong-api', 'xinghuo-api'] @ cpu
{'api_base_url': 'https://cloud.infini-ai.com/maas/qwen-7b-chat/nvidia/',
'api_key': '',
'device': 'auto',
'host': '127.0.0.1',
'infer_turbo': False,
'model_name': 'xinqiong-api',
'online_api': True,
'port': 21011,
'provider': 'XinqiongWorker',
'worker_class': <class 'server.model_workers.xinqiong.XinqiongWorker'>}
{'APISecret': '',
'APPID': '',
'api_key': '**********',
'device': 'auto',
'host': '127.0.0.1',
'infer_turbo': False,
'online_api': True,
'port': 21003,
'provider': 'XingHuoWorker',
'version': 'v3.5',
'worker_class': <class 'server.model_workers.xinghuo.XingHuoWorker'>}
Current Embeddings model: bge-large-zh-v1.5 @ cuda
Server runtime information:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://127.0.0.1:8501
==============================Langchain-Chatchat Configuration==============================
You can now view your Streamlit app in your browser.
URL: http://127.0.0.1:8501
A new version of Streamlit is available.
See what's new at https://discuss.streamlit.io/c/announcements
Enter the following command to upgrade:
$ pip install streamlit --upgrade
2024-04-10 11:10:13,122 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-04-10 11:10:13,139 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:56581 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-10 11:10:13,455 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-04-10 11:10:13,457 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:56581 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-10 11:10:13,533 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:56581 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-10 11:10:20,376 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/get_model_config "HTTP/1.1 200 OK"
INFO: 127.0.0.1:60082 - "POST /llm_model/get_model_config HTTP/1.1" 200 OK
2024-04-10 11:10:20,845 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-04-10 11:10:20,849 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:50842 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-10 11:10:21,081 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-04-10 11:10:21,083 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:50842 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
INFO: 127.0.0.1:50842 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-10 11:10:21,105 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:52842 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-10 11:11:18,631 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-04-10 11:11:18,633 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:52842 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-10 11:11:18,849 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-04-10 11:11:18,851 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:52842 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-10 11:11:18,880 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:52842 - "POST /chat/chat HTTP/1.1" 200 OK
2024-04-10 11:11:18,900 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The class `langchain_community.chat_models.openai.ChatOpenAI` was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import ChatOpenAI`.
warn_deprecated(
2024-04-10 11:11:19,994 - _client.py[line:1758] - INFO: HTTP Request: POST https://cloud.infini-ai.com/maas/qwen-7b-chat/nvidia/chat/completions "HTTP/1.1 200 OK"
2024-04-10 11:11:19,995 - utils.py[line:38] - ERROR:
Traceback (most recent call last):
File "E:\AI\Langchain-Chatchat\server\utils.py", line 36, in wrap_done
await fn
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain\chains\base.py", line 385, in acall
raise e
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain\chains\base.py", line 379, in acall
await self._acall(inputs, run_manager=run_manager)
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain\chains\llm.py", line 275, in _acall
response = await self.agenerate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain\chains\llm.py", line 142, in agenerate
return await self.llm.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain_core\language_models\chat_models.py", line 554, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain_core\language_models\chat_models.py", line 514, in agenerate
raise exceptions[0]
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain_core\language_models\chat_models.py", line 617, in _agenerate_with_cache
return await self._agenerate(
^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain_community\chat_models\openai.py", line 522, in _agenerate
return await agenerate_from_stream(stream_iter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Anaconda\envs\GLM\Lib\site-packages\langchain_core\language_models\chat_models.py", line 92, in agenerate_from_stream
assert generation is not None
^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-04-10 11:11:20,041 - utils.py[line:40] - ERROR: AssertionError: Caught exception:
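The assert that fires lives in langchain's agenerate_from_stream, which folds streamed chunks into a single generation and insists that at least one chunk arrived; here the endpoint returned 200 OK but the client parsed no usable chunks out of the response stream. A simplified, self-contained sketch of that failure mode (not langchain's actual code):

```python
import asyncio

async def agenerate_from_stream(stream):
    # Simplified stand-in for langchain_core's helper: accumulate
    # streamed chunks; if the stream yields nothing, `generation`
    # stays None and the assert below raises, as in the traceback.
    generation = None
    async for chunk in stream:
        generation = chunk if generation is None else generation + chunk
    assert generation is not None
    return generation

async def empty_stream():
    # Models a 200 OK response whose body contains no parseable
    # SSE chunks, so the async generator yields nothing.
    return
    yield  # unreachable; makes this function an async generator

try:
    asyncio.run(agenerate_from_stream(empty_stream()))
except AssertionError:
    print("empty stream -> AssertionError, matching the log above")
```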
Added to configs/model_config.py:
"xinqiong-api": {
"model_name": "xinqiong-api",
"api_base_url": "https://cloud.infini-ai.com/maas/qwen-7b-chat/nvidia/",
"api_key": "*******",
"provider": "XinqiongWorker",
}
Added to configs/server_config.py:
"xinqiong-api": {
"port": 21011,
},
Created server/model_workers/xinqiong.py, modeled on azure.py:
"""This is a model worker for the Xinqiong (Infini-AI) API."""
import sys
import os
from fastchat.conversation import Conversation
from server.model_workers.base import *
from server.utils import get_httpx_client
from fastchat import conversation as conv
import json
from typing import List, Dict
from configs import logger, log_verbose
class XinqiongWorker(ApiModelWorker):
    def __init__(
        self,
        *,
        controller_addr: str = None,
        worker_addr: str = None,
        model_names: List[str] = ["xinqiong-api"],
        version: str = "xinqiong-api",
        **kwargs,
    ):
        kwargs.update(model_names=model_names, controller_addr=controller_addr, worker_addr=worker_addr)
        super().__init__(**kwargs)
        self.version = version
if __name__ == "__main__":
    import uvicorn
    from server.utils import MakeFastAPIOffline
    from fastchat.serve.base_model_worker import app

    # The listing breaks off here; modeled on azure.py, the block would
    # typically continue along these lines (an assumption):
    worker = XinqiongWorker(
        controller_addr="http://127.0.0.1:20001",
        worker_addr="http://127.0.0.1:21011",
    )
    sys.modules["fastchat.serve.base_model_worker"].worker = worker
    MakeFastAPIOffline(app)
    uvicorn.run(app, port=21011)
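A plausible root cause: azure.py and the other online workers implement a do_chat method that actually calls the remote API and yields response chunks, while the skeleton above defines none, so the request produces no usable stream and ends in the AssertionError shown earlier. A hedged sketch of the shape such a method could take for an OpenAI-compatible endpoint like Infini-AI's; every name below (ApiChatParams, get_httpx_client, the field names) follows the repo's apparent conventions but is an assumption, and the payload builder is split out so it can be checked in isolation:

```python
from typing import Dict, List

def build_chat_payload(messages: List[Dict], model: str,
                       temperature: float, max_tokens: int) -> Dict:
    # Pure helper: assemble an OpenAI-style /chat/completions body,
    # which is what the endpoint in the log above was POSTed with.
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "stream": True,
    }

# Inside XinqiongWorker, a do_chat along these lines would be needed
# (illustrative only, not the project's exact code):
#
#     def do_chat(self, params: ApiChatParams) -> Dict:
#         payload = build_chat_payload(params.messages, "qwen-7b-chat",
#                                      params.temperature, params.max_tokens)
#         url = params.api_base_url + "chat/completions"
#         headers = {"Authorization": f"Bearer {params.api_key}"}
#         text = ""
#         with get_httpx_client() as client:
#             with client.stream("POST", url, headers=headers, json=payload) as r:
#                 for line in r.iter_lines():
#                     ...  # parse each SSE chunk, append its delta to `text`
#                     yield {"error_code": 0, "text": text}

payload = build_chat_payload([{"role": "user", "content": "你好"}],
                             "qwen-7b-chat", 0.7, 1024)
assert payload["stream"] is True
```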