Using LLMFactory to build an Azure OpenAI LLM for the IoT agent crashes #730

Open
rivendell1984 opened this issue Jun 6, 2024 · 4 comments
Labels
bug Something isn't working

Comments

@rivendell1984

Hello,

In the iot_agent_usage.py example, the ToolAgent uses OpenAI by default, so I tried using pne.LLMFactory to build an Azure OpenAI LLM and pass it to ToolAgent. It crashes with the log below:

  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/agents/base.py", line 43, in run
    result: str = self._run(instruction, *args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/agents/tool_agent/agent.py", line 149, in _run
    action_resp: ActionResponse = self._parse_llm_response(llm_resp)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/agents/tool_agent/agent.py", line 214, in _parse_llm_response
    analysis=data["analysis"],
             ~~~~^^^^^^^^^^^^
KeyError: 'analysis'
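For context, the KeyError means the model's JSON reply lacks the `analysis` field that `_parse_llm_response` indexes directly. A minimal sketch of defensive parsing that reports the missing key instead of raising a bare KeyError (hypothetical helper, not the promptulate API):

```python
import json

# Keys the agent expects in the LLM's JSON reply; "analysis" is the one
# from the traceback, extend as needed.
REQUIRED_KEYS = ("analysis",)

def parse_action_response(raw: str) -> dict:
    """Parse the LLM's JSON reply, failing with a clear message on missing keys."""
    data = json.loads(raw)
    missing = [k for k in REQUIRED_KEYS if k not in data]
    if missing:
        raise ValueError(f"LLM response is missing keys {missing}: {data}")
    return data
```

Non-OpenAI models often ignore the expected response schema, which is one common way this field goes missing.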
@rivendell1984 rivendell1984 added the bug Something isn't working label Jun 6, 2024

github-actions bot commented Jun 6, 2024

Hello @rivendell1984, thanks for your first issue and interest in our work 😊!

If this is a bug report, please provide screenshots, relevant logs and minimum viable code to reproduce your issue, which will help us debug the problem.

@Undertone0809
Owner

Can you show your code in detail? It will help me trace the bug.

@rivendell1984
Author

Thanks a lot for the response. My promptulate version is 1.16.7. Below is the code:

import os

import paho.mqtt.client as mqtt

from promptulate.agents import ToolAgent
from promptulate.tools import DuckDuckGoTool, calculator, sleep_tool
from promptulate.tools.human_feedback import HumanFeedBackTool
from promptulate.tools.iot_swith_mqtt import IotSwitchTool
from promptulate.utils.logger import enable_log
import promptulate as pne

enable_log()

os.environ["AZURE_API_KEY"] = "XXX"
os.environ["AZURE_API_BASE"] = "XXX"
os.environ["AZURE_API_VERSION"] = "XXX"

llm = pne.LLMFactory.build(model_name="azure/gpt-35-turbo")

def main():
    # MQTT broker address and port
    broker_address = "XXX"
    broker_port = 1883
    # username and password
    username = "XXX"
    password = "XXX"
    client = mqtt.Client()
    client.username_pw_set(username, password)
    client.connect(broker_address, broker_port)
    tools = [
        DuckDuckGoTool(),
        calculator,
        sleep_tool,
        HumanFeedBackTool(),
        IotSwitchTool(
            client=client,
            rule_table=[
                {
                    "content": "Turn on the air conditioner",
                    "topic": "/123",
                    "ask": "Turn on the air conditioner",
                },
                {"content": "Turn on the heater", "topic": "/123", "ask": "Turn on the heater"},
                {"content": "Turn on the light", "topic": "/123", "ask": "Turn on the light"},
            ],
        ),
    ]
    agent = ToolAgent(
        llm=llm,
        tools=tools,
        enable_role=True,
        agent_name="xiao chang",
        agent_identity="Smart speaker.",
        agent_goal="Control smart home, can turn on the "
        "air conditioner, heater, and lights, and enter into chat mode "
        "after completing the action.",
        agent_constraints="Please try to ask humans before controlling "
        "or switching on electrical appliances.",
    )
    prompt = """I feel so dark now."""
    agent.run(prompt)


if __name__ == "__main__":
    main()

And now I get this log:

[Action] human_feedback args: {'content': "I'm sorry to hear that. Would you like to talk about why you're feeling this way or would you like to discuss something else instead?"}
[Agent ask] I'm sorry to hear that. Would you like to talk about why you're feeling this way or would you like to discuss something else instead?
I want to turn on the light
[Observation] I want to turn on the light
[Thought] The user wants to turn on the light. As an assistant, I need to use the Iot_Switch_Mqtt tool to switch on the light.
[Action] Iot_Switch_Mqtt args: {'question': 'Please switch on the light'}
/home/user/test_env/lib/python3.11/site-packages/promptulate/llms/openai/openai.py:288: DeprecationWarning: ChatOpenAI is deprecated in v1.16.0. Please use pne.LLMFactory instead.
See the detail in https://undertone0809.github.io/promptulate/#/modules/llm/llm?id=llm
  warnings.warn(
Traceback (most recent call last):
  File "/home/user/smart_speaker/iot_agent.py", line 63, in <module>
    main()
  File "/home/user/smart_speaker/iot_agent.py", line 59, in main
    agent.run(prompt)
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/agents/base.py", line 43, in run
    result: str = self._run(instruction, *args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/agents/tool_agent/agent.py", line 169, in _run
    tool_result = self.tool_manager.run_tool(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/tools/manager.py", line 109, in run_tool
    return tool.run(**parameters)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/tools/base.py", line 191, in run
    result: Any = self._run(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/tools/iot_swith_mqtt/tools.py", line 60, in _run
    llm_output = self.llm(prompt)
                 ^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/llms/openai/openai.py", line 283, in __call__
    return self.predict(message_set, stop).content
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/llms/base.py", line 55, in predict
    result = self._predict(messages, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/llms/openai/openai.py", line 297, in _predict
    api_key = self.api_key
              ^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/llms/openai/openai.py", line 89, in api_key
    return pne_config.get_openai_api_key(self.model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/test_env/lib/python3.11/site-packages/promptulate/config.py", line 113, in get_openai_api_key
    raise ValueError("OPENAI API key is not provided. Please set your key.")

I have set the Azure API key via os.environ, and LLMFactory uses litellm to build the model, which is then passed to ToolAgent.
BTW, I have tested litellm for chatting and it works well. It seems the ToolAgent still uses the default OpenAI model.
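The traceback supports that reading: `IotSwitchTool._run` calls `self.llm(prompt)`, and the deprecation warning shows that `llm` is a `ChatOpenAI` instance the tool constructed itself, which then looks for an OpenAI key. The pattern, sketched with stub classes (not actual promptulate code):

```python
class ChatOpenAI:
    """Stub of the tool's default LLM: requires an OpenAI key, like config.py does."""
    def __call__(self, prompt: str) -> str:
        raise ValueError("OPENAI API key is not provided. Please set your key.")

class IotSwitchTool:
    """Stub tool: falls back to its own default LLM when none is injected."""
    def __init__(self, llm=None):
        # The agent-level llm is never forwarded here, so the default is used.
        self.llm = llm or ChatOpenAI()

    def run(self, question: str) -> str:
        return self.llm(question)
```

So even with a working Azure LLM on the agent, the tool's internal call still needs an OpenAI key unless the custom LLM is also passed to the tool (assuming `IotSwitchTool` exposes an `llm` argument for that, which I have not verified).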

@Undertone0809
Owner

Found the problem. I will fix it later. Thanks for the feedback!
