Version
Command-line (Python) version

Operating System
Windows 10

What happened?
Edited config.json to use OpenRouter:
// Configuration for the LLM providers that can be used. Pythagora supports
// OpenAI, Azure, Anthropic and Groq. OpenRouter and local LLMs (such as LM-Studio)
// also work; you can use the "openai" provider to define these.
"llm": {
    "openai": {
        // Base URL of the provider/server, omitting the trailing "chat/completions" part.
        "base_url": "https://openrouter.ai/api/v1",
        "api_key": "key",
        "connect_timeout": 60.0,
        "read_timeout": 10.0
    },
    // Each agent can use a different model or configuration. The default, as before, is GPT-4 Turbo
    // for most tasks and GPT-3.5 Turbo to generate file descriptions. The agent name here should match
    // the Python class name.
    "agent": {
        "default": {
            "provider": "openai",
            "model": "recursal/eagle-7b"
        }
    }
}
Running python main.py gives:
[Pythagora] Pythagora cannot start because the LLM API is not reachable.
What did I do wrong?
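Before digging into Pythagora itself, it can help to confirm that the configured endpoint answers a plain chat-completions request on its own. Below is a minimal sketch, assuming the "requests" package is installed and reusing the base_url, key placeholder, and model from the config above; replace the placeholder key with a real OpenRouter key before running it.

# Standalone reachability check for the endpoint configured in config.json.
# The API key below is a placeholder; substitute your real OpenRouter key.
import requests

base_url = "https://openrouter.ai/api/v1"
api_key = "key"

response = requests.post(
    f"{base_url}/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "recursal/eagle-7b",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
print(response.status_code)
print(response.text[:500])

If this request succeeds but Pythagora still reports the LLM API as unreachable, the problem is more likely in how the URL is built or in the timeout settings than in the key or the network.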
I'm facing the same problem. When I checked the LM Studio logs, they show the request arriving at the endpoint "openai/v1/chat/completions" instead of "v1/chat/completions". It looks like Pythagora is appending "openai" to the endpoint; that extra segment needs to go away for a local LLM to work.
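If that is what's happening, a quick way to confirm it is to probe both URL variants against the local server directly. A small sketch, assuming LM Studio is running on its default port 1234 and the "requests" package is installed; the model name is a placeholder.

# Probe both URL variants to see which one the local server answers,
# which helps confirm whether an extra "openai/" segment is being added.
import requests

payload = {
    "model": "local-model",  # placeholder; adjust to the model loaded in LM Studio
    "messages": [{"role": "user", "content": "ping"}],
}

for path in ("/v1/chat/completions", "/openai/v1/chat/completions"):
    url = f"http://localhost:1234{path}"
    try:
        response = requests.post(url, json=payload, timeout=30)
        print(url, "->", response.status_code)
    except requests.RequestException as exc:
        print(url, "->", exc)

A success on the first path and an error on the second would match what the logs suggest: the extra "openai/" segment is added on the client side, so the fix belongs in the base_url or in how Pythagora joins the path, not in LM Studio.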