Add support for Llama2, Palm, Cohere, Anthropic, Replicate, Azure Models[100+ LLMs] - using LiteLLM #200
base: master
Conversation
Here's how litellm has been integrated into other similar repos: filip-michalsky/SalesGPT#36

@assafelovic @rotemweiss57 can I get a review on this PR?

@ishaan-jaff can you please modify the LLM calls in agent/llm_utils.py?

Updated, @assafelovic.

Hi, thanks for the PR.

@assafelovic any update on this?

This PR is already stale due to many changes. @ishaan-jaff, if you're interested in it we can push this. There is a lot of demand for Llama models.

@assafelovic instead, can we help users use … @arsaboo what error did you run into when using the litellm proxy + …

@assafelovic we're trying to set an … cc @arsaboo can you link the stacktrace here + what you tried so far?
Here's the error that I received with the LiteLLM proxy, @ishaan-jaff (from the `gpt-researcher-1` container logs):

```
INFO:     connection open
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 240, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 151, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 373, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 96, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 94, in app
    await func(session)
  File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 348, in app
    await dependant.call(**values)
  File "/usr/src/app/backend/server.py", line 52, in websocket_endpoint
    report = await manager.start_streaming(task, report_type, websocket)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/websocket_manager.py", line 57, in start_streaming
    report = await run_agent(task, report_type, websocket)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/websocket_manager.py", line 74, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/report_type/basic_report/basic_report.py", line 17, in run
    await researcher.conduct_research()
  File "/usr/src/app/gpt_researcher/master/agent.py", line 80, in conduct_research
    self.context = await self.get_context_by_search(self.query)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/master/agent.py", line 138, in get_context_by_search
    sub_queries = await get_sub_queries(query, self.role, self.cfg, self.parent_query, self.report_type)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/master/functions.py", line 96, in get_sub_queries
    response = await create_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/utils/llm.py", line 76, in create_chat_completion
    response = await provider.get_chat_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/llm_provider/openai/openai.py", line 48, in get_chat_response
    output = await self.llm.ainvoke(messages)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 179, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 570, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 530, in agenerate
    raise exceptions[0]
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 715, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 623, in _agenerate
    response = await self.async_client.create(messages=message_dicts, **params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1159, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1584, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-V-NQb*************yJEA. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
INFO:     connection closed
```

I tried both:

```yaml
OPENAI_API_KEY=sk-V-NXYZyJEA
OPENAI_API_BASE=http://192.168.2.162:4001
```

and

```yaml
OPENAI_API_KEY=sk-V-NXYZyJEA
OPENAI_BASE_URL=http://192.168.2.162:4001
```
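For context, here is a minimal sketch of pointing a gpt-researcher-style app at a locally running LiteLLM proxy. The port and model name below are illustrative assumptions, not taken from this thread. Note that the OpenAI Python SDK v1+ reads `OPENAI_BASE_URL`, while older SDK versions and some LangChain code paths read `OPENAI_API_BASE`, so setting both is a reasonable hedge; the 401 above suggests the base URL was not picked up and the request went to api.openai.com with the proxy key:

```shell
# Assumption: litellm is installed and the proxy is started roughly like this
litellm --model ollama/llama2 --port 4001

# In the app's environment; the key only needs to match whatever
# the proxy is configured to accept, not a real OpenAI key
export OPENAI_API_KEY=sk-anything
export OPENAI_API_BASE=http://localhost:4001   # older openai SDKs / LangChain
export OPENAI_BASE_URL=http://localhost:4001   # openai SDK >= 1.0
```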
it looks like the issue is …
Considering the LLMs support 🤖 release, is LiteLLM support still planned, or has this PR been superseded?
This PR adds support for 50+ models with a standard I/O interface using https://github.com/BerriAI/litellm/.

`ChatLiteLLM()` is integrated into LangChain and allows you to call all models using the `ChatOpenAI` I/O interface: https://python.langchain.com/docs/integrations/chat/litellm

Here's an example of how to use `ChatLiteLLM()`: