Checked other resources
I searched the LangChain documentation with the integrated search.
I used the GitHub search to find a similar question and didn't find it.
I am sure that this is a bug in LangChain rather than my code.
The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

def extract_content(pages):
    prompt = '''Here is the HTML content of the webpage associated with the document type {key} on my site. Generate a textual transcription of this document that respects the structure of the page. Only keep the elements of the page related to the type {key}. {html}'''
    llm = ChatOpenAI(model='gpt-4o-mini', timeout=60)
    template = ChatPromptTemplate.from_messages([('human', prompt)])
    chain = template | llm | StrOutputParser()
    return chain.batch(pages, config={"max_concurrency": 4})
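For context, the traceback below shows what `chain.batch` does internally: `langchain_core/runnables/base.py` maps the chain's `invoke` over the inputs on a thread pool (`list(executor.map(invoke, inputs, configs))`). A minimal stand-in sketch of that mechanism, with a hypothetical `invoke` in place of the real prompt-LLM-parser chain:

```python
from concurrent.futures import ThreadPoolExecutor

def invoke(x):
    # Hypothetical stand-in for one chain invocation (prompt -> LLM -> parser).
    return x.upper()

def batch(inputs, max_concurrency=4):
    # Mirrors the shape of Runnable.batch seen in the traceback: fan the
    # inputs out over a bounded thread pool, then gather every result
    # before returning anything.
    with ThreadPoolExecutor(max_workers=max_concurrency) as executor:
        return list(executor.map(invoke, inputs))

results = batch(["a", "b", "c"])
```

Note that `executor.map` preserves input order and blocks until all results are in, which is why one slow call delays the return of the whole batch.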
Error Message and Stack Trace (if applicable)
[2024-09-17 17:13:23] Task tasks.process_url raised unexpected: APITimeoutError('Request timed out.')
Traceback (most recent call last):
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
yield
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_transports/default.py", line 236, in handle_request
resp = self._pool.handle_request(req)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 101, in handle_request
return self._connection.handle_request(request)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 143, in handle_request
raise exc
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 113, in handle_request
) = self._receive_response_headers(**kwargs)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 186, in _receive_response_headers
event = self._receive_event(timeout=timeout)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 224, in _receive_event
data = self._network_stream.read(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 124, in read
with map_exceptions(exc_map):
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ReadTimeout: The read operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 973, in _request
response = self._client.send(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_client.py", line 926, in send
response = self._send_handling_auth(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_client.py", line 954, in _send_handling_auth
response = self._send_handling_redirects(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
response = self._send_single_request(request)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_client.py", line 1027, in _send_single_request
response = transport.handle_request(request)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_transports/default.py", line 235, in handle_request
with map_httpcore_exceptions():
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ReadTimeout: The read operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/api/env/lib/python3.10/site-packages/celery/app/trace.py", line 453, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/ubuntu/api/app.py", line 20, in __call__
return self.run(*args, **kwargs)
File "/home/ubuntu/api/tasks.py", line 110, in process_url
others = extract_content(links)
File "/home/ubuntu/api/extractor.py", line 192, in extract_content
out = chain.batch(lst, config={"max_concurrency": 4})
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3158, in batch
inputs = step.batch(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 779, in batch
return cast(List[Output], list(executor.map(invoke, inputs, configs)))
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 529, in _wrapped_fn
return contexts.pop().run(fn, *args)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 772, in invoke
return self.invoke(input, config, **kwargs)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 855, in _generate_with_cache
result = self._generate(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 670, in _generate
response = self.client.create(**payload)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_utils/_utils.py", line 274, in wrapper
return func(*args, **kwargs)
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 704, in create
return self._post(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 1260, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 937, in request
return self._request(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 982, in _request
return self._retry_request(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 1075, in _retry_request
return self._request(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 982, in _request
return self._retry_request(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 1075, in _retry_request
return self._request(
File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 992, in _request
raise APITimeoutError(request=request) from err
openai.APITimeoutError: Request timed out.
Description
I am trying to use batching to speed up the processing of certain chains, but the behavior I'm observing suggests that the LLM's timeout is being applied to all the batched calls together rather than to each call individually, which makes no sense to me.
LangSmith shows the same total time for every runnable in the batch, which does not match the duration of each individual LLM call (see attachments). The problem does not occur when I use batch_as_completed.
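To illustrate the difference being described: `batch` gathers every result before returning, while `batch_as_completed` yields each result as soon as its underlying call finishes. A rough stand-in using `concurrent.futures` (the `fake_llm_call` and its latencies are hypothetical, chosen only to make the timing visible):

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fake_llm_call(delay):
    # Stand-in for one LLM request whose latency is `delay` seconds.
    time.sleep(delay)
    return delay

delays = [0.05, 0.3]
start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fake_llm_call, d) for d in delays]
    # batch_as_completed-style consumption: the fast call is observed at
    # roughly 0.05 s, long before the slow call finishes.
    first_done = next(as_completed(futures)).result()
    elapsed_first = time.monotonic() - start
    # batch-style consumption: nothing is observed until every call is done,
    # so the measured time for the whole group is set by the slowest call.
    all_done = [f.result() for f in futures]
    elapsed_all = time.monotonic() - start
```

This matches the symptom above: when results are only surfaced after the whole group completes, every item in the batch appears to take the same total time.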
System Info
System Information
OS: Linux
OS Version: #24~22.04.1-Ubuntu SMP Thu Jul 18 10:43:12 UTC 2024
Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]