
Timeout Issue in Batching: LLM Timeout Applies to Entire Batch Instead of Individual Calls #26610

Open · 5 tasks done
yoch opened this issue Sep 18, 2024 · 1 comment
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature · investigate

Comments

@yoch (Contributor) commented Sep 18, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

def extract_content(pages):
    prompt = '''\
    Here is the HTML content of the webpage associated with the document type {key} on my site.
    Generate a textual transcription of this document that respects the structure of the page.
    Only keep the elements of the page related to the type {key}.

    {html}'''
    llm = ChatOpenAI(model='gpt-4o-mini', timeout=60)
    template = ChatPromptTemplate.from_messages([('human', prompt)])
    chain = template | llm | StrOutputParser()
    # Each item in `pages` provides the {key} and {html} prompt variables.
    return chain.batch(pages, config={"max_concurrency": 4})
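
For context, a minimal (hypothetical) call site; the input shape below is inferred from the prompt variables {key} and {html} and is not taken verbatim from my application:

pages = [
    {"key": "invoice", "html": "<html>...</html>"},
    {"key": "contract", "html": "<html>...</html>"},
]
results = extract_content(pages)  # expectation: each call gets its own 60s timeout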

Error Message and Stack Trace (if applicable)

[2024-09-17 17:13:23] Task tasks.process_url raised unexpected: APITimeoutError('Request timed out.')
Traceback (most recent call last):
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_transports/default.py", line 236, in handle_request
    resp = self._pool.handle_request(req)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 101, in handle_request
    return self._connection.handle_request(request)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 143, in handle_request
    raise exc
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 113, in handle_request
    ) = self._receive_response_headers(**kwargs)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 186, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 224, in _receive_event
    data = self._network_stream.read(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 124, in read
    with map_exceptions(exc_map):
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout: The read operation timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 973, in _request
    response = self._client.send(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_client.py", line 926, in send
    response = self._send_handling_auth(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_client.py", line 954, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_client.py", line 1027, in _send_single_request
    response = transport.handle_request(request)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_transports/default.py", line 235, in handle_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout: The read operation timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/celery/app/trace.py", line 453, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/ubuntu/api/app.py", line 20, in __call__
    return self.run(*args, **kwargs)
  File "/home/ubuntu/api/tasks.py", line 110, in process_url
    others = extract_content(links)
  File "/home/ubuntu/api/extractor.py", line 192, in extract_content
    out = chain.batch(lst, config={"max_concurrency": 4})
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3158, in batch
    inputs = step.batch(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 779, in batch
    return cast(List[Output], list(executor.map(invoke, inputs, configs)))
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 529, in _wrapped_fn
    return contexts.pop().run(fn, *args)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 772, in invoke
    return self.invoke(input, config, **kwargs)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
    self.generate_prompt(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
    raise e
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
    self._generate_with_cache(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 855, in _generate_with_cache
    result = self._generate(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 670, in _generate
    response = self.client.create(**payload)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_utils/_utils.py", line 274, in wrapper
    return func(*args, **kwargs)
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 704, in create
    return self._post(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 1260, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 937, in request
    return self._request(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 982, in _request
    return self._retry_request(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 1075, in _retry_request
    return self._request(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 982, in _request
    return self._retry_request(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 1075, in _retry_request
    return self._request(
  File "/home/ubuntu/api/env/lib/python3.10/site-packages/openai/_base_client.py", line 992, in _request
    raise APITimeoutError(request=request) from err
openai.APITimeoutError: Request timed out.

Description

I am trying to use batching to speed up the processing of certain chains, but from the behavior I'm observing it seems that the LLM's timeout applies to the entire batch of calls together rather than to each call individually, which is inexplicable.

LangSmith shows the same total time for every runnable in the batch, which does not match the duration of each individual LLM call (see attachments). The problem does not occur when I use batch_as_completed.

[Attachment: Screenshot from 2024-09-18 09-43-44]
[Attachment: Screenshot from 2024-09-18 09-44-32]
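
For comparison, here is a sketch of the batch_as_completed variant that behaves as expected for me (the helper name and the `chain` parameter are mine for illustration; `chain` is built exactly as in the example code above):

def extract_content_as_completed(pages, chain):
    # batch_as_completed yields (index, output) pairs as each call finishes,
    # and the 60s timeout then appears to apply per call, not to the whole batch.
    # Results arrive out of order, so they are placed back at their input index.
    results = [None] * len(pages)
    for i, out in chain.batch_as_completed(pages, config={"max_concurrency": 4}):
        results[i] = out
    return results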

System Info

System Information

OS: Linux
OS Version: #24~22.04.1-Ubuntu SMP Thu Jul 18 10:43:12 UTC 2024
Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]

Package Information

langchain_core: 0.3.0
langchain: 0.3.0
langchain_community: 0.3.0
langsmith: 0.1.121
langchain_openai: 0.2.0
langchain_qdrant: 0.1.4
langchain_text_splitters: 0.3.0

Optional packages not installed

langgraph
langserve

Other Dependencies

aiohttp: 3.10.5
async-timeout: 4.0.3
dataclasses-json: 0.6.7
httpx: 0.27.2
jsonpatch: 1.33
numpy: 1.26.4
openai: 1.45.1
packaging: 24.1
pydantic: 2.9.1
pydantic-settings: 2.5.2
PyYAML: 6.0.2
qdrant-client: 1.11.2
requests: 2.32.3
tenacity: 8.5.0
tiktoken: 0.7.0
typing-extensions: 4.12.2

langcarl bot added the investigate label Sep 18, 2024
dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Sep 18, 2024
@SheeperGit

Hey, we're a group of UTSC students looking to work on this issue!
Would it be acceptable for our team of five to investigate?
