requests: Fix detection of iterators in chunked data requests.
Chunked detection does not work because, under MicroPython, generator objects do not
expose an `__iter__` attribute.  They do have `__next__`.

Example that now works with this commit:

    def read_in_chunks(file_object, chunk_size=4096):
        while True:
            data = file_object.read(chunk_size)
            if not data:
                break
            yield data

    file = open(filename, "rb")
    r = requests.post(url, data=read_in_chunks(file))
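
As a minimal sketch (not part of the commit), the difference between the old and the
new check can be shown directly; the assumption here is that, under MicroPython,
getattr() on a generator finds `__next__` but may not find `__iter__`:

    # Hedged illustration of the detection change; `gen` stands in for a
    # generator such as read_in_chunks() above.
    def gen():
        yield b"chunk"

    g = gen()

    # Old check: depends on __iter__, which MicroPython generators may not expose.
    old_chunked = g and getattr(g, "__iter__", None) and not getattr(g, "__len__", None)

    # New check: depends on __next__, which generators always provide.
    new_chunked = g and getattr(g, "__next__", None) and not getattr(g, "__len__", None)

    # A plain bytes payload has __len__, so neither check treats it as chunked.
    body = b"payload"
    assert getattr(body, "__len__", None)
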
bwhitman authored and dpgeorge committed Oct 4, 2023
1 parent 46748d2 commit e025c84
Showing 2 changed files with 2 additions and 2 deletions.
python-ecosys/requests/manifest.py (1 addition, 1 deletion)
@@ -1,3 +1,3 @@
-metadata(version="0.8.0", pypi="requests")
+metadata(version="0.8.1", pypi="requests")
 
 package("requests")
python-ecosys/requests/requests/__init__.py (1 addition, 1 deletion)
@@ -45,7 +45,7 @@ def request(
     parse_headers=True,
 ):
     redirect = None  # redirection url, None means no redirection
-    chunked_data = data and getattr(data, "__iter__", None) and not getattr(data, "__len__", None)
+    chunked_data = data and getattr(data, "__next__", None) and not getattr(data, "__len__", None)
 
     if auth is not None:
         import ubinascii
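
The diff only changes how chunked data is detected; the code that actually streams
the generator sits outside this hunk.  As background, a minimal sketch of the chunked
transfer-encoding framing a client applies to each piece the generator yields (the
`sock_write` helper is hypothetical, not the library's real API):

    def send_chunked(sock_write, data_iter):
        # Each chunk is framed as: <hex length>\r\n<data>\r\n
        for chunk in data_iter:
            if chunk:
                sock_write("{:x}\r\n".format(len(chunk)).encode())
                sock_write(chunk)
                sock_write(b"\r\n")
        # A zero-length chunk terminates the body.
        sock_write(b"0\r\n\r\n")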
