
Fix chunked_data requests in urequests.py #670

Merged
merged 1 commit into from
Oct 4, 2023

Conversation

bwhitman
Contributor

Chunked detection does not work on current MicroPython, because MicroPython never adds an __iter__ attribute to a generator. It does add __next__, so the existing test in urequests for chunked detection did not work. I've changed the test to __next__. I can post a failing example in the PR.

@bwhitman
Contributor Author

Example:

def read_in_chunks(file_object, chunk_size=4096):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

file = open(filename, "rb")
r = urequests.post(url, data=read_in_chunks(file))

On current urequests this fails: chunked_data is not detected, because the generator returned by read_in_chunks() does not have an __iter__ attribute. Changing the test to __next__ makes the example work.
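The fix itself is a one-attribute change. Here's a minimal, standalone sketch of the detection logic (the predicate name is illustrative; the actual urequests code checks the attribute inline):

```python
def looks_like_generator(data):
    # MicroPython omits __iter__ from generator objects to save code
    # size, but both MicroPython and CPython expose __next__, so
    # probing __next__ detects generators on either implementation.
    return hasattr(data, "__next__")

def gen():
    yield b"chunk"

# A generator body is detected on both ports; a plain bytes body is not.
print(looks_like_generator(gen()))    # True
print(looks_like_generator(b"body"))  # False
```

Note that on CPython, checking __iter__ would also return True for a generator, which is why the bug only shows up on MicroPython.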

@bwhitman bwhitman changed the title Update urequests.py Fix chunked_data requests in urequests.py May 30, 2023
@andrewleech
Contributor

andrewleech commented May 30, 2023

Good pickup! I'm not sure whether the original library author tested this chunked data handling in a different way, or perhaps just copied CPython code without testing... either way, it's true that __iter__ isn't exposed on common MicroPython generators, whether from yielding functions or generator expressions:

$ micropython
MicroPython v1.19.1-1014-gbde222ce8 on 2023-04-14; linux [GCC 9.4.0] version
Use Ctrl-D to exit, Ctrl-E for paste mode
>>> def gen():
...     yield 'hi'
...     yield 'there'
...
>>> g = gen()
>>> dir(g)
['__class__', '__next__', 'close', 'send', 'throw', 'pend_throw']
>>>
>>> dir((a for a in [1, 2]))
['__class__', '__next__', 'close', 'send', 'throw', 'pend_throw']
>>>

vs CPython:

$ python
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
>>> def gen():
...     yield 'hi'
...     yield 'there'
...
>>> g = gen()
>>> dir(g)
['__class__', '__del__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__name__', '__ne__', '__new__', '__next__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'close', 'gi_code', 'gi_frame', 'gi_running', 'gi_yieldfrom', 'send', 'throw']
>>>
>>> dir((a for a in [1, 2]))
['__class__', '__del__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__name__', '__ne__', '__new__', '__next__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'close', 'gi_code', 'gi_frame', 'gi_running', 'gi_yieldfrom', 'send', 'throw']

It's very common for MicroPython not to expose "little-used" attributes on objects, as every attribute exposed comes at a notable code-size cost.

Your fix looks appropriate to me: it adds no extra code size and meets the needs of your example, which also matches the official requests example of this functionality: https://requests.readthedocs.io/en/latest/user/advanced/#chunk-encoded-requests

I'd also support adding your example above as a new example_chunked_post.py file; this sort of iterative/chunked handling is very valuable in MicroPython for limiting buffer size and therefore RAM use!
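For context on what the detected generator feeds into: HTTP chunked transfer encoding frames each yielded chunk with a hexadecimal length line and terminates the body with a zero-length chunk (RFC 9112, section 7.1). A standalone sketch of that framing, not urequests' actual code:

```python
def encode_chunked(chunks):
    # Frame each chunk per HTTP/1.1 chunked transfer encoding:
    # hex length, CRLF, data, CRLF.
    for chunk in chunks:
        if isinstance(chunk, str):
            chunk = chunk.encode()
        yield b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    # A zero-length chunk marks the end of the body.
    yield b"0\r\n\r\n"

framed = b"".join(encode_chunked([b"hello", b"world!"]))
print(framed)  # b'5\r\nhello\r\n6\r\nworld!\r\n0\r\n\r\n'
```

Because each chunk is framed and sent as it is produced, the full body never needs to be buffered in RAM, which is exactly why generator bodies matter on constrained MicroPython targets.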

@andrewleech andrewleech left a comment
LGTM

Chunked detection does not work as generators never have an `__iter__`
attribute.  They do have `__next__`.

Example that now works with this commit:

    def read_in_chunks(file_object, chunk_size=4096):
        while True:
            data = file_object.read(chunk_size)
            if not data:
                break
            yield data

    file = open(filename, "rb")
    r = requests.post(url, data=read_in_chunks(file))
@dpgeorge dpgeorge merged commit e025c84 into micropython:master Oct 4, 2023
3 checks passed
@dpgeorge
Member

dpgeorge commented Oct 4, 2023

Thanks for the contribution!
