Fix chunked_data requests in urequests.py #670
Conversation
Good pickup! I'm not sure whether the original library author tested this chunked data handling in a different way, or perhaps just copied CPython code without testing... either way, it's true that MicroPython generators don't expose `__iter__`, whereas CPython generators do.
It's very common for MicroPython not to expose "little-used" attributes on objects, as every exposed attribute comes with a notable code size cost. Your fix looks fairly appropriate to me: it adds no extra code size and meets the needs of your example, which also matches the official requests example of this functionality: https://requests.readthedocs.io/en/latest/user/advanced/#chunk-encoded-requests I'd also support adding your example above as a new example file.
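As a rough illustration of why testing for `__next__` is the portable check (a sketch for clarity, not the library's actual code): on CPython a generator exposes both `__iter__` and `__next__`, so a `__next__` check also holds on MicroPython, which omits `__iter__` per this discussion.

```python
def byte_chunks():
    # A minimal generator like the one passed as `data=` in the PR example.
    yield b"hello"
    yield b"world"

gen = byte_chunks()

# `__next__` is part of the iterator protocol on both CPython and
# (per this discussion) MicroPython, so it is the safer detection test.
print(hasattr(gen, "__next__"))  # True
```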
LGTM
Chunked detection does not work, as generators never have an `__iter__` attribute. They do have `__next__`. Example that now works with this commit:

```python
def read_in_chunks(file_object, chunk_size=4096):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

file = open(filename, "rb")
r = requests.post(url, data=read_in_chunks(file))
```
Thanks for the contribution!
Chunked detection does not work on current MicroPython, as it never adds an `__iter__` attribute to a generator. It does add `__next__` to them, so this test in `urequests` for chunked detection did not work. I've changed it to `__next__`. I can post a failing example in the PR.
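For context, here is a hypothetical sketch of how a chunked request body might be sent once a generator is detected via `__next__`. The function name `send_body` and its shape are illustrative assumptions, not the actual `urequests` implementation; only the detection test and the HTTP chunked framing follow the discussion above.

```python
def send_body(write, data):
    # Detect a generator/iterator by __next__ (present on MicroPython
    # generators, unlike __iter__) rather than by __iter__.
    if hasattr(data, "__next__"):
        for chunk in data:
            # HTTP chunked transfer-encoding: hex length, CRLF, data, CRLF.
            write(b"%x\r\n" % len(chunk) + chunk + b"\r\n")
        write(b"0\r\n\r\n")  # terminating zero-length chunk
    else:
        write(data)  # plain bytes body, sent as-is
```

With a generator body, each yielded chunk is framed with its hex length; a plain `bytes` body is written unchanged.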