Bug report
Bug description:
Code which contains line breaks is not round-trip invariant:
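A hypothetical reproduction (the issue's original snippet is not shown here, so this example is an assumption): a backslash line continuation with a space before the backslash, tokenized and then untokenized.

```python
import io
import tokenize

# Hypothetical input: a line continuation where a space precedes the backslash.
source = "x = 1 + \\\n    2\n"

tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
result = tokenize.untokenize(tokens)

print(repr(source))
print(repr(result))
# On affected versions, result is 'x = 1 +\\\n    2\n': the space before
# the backslash has been dropped, so the round trip is not exact.
```

The untokenized result is still valid Python with the same meaning, but it no longer matches the input byte for byte.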
Notice that the space between `+` and `\` is now missing. The current tokenizer code simply inserts a backslash when it encounters two subsequent tokens with a differing row offset:

cpython/Lib/tokenize.py
Lines 179 to 182 in 9c2bb7d
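For context, the cited logic can be sketched as a standalone function (a paraphrase of `Untokenizer.add_whitespace`; the exact contents at commit 9c2bb7d may differ):

```python
# Simplified paraphrase of the cited Untokenizer.add_whitespace logic.
def add_whitespace_sketch(prev_row, prev_col, start):
    """Return the text inserted between the previous token's end and the
    next token's start position (sketch, not the actual CPython code)."""
    row, col = start
    out = []
    row_offset = row - prev_row
    if row_offset:
        # One bare backslash-newline per skipped row is appended, with no
        # regard for whitespace that preceded the backslash in the source.
        out.append("\\\n" * row_offset)
        prev_col = 0
    col_offset = col - prev_col
    if col_offset:
        out.append(" " * col_offset)
    return "".join(out)

# Between '+' ending at (1, 7) and '2' starting at (2, 4),
# only '\\\n    ' is emitted -- the space before the backslash is gone.
print(repr(add_whitespace_sketch(1, 7, (2, 4))))
```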
I think this should be fixed. The docstring of `tokenize.untokenize` says:

> Round-trip invariant for full input: Untokenized source will match input source exactly

To fix this, it will probably be necessary to inspect the raw line contents and count how much whitespace there is at the end of the line.
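The proposed approach could be sketched like this (a hypothetical helper, not the actual patch; it assumes the raw line is available, e.g. via the token's `line` attribute):

```python
def continuation_for(line):
    """Return the line-continuation text to emit, preserving whatever
    whitespace preceded the trailing backslash in the raw source line.
    Hypothetical sketch of the proposed fix, not CPython code."""
    stripped = line.rstrip("\r\n")
    if stripped.endswith("\\"):
        body = stripped[:-1]
        # Count trailing whitespace before the backslash and keep it.
        trailing_ws = body[len(body.rstrip()):]
        return trailing_ws + "\\\n"
    # No explicit continuation in the raw line: fall back to the old behavior.
    return "\\\n"

print(repr(continuation_for("x = 1 + \\\n")))  # keeps the space: ' \\\n'
```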
CPython versions tested on:
CPython main branch
Operating systems tested on:
Linux