max_length of nlp pipeline for e.g. Japanese #13207
Comments
On the other hand, none of the components in a core pipeline benefit from very long contexts (typically a section, a page, or even a paragraph is sufficient), so splitting up texts is often the best way to go anyway. Very long texts can use a lot of RAM, especially for components like the parser.

This limit for Japanese is completely separate from nlp.max_length. SudachiPy's error message seems fine (much better than an OOM message with a confusing traceback from the middle of the parser), so I don't know if it makes sense for us to add another check in the spaCy Japanese tokenizer, which might then get out of sync with the upstream sudachipy constraints in the future. But you're right that this should be documented. We'll look at adding this to the documentation!
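Splitting a long text on paragraph boundaries before running the pipeline, as suggested above, can be quite simple. A minimal sketch (the helper name and the blank-line delimiter are my own assumptions, not spaCy API):

```python
def split_paragraphs(text: str) -> list[str]:
    """Split on blank lines so each chunk stays well under the
    tokenizer's input limit."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]

# The chunks can then be fed to an existing pipeline, e.g.:
#   docs = list(nlp.pipe(split_paragraphs(long_text)))
```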
Thanks for the explanation, that helped clear up the confusion on my end, and I know how to proceed for my use case. In case anyone ever stumbles upon this, here is the code I went with for byte splitting (though it probably still has a lot of optimization potential):

def __utf8len(text: str) -> int:
    return len(text.encode("utf-8"))

# Splits not after x bytes but ensures that at most x bytes are used
# without destroying the final character.
def __chunk_text_on_bytes(text: str, max_chunk_size: int = 1_000_000):
    factor = len(text) / __utf8len(text)
    increase_by = int(max(min(max_chunk_size * 0.1, 10), 1))
    initial_size_guess = int(max(max_chunk_size * factor - 10, 1))
    final_list = []
    remaining = text
    while len(remaining):
        part = remaining[:initial_size_guess]
        if __utf8len(part) > max_chunk_size:
            initial_size_guess = int(max(initial_size_guess - min(max_chunk_size * 0.001, 10), 1))
            continue
        cut_after = initial_size_guess
        while __utf8len(part) < max_chunk_size and part != remaining:
            cut_after = min(len(remaining), cut_after + increase_by)
            part = remaining[:cut_after]
        if __utf8len(part) > max_chunk_size:
            cut_after -= increase_by
        final_list.append(remaining[:cut_after])
        remaining = remaining[cut_after:]
    return final_list
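For comparison, here is a shorter self-contained variant of the same byte-capped splitting idea (independent helper names, simplified one-character backoff; it assumes max_bytes is at least 4 so that any single UTF-8 character fits in a chunk):

```python
def utf8_len(s: str) -> int:
    return len(s.encode("utf-8"))

def chunk_on_bytes(text: str, max_bytes: int) -> list[str]:
    """Cut text into chunks of at most max_bytes UTF-8 bytes,
    never splitting inside a character."""
    chunks, start = [], 0
    while start < len(text):
        # A slice of max_bytes characters can only be too large, never
        # too small, so shrink from there until it fits.
        end = min(len(text), start + max_bytes)
        while utf8_len(text[start:end]) > max_bytes:
            end -= 1
        chunks.append(text[start:end])
        start = end
    return chunks
```

Each chunk stays within the byte budget, and joining the chunks reproduces the original text exactly.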
Thanks for the suggestion! I think that this description is slightly confusing for users, since
It works if you replace the SudachiPy version and use the older SudachiPy==0.5.4. I tested it, and it worked in my case.
Not sure if this is meant to happen or a misunderstanding on my part. I'm assuming a misunderstanding, so I'm going for a Documentation Report.
The Language (nlp) class has a max_length parameter that seems to work differently for e.g. Japanese. I'm currently trying to chunk texts that are too long by considering max_length and splitting based on that. For e.g. English texts this seems to work without any issues.
Basic approach code:
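The original snippet isn't included in this dump; a minimal sketch of what such character-based chunking against max_length might look like (the helper name is mine, not from the report):

```python
def chunk_by_chars(text: str, max_length: int = 1_000_000) -> list[str]:
    # Naive split every max_length characters, matching how
    # nlp.max_length is compared against len(text) for most languages.
    return [text[i:i + max_length] for i in range(0, len(text), max_length)]
```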
However, for the config string ja_core_news_sm this doesn't work. After a bit of analysis I noticed that not the character length but the byte count needs to be considered.
However, even with the byte approach I run into an error that looks like it's max_length related, but maybe not really?
Slightly reduced error trace:
I also double-checked the values for max_length (1000000), string length (63876) & byte length (63960).
Setting max_length by hand to 1100000 didn't change the error message, so I'm assuming something else (maybe Sudachi itself?) defines the "Input is too long" error message.
A documentation note on what the actual issue is and how to work around it (for lookup size limits) would be great.
Which page or section is this issue related to?
Not sure where to add this, since I'm not sure if it's directly Japanese-related. However, a note might be interesting at https://spacy.io/models/ja or https://spacy.io/usage/models#japanese.
Further, the note for max_length in general might need extension (if I assumed correctly, maybe something like: the relevant length isn't the classic Python len(<string>) but the byte size, e.g. the letter "I" - len 1 - 1 byte & the kanji "私" - len 1 - 3 bytes).
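The distinction described in that note can be demonstrated directly:

```python
s_ascii, s_kanji = "I", "私"
# Both are a single character for len() ...
assert len(s_ascii) == len(s_kanji) == 1
# ... but the kanji occupies three bytes in UTF-8.
assert len(s_ascii.encode("utf-8")) == 1
assert len(s_kanji.encode("utf-8")) == 3
```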