What could be the reason for ValueError: could not broadcast input array from shape (32000,) into shape (0,) during token generation? #681
-
I'm facing this issue too. In my case the exception happens when the same model is used to generate text multiple times without reloading. It looks like its context grows out of control. Are there any methods to clear the context?
-
Yes, this error comes up when the model tries to generate but the context window is already full; you are expected to handle the context limit yourself. For example, you could tokenize the prompt/history to check its size and remove older messages when needed, as in the sketch below (not a great solution, but it works; I think ooba does this).
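
A minimal sketch of that trimming approach, assuming the llama-cpp-python bindings this discussion belongs to. The model filename and the `format_prompt()` helper are hypothetical placeholders; real chat models expect their own prompt template.

```python
# Sketch: drop the oldest chat messages until the prompt fits the context
# window, assuming llama-cpp-python. Model path and format_prompt() are
# placeholders for illustration only.
from llama_cpp import Llama

llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096)

def format_prompt(messages):
    # Naive rendering for illustration; use your model's chat template.
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

def trim_history(messages, max_new_tokens=256):
    # Evict the oldest messages until prompt tokens plus the planned
    # generation budget fit inside the model's context window.
    while len(messages) > 1:
        n_prompt = len(llm.tokenize(format_prompt(messages).encode("utf-8")))
        if n_prompt + max_new_tokens <= llm.n_ctx():
            break
        messages.pop(0)  # discard the oldest message first
    return messages

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
history = trim_history(history)
result = llm(format_prompt(history), max_tokens=256)
print(result["choices"][0]["text"])
```

A smarter variant would pin the system message and only evict user/assistant turns. And if the goal is just to clear state between independent runs (the question in the comment above), llama-cpp-python also exposes a Llama.reset() method, if I remember right.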
-
Hello. I am occasionally getting this error with a 13B llama2-chat model during token generation:

ValueError: could not broadcast input array from shape (32000,) into shape (0,)

What could be the reason for it? This started happening recently.