logit-bias: Any dictionary of tokens available? #1600
Replies: 4 comments
-
If you specify `--verbose-prompt`, `main` will print the tokens your prompt was split into, so you can put the words you're interested in into the prompt to see their ids. Or you could look for a PyTorch format LLaMA model on HuggingFace or similar sites and check the tokenizer's vocabulary files.
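(Not part of the original reply, but a minimal sketch of the second route: with the Hugging Face `transformers` library you can dump the whole vocabulary and search it. The model name is only an illustrative assumption; substitute any LLaMA-family repo you have access to.)

```python
from transformers import AutoTokenizer

# "huggyllama/llama-7b" is an assumed, illustrative model name.
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# get_vocab() returns the full {token_string: token_id} dictionary.
vocab = tokenizer.get_vocab()
print(len(vocab), "tokens in the vocabulary")

# Search it for tokens containing a substring:
matches = {tok: tid for tok, tid in vocab.items() if "hello" in tok.lower()}
print(matches)
```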
-
Ah, great, the prompting method sounds like a nice way to ask for tokens. Will the tokens be different for each model?
-
I believe that's generally the case. I think there are some models that add an extra token at the end or something like that. These are usually special tokens like start of document, end of document, etc. Not really stuff that you'd care about manipulating via logit bias anyway.
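(A hedged illustration of this point, with model names chosen only as examples: the same word maps to different ids under different tokenizers, so a logit-bias entry is only valid for the model it was looked up against.)

```python
from transformers import AutoTokenizer

# Illustrative model choices; any two unrelated tokenizers show the effect.
for name in ("huggyllama/llama-7b", "gpt2"):
    tok = AutoTokenizer.from_pretrained(name)
    print(name, tok.encode("hello", add_special_tokens=False))
```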
-
You have to be a little careful, because capitalization, whitespace, punctuation, etc. around your words can result in different tokens. In fact, llama.cpp adds a space in front of every prompt because most tokens also start with a space.
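(A quick sketch of that pitfall, using the same assumed model as above: leading whitespace, capitalization, and trailing punctuation all change the resulting ids, so look up exactly the form you want to bias.)

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # assumed model
for text in ("hello", " hello", "Hello", "hello."):
    # add_special_tokens=False keeps BOS/EOS out of the output
    print(repr(text), tok.encode(text, add_special_tokens=False))
```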
-
In `main` we have this parameter `-l` or `--logit-bias` which can be used to change the probability of certain tokens. Is there any way to see a dictionary of the used tokens, and maybe also to search them?
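(The thread is about the C++ `main` binary, but as a hedged aside: the `llama-cpp-python` bindings expose the same tokenizer, so you can look up an id there and then pass it to `--logit-bias`. The model path and bias value below are hypothetical.)

```python
from llama_cpp import Llama

llm = Llama(model_path="models/7B/ggml-model.bin")  # hypothetical path

# Note the leading space, per the reply above about llama.cpp prepending one.
ids = llm.tokenize(b" hello", add_bos=False)
print(ids)  # a returned id can then be passed to main, e.g. --logit-bias <id>-1
```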