I want to run both llama2 (7B) and llama3 (8B) from this repo so I can easily compare them. I know I can use the old repo to run llama2, but the `sku_list.py` file makes it seem like it should be possible to run llama2 here too.
I have tried manually swapping in the llama2 tokenizer files, doing all of the following:

- set `CHECKPOINT_DIR` to a llama2 checkpoint
- set the `tokenizer.py` and `tokenizer.model` files to the llama2 versions (and added a `get_instance` method to the llama2 tokenizer file pointing to the correct `tokenizer.model` file)
- (HACK) set `model_args.vocab_size = tokenizer.n_words`
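For reference, this is roughly the shape of the `get_instance` method I added to the llama2 tokenizer class. The class body here is a minimal stand-in (the real class wraps SentencePiece and reads `n_words` from the model file), and the `TOKENIZER_MODEL` env var / default path is my own convention, not something from the repo:

```python
import os


class Tokenizer:
    """Stand-in for the llama2 Tokenizer, showing only the singleton
    get_instance pattern I added (the real __init__ loads SentencePiece)."""

    _instance = None

    def __init__(self, model_path: str):
        self.model_path = model_path
        # llama2's vocab size; the real class reads this from the model file
        self.n_words = 32000

    @classmethod
    def get_instance(cls):
        # lazily build a single shared tokenizer, mirroring what the
        # llama3 tokenizer's get_instance does
        if cls._instance is None:
            model_path = os.environ.get("TOKENIZER_MODEL", "tokenizer.model")
            cls._instance = cls(model_path)
        return cls._instance
```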
But it then fails with the error:
```
File "llama_models/llama3/api/chat_format.py", line 54, in __init__
    self.vision_token = self.tokenizer.special_tokens["<|image|>"]
AttributeError: 'Tokenizer' object has no attribute 'special_tokens'
```
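The crash happens because `ChatFormat` assumes the llama3 tokenizer's `special_tokens` dict, which the llama2 (SentencePiece) tokenizer doesn't have. One workaround I considered (my own assumption, not something from the repo) is a thin wrapper that exposes an empty `special_tokens` mapping and delegates everything else:

```python
class Llama2TokenizerShim:
    """Wrap a llama2 tokenizer so llama3-era code that indexes
    tokenizer.special_tokens does not crash. Hypothetical shim,
    not part of the repo."""

    def __init__(self, tokenizer):
        self._tok = tokenizer
        # llama2 has no <|image|> (or other llama3 special tokens);
        # expose an empty mapping so lookups can be guarded with .get()
        self.special_tokens = {}

    def __getattr__(self, name):
        # delegate everything else (encode, decode, n_words, ...) to
        # the wrapped llama2 tokenizer
        return getattr(self._tok, name)
```

With this, the lookup in `chat_format.py` would still need to be defensive, e.g. `self.tokenizer.special_tokens.get("<|image|>")` instead of indexing, so I'm not sure this is the intended path.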
What is the correct way to run llama2 here?