Unable to start with arguments --model liuhaotian_llava-v1.5-13b --multimodal-pipeline llava-v1.5-13b #5036
Comments
Looks like it's related to this commit, which added native llava support in transformers. However, liuhaotian_llava-v1.5-13b seems to target transformers 4.31.0, not the new version.
Submitted a PR: #5037
Closed the PR above. Submitted a new one to the dev branch: #5038
Hi, does this error come from the latest transformers version? ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM. Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig. Is that because the model targets transformers 4.31.0?
Yes. The model is meant for 4.31.0.
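Given the answer above, one possible workaround (my own suggestion, not an official fix from this thread) is to pin transformers to the version the original LLaVA checkpoints were built against:

```shell
# Hypothetical workaround: pin transformers to 4.31.0, the version the
# liuhaotian LLaVA checkpoints target (per the comment above).
# Run inside the webui's Python environment. Caveat: newer webui commits
# may require a newer transformers, so other loaders could break.
pip install "transformers==4.31.0"
```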
Some additional info in case it's useful before the PR gets merged: as per the development branch @szelok mentioned (#5038), applying that branch's changes to modules/models.py on top of text-generation-webui commit 0f134bf744acef2715edd7d39e76f865d8d83a19 solved this problem on my end without installing a different version of the transformers library (the one installed with this commit was 4.37.2 for me). I could load the model "liuhaotian_llava-v1.5-7b" with the following flags and ask questions about images in the web UI:
(I loaded the model through the interface, not via the --model flag)
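For context, the core idea behind the dev-branch change appears to be that AutoModelForCausalLM refuses LlavaConfig, so the loader has to branch on the model type and use the native LLaVA class that newer transformers versions ship. A minimal sketch of that dispatch follows; the function and its string return values are illustrative only, not the actual webui code:

```python
# Sketch of the loader-selection idea behind the dev-branch fix (#5038).
# Newer transformers releases include LlavaForConditionalGeneration, and
# AutoModelForCausalLM raises ValueError for LlavaConfig, so the loader
# must pick a different class for llava-type models. The class names
# mirror transformers; the function itself is a hypothetical example.

def pick_loader_class(model_type: str) -> str:
    """Return the name of the transformers class to load a model with."""
    if model_type == "llava":
        # AutoModelForCausalLM cannot handle LlavaConfig
        return "LlavaForConditionalGeneration"
    return "AutoModelForCausalLM"

print(pick_loader_class("llava"))  # LlavaForConditionalGeneration
print(pick_loader_class("llama"))  # AutoModelForCausalLM
```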
Worked for me!
^ I got this same error when trying to run with the latest commit in main (1a7c027), which installs a newer transformers. I tried to downgrade transformers to
And then starting the UI with
So it seems like multimodal is broken due to underlying transformers version conflicts.
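When debugging a version conflict like the one described above, it can help to first confirm which transformers version is actually installed in the environment the webui runs in (a generic check, not a command from this thread):

```shell
# Print the transformers version visible to the webui's Python interpreter
python -c "import transformers; print(transformers.__version__)"
```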
This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
Still facing this problem as of June 19. Any updates?
Still facing this error. How do I fix it?
@JustinKai0527 add
Apply the fixes from the PR that @Victorivus mentioned. You simply need to modify the
Describe the bug
python server.py --share --model liuhaotian_llava-v1.5-13b --multimodal-pipeline llava-v1.5-13b --load-in-4bit
18:26:56-770141 INFO Starting Text generation web UI
18:26:56-785664 INFO Loading liuhaotian_llava-v1.5-13b
18:26:56-856700 INFO Using the following 4-bit params: {'load_in_4bit': True,
'bnb_4bit_compute_dtype': torch.float16, 'bnb_4bit_quant_type': 'nf4',
'bnb_4bit_use_double_quant': False}
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /content/text-generation-webui/server.py:240 in │
│ │
│ 239 # Load the model │
│ ❱ 240 shared.model, shared.tokenizer = load_model(model_name) │
│ 241 if shared.args.lora: │
│ │
│ /content/text-generation-webui/modules/models.py:90 in load_model │
│ │
│ 89 shared.args.loader = loader │
│ ❱ 90 output = load_func_maploader │
│ 91 if type(output) is tuple: │
│ │
│ /content/text-generation-webui/modules/models.py:245 in huggingface_loader │
│ │
│ 244 │
│ ❱ 245 model = LoaderClass.from_pretrained(path_to_model, **params) │
│ 246 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:569 in │
│ from_pretrained │
│ │
│ 568 ) │
│ ❱ 569 raise ValueError( │
│ 570 f"Unrecognized configuration class {config.class} for this kind of AutoM │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Unrecognized configuration class <class
'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel:
AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig,
BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig,
CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig,
ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config,
GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig,
MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig,
MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig,
PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig,
RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig,
Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig,
XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
Is there an existing issue for this?
Reproduction
python server.py --share --model liuhaotian_llava-v1.5-13b --multimodal-pipeline llava-v1.5-13b --load-in-4bit
Screenshot
No response
Logs
System Info