
Unable to start with arguments --model liuhaotian_llava-v1.5-13b --multimodal-pipeline llava-v1.5-13b #5036

Closed
szelok opened this issue Dec 21, 2023 · 14 comments
Labels
bug Something isn't working stale

Comments

szelok commented Dec 21, 2023

Describe the bug

python server.py --share --model liuhaotian_llava-v1.5-13b --multimodal-pipeline llava-v1.5-13b --load-in-4bit
18:26:56-770141 INFO     Starting Text generation web UI                                            
18:26:56-785664 INFO     Loading liuhaotian_llava-v1.5-13b                                          
18:26:56-856700 INFO     Using the following 4-bit params: {'load_in_4bit': True,                  
                         'bnb_4bit_compute_dtype': torch.float16, 'bnb_4bit_quant_type': 'nf4',    
                         'bnb_4bit_use_double_quant': False}                                        
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /content/text-generation-webui/server.py:240 in <module>                                         │
│                                                                                                  │
│   239         # Load the model                                                                   │
│ ❱ 240         shared.model, shared.tokenizer = load_model(model_name)                            │
│   241         if shared.args.lora:                                                               │
│                                                                                                  │
│ /content/text-generation-webui/modules/models.py:90 in load_model                                │
│                                                                                                  │
│    89     shared.args.loader = loader                                                            │
│ ❱  90     output = load_func_map[loader](model_name)                                             │
│    91     if type(output) is tuple:                                                              │
│                                                                                                  │
│ /content/text-generation-webui/modules/models.py:245 in huggingface_loader                       │
│                                                                                                  │
│   244                                                                                            │
│ ❱ 245         model = LoaderClass.from_pretrained(path_to_model, **params)                       │
│   246                                                                                            │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:569 in          │
│ from_pretrained                                                                                  │
│                                                                                                  │
│   568             )                                                                              │
│ ❱ 569         raise ValueError(                                                                  │
│   570             f"Unrecognized configuration class {config.__class__} for this kind of AutoM   │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Unrecognized configuration class <class
'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel:
AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig,
BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig,
CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig,
ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config,
GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig,
MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig,
MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig,
PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig,
RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig,
Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig,
XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
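The failure reduces to a registry lookup: `AutoModelForCausalLM` keeps a mapping from config classes to model classes, and `LlavaConfig` is not in that mapping. A minimal pure-Python sketch of the pattern (the classes and mapping below are illustrative stand-ins, not the real transformers internals):

```python
# Stand-in config and model classes, purely for illustration.
class LlamaConfig: ...
class LlavaConfig: ...

class LlamaForCausalLM:
    def __init__(self, config):
        self.config = config

# AutoModelForCausalLM conceptually holds a config-class -> model-class map.
CAUSAL_LM_MAPPING = {LlamaConfig: LlamaForCausalLM}

def auto_model_for_causal_lm(config):
    model_cls = CAUSAL_LM_MAPPING.get(type(config))
    if model_cls is None:
        # This is the branch the traceback above hits for LlavaConfig.
        raise ValueError(
            f"Unrecognized configuration class {type(config)} "
            f"for this kind of AutoModel"
        )
    return model_cls(config)
```

Any config class absent from the map produces exactly this `ValueError`, regardless of whether the checkpoint itself is valid.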

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

python server.py --share --model liuhaotian_llava-v1.5-13b --multimodal-pipeline llava-v1.5-13b --load-in-4bit

Screenshot

No response

Logs

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /content/text-generation-webui/server.py:240 in <module>                                         │
│                                                                                                  │
│   239         # Load the model                                                                   │
│ ❱ 240         shared.model, shared.tokenizer = load_model(model_name)                            │
│   241         if shared.args.lora:                                                               │
│                                                                                                  │
│ /content/text-generation-webui/modules/models.py:90 in load_model                                │
│                                                                                                  │
│    89     shared.args.loader = loader                                                            │
│ ❱  90     output = load_func_map[loader](model_name)                                             │
│    91     if type(output) is tuple:                                                              │
│                                                                                                  │
│ /content/text-generation-webui/modules/models.py:245 in huggingface_loader                       │
│                                                                                                  │
│   244                                                                                            │
│ ❱ 245         model = LoaderClass.from_pretrained(path_to_model, **params)                       │
│   246                                                                                            │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:569 in          │
│ from_pretrained                                                                                  │
│                                                                                                  │
│   568             )                                                                              │
│ ❱ 569         raise ValueError(                                                                  │
│   570             f"Unrecognized configuration class {config.__class__} for this kind of AutoM   │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Unrecognized configuration class <class 
'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: 
AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, 
BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, 
CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, 
ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config,

System Info

Google Colab notebook with T4 GPU
@szelok szelok added the bug Something isn't working label Dec 21, 2023
szelok added a commit to szelok/text-generation-webui that referenced this issue Dec 21, 2023

szelok commented Dec 21, 2023

Looks like it's related to this commit, which added native llava support in transformers:
huggingface/transformers@44b5506#diff-311e25086b9c91ebbaa8fdeff4fdad9c69e8c18645c2e21f8aa0a5e76f5bf64d

However, liuhaotian_llava-v1.5-13b seems to target transformers 4.31.0, not the new version.
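Mismatches like this could be caught before loading with a simple version gate. A hedged sketch (the exact-minor-match policy is this thread's assumption, not an official pin; `check_transformers_version` is a hypothetical helper):

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '4.31.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def check_transformers_version(installed: str, required: str = "4.31.0") -> bool:
    """Return True if the installed major.minor matches what the checkpoint
    was exported for. The LLaVA loading API changed between 4.31 and 4.36,
    so a coarse major.minor comparison is enough to flag the mismatch."""
    return parse_version(installed)[:2] == parse_version(required)[:2]
```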


szelok commented Dec 21, 2023

Submitted PR #5037.


szelok commented Dec 21, 2023

Closed the PR above. Submitted a new one against the dev branch: #5038.


jesulo commented Jan 24, 2024

Hi, is this error caused by the latest transformers version? ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM.

Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.

Is that because the model is in transformers 4.31.0?


szelok commented Jan 26, 2024

Yes. The model is meant for 4.31.0.

sonebu commented Feb 13, 2024

Some additional info in case it's useful before the PR gets merged: as per the dev branch @szelok mentioned (#5038), applying that branch's changes to modules/models.py on top of text-generation-webui commit 0f134bf744acef2715edd7d39e76f865d8d83a19 solved this problem on my end, without installing a different version of the transformers library (the version installed with this commit was 4.37.2). I could load the model "liuhaotian_llava-v1.5-7b" with the following flags and ask questions about images in the web UI:

--multimodal-pipeline llava-v1.5-7b --load-in-4bit --extensions llava

(I loaded the model on the interface, not via the --model flag)
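Conceptually, the change in #5038 routes LLaVA checkpoints to a loader that understands them rather than letting the generic causal-LM auto class reject `LlavaConfig`. A pure-Python sketch of that loader-dispatch pattern (the function bodies and the llava name check are illustrative assumptions; see the PR for the actual change):

```python
# Illustrative loader-dispatch pattern; the loader functions are
# hypothetical stand-ins, not the actual code in modules/models.py.

def huggingface_loader(model_name):
    return f"causal-lm model: {model_name}"

def multimodal_loader(model_name):
    return f"multimodal model: {model_name}"

load_func_map = {
    "Transformers": huggingface_loader,
    "Multimodal": multimodal_loader,
}

def load_model(model_name, loader="Transformers"):
    # Route LLaVA-style checkpoints to the multimodal loader instead of
    # letting AutoModelForCausalLM fail on LlavaConfig.
    if "llava" in model_name.lower():
        loader = "Multimodal"
    return load_func_map[loader](model_name)
```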

@nnethercott
> some additional info in case it's useful before the PR gets merged --> as per the development branch @szelok mentioned (#5038), applying the changes to the file modules/models.py on that branch to text-generation-webui commit ID 0f134bf744acef2715edd7d39e76f865d8d83a19 did solve this problem on my end without installing a different version of the transformers library (the one getting installed with this commit was 4.37.2 on my end), I could load the model named "liuhaotian_llava-v1.5-7b" with the following flags and ask questions about images on the webUI:
>
> --multimodal-pipeline llava-v1.5-7b --load-in-4bit --extensions llava
>
> (I loaded the model on the interface, not via the --model flag)

worked for me!


slyt commented Mar 30, 2024

> Hi, this error is for last transformers version? .. ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM.
>
> Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
>
> Is that because the model is in transformers 4.31.0?

I got this same error when trying to run with the latest commit in main (1a7c027), which installs transformers 4.39.2.

I tried to downgrade transformers to 4.31.0 with pip install --force-reinstall -v "transformers==4.31.0". I received some errors from pip:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
hqq 0.1.5 requires transformers>=4.36.1, but you have transformers 4.31.0 which is incompatible.
datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you have fsspec 2024.3.1 which is incompatible.
aqlm 1.1.2 requires transformers>=4.38.0, but you have transformers 4.31.0 which is incompatible.
autoawq 0.2.3 requires transformers>=4.35.0, but you have transformers 4.31.0 which is incompatible.

And then starting the UI with ./wsl.sh --multimodal-pipeline llava-v1.5-7b --extensions llava throws an error:

╭────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ───────────────────────────────────────────────────────────────────────────────────────╮
│ /home/admin/text-generation-webui/server.py:40 in <module>                                                                                                                                                     │
│                                                                                                                                                                                                                │
│    39 import modules.extensions as extensions_module                                                                                                                                                           │
│ ❱  40 from modules import (                                                                                                                                                                                    │
│    41     chat,                                                                                                                                                                                                │
│                                                                                                                                                                                                                │
│ /home/admin/text-generation-webui/modules/chat.py:21 in <module>                                                                                                                                               │
│                                                                                                                                                                                                                │
│    20 from modules.logging_colors import logger                                                                                                                                                                │
│ ❱  21 from modules.text_generation import (                                                                                                                                                                    │
│    22     generate_reply,                                                                                                                                                                                      │
│                                                                                                                                                                                                                │
│ /home/admin/text-generation-webui/modules/text_generation.py:13 in <module>                                                                                                                                    │
│                                                                                                                                                                                                                │
│    12 import transformers                                                                                                                                                                                      │
│ ❱  13 from transformers import LogitsProcessorList, is_torch_xpu_available                                                                                                                                     │
│    14                                                                                                                                                                                                          │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'is_torch_xpu_available' from 'transformers' (/home/admin/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/__init__.py)

So it seems like multimodal is broken due to underlying transformers version conflicts.
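For API drift like the missing `is_torch_xpu_available` export, one common mitigation is a guarded import with a conservative fallback. A sketch, not the project's actual code (the stub returning False is an assumption that only holds when no Intel XPU is present):

```python
# Guarded import: older transformers releases (e.g. 4.31.0) do not export
# is_torch_xpu_available, so fall back to a stub that reports no XPU.
try:
    from transformers import is_torch_xpu_available
except ImportError:
    def is_torch_xpu_available() -> bool:
        # Conservative assumption: no Intel XPU device available.
        return False
```

This would let the downgraded install start at the cost of silently disabling XPU support, which is why pinning compatible versions is still the cleaner fix.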


Victorivus commented Apr 8, 2024

@slyt Check PR #5038; there is a way to solve this by modifying modules/models.py. It will probably be fixed soon.


github-actions bot commented Jun 7, 2024

This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

@github-actions github-actions bot closed this as completed Jun 7, 2024
@11301858

Still facing this problem as of June 19. Any updates?

randoentity pushed a commit to randoentity/text-generation-webui that referenced this issue Jul 27, 2024
@JustinKai0527

Still facing this error; how do I fix it?


BoomSky0416 commented Aug 15, 2024

@JustinKai0527 add --visual_inputs True

@FoxMcloud5655

Apply the changes from the PR that @Victorivus mentioned to fix this issue.

You simply need to modify the modules/models.py file to get this to work.
