
Error importing AI model #6587

Open
1 task done
Barsik5436 opened this issue Dec 19, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@Barsik5436

Describe the bug

Importing an AI model via the web interface fails with an error from the bitsandbytes library, followed by a CUDA error; see the logs below.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

Cloned the repository and launched start_windows.bat, selecting CPU during setup. After installation completed, I launched the same batch file again, installed the model https://huggingface.co/unsloth/Reflection-Llama-3.1-70B through the Models tab in the web interface, and pressed the Load button. I also tried to fix the problem myself by installing bitsandbytes, bitsandbytes-cuda117, and bitsandbytes-windows, following ChatGPT's recommendations.
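For context on why the load fails: the traceback below shows that this model repo ships a bitsandbytes 4-bit quantization config, and the bitsandbytes CUDA backend requires an NVIDIA GPU, which this machine (CPU install, Intel UHD graphics) does not have. A minimal pre-flight check, as a hypothetical sketch (the function name `can_use_bnb_4bit` is illustrative, not part of text-generation-webui), could look like:

```python
# Hypothetical pre-flight check before enabling 4-bit (bitsandbytes) loading.
# bitsandbytes' default backend needs a CUDA-capable NVIDIA GPU, so verify
# that torch is installed and reports a usable CUDA device first.
import importlib.util


def can_use_bnb_4bit() -> bool:
    """Return True only if torch is installed and a CUDA device is available."""
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed at all
    import torch
    return torch.cuda.is_available()


if __name__ == "__main__":
    if can_use_bnb_4bit():
        print("CUDA available: 4-bit (bitsandbytes) loading should work")
    else:
        print("No CUDA device: this model's 4-bit quantization config cannot be used")
```

On a CPU-only machine like the one in this report, the check returns False, which matches the RuntimeError in the logs.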

Screenshot

(two screenshots attached in the original issue)

Logs

Traceback (most recent call last):
  File "C:\text-generation-webui-main\modules\ui_model_menu.py", line 232, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\text-generation-webui-main\modules\models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\text-generation-webui-main\modules\models.py", line 172, in huggingface_loader
    model = LoaderClass.from_pretrained(path_to_model, **params)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\modeling_utils.py", line 3669, in from_pretrained
    hf_quantizer.validate_environment(
  File "C:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\quantizers\quantizer_bnb_4bit.py", line 82, in validate_environment
    validate_bnb_backend_availability(raise_exception=True)
  File "C:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\integrations\bitsandbytes.py", line 558, in validate_bnb_backend_availability
    return _validate_bnb_cuda_backend_availability(raise_exception)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\integrations\bitsandbytes.py", line 536, in _validate_bnb_cuda_backend_availability
    raise RuntimeError(log_msg)
RuntimeError: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend

System Info

Windows 11
CPU: Intel(R) Core(TM) i3-1315U
GPU: Intel(R) UHD Graphics
@Barsik5436 Barsik5436 added the bug Something isn't working label Dec 19, 2024