Removal of noavx2 instructions, no longer possible? #5955

Closed
1 task done
Xyem opened this issue Apr 29, 2024 · 8 comments
Labels
bug Something isn't working

Comments

Xyem commented Apr 29, 2024

Describe the bug

I've previously been using oobabooga on this machine and just came to update it, only to find that the noavx2 requirement files have been removed. It is not clear whether other requirement files can be substituted, or whether using oobabooga on this CPU is no longer possible (which would be unfortunate, as the machine is fitted with an RTX 2060 and 128GB of RAM for loading big models).

Can you advise/restore the instructions?

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

N/A

Screenshot

No response

Logs

N/A

System Info

Arch Linux
Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz
NVIDIA Corporation TU106 [GeForce RTX 2060 12GB]
Xyem added the bug (Something isn't working) label on Apr 29, 2024
dgdguk commented Apr 29, 2024

That's by design, so not a bug - the same commit also removed support for AMD/Intel GPUs (see #5921).

You can go back to the last supported version with the following commands (I think, anyway):

git checkout 26d822f
./cmd_linux.sh
python -c "import one_click; one_click.update_requirements(pull=False)"  # Update dependencies without updating to broken version

Xyem commented Apr 29, 2024

I don't understand why it was removed; there are wheels for AVX-only builds, as far as I can tell. The following seems to work fine (model loads, 40 tokens/s, sane output):

python -m pip install llama-cpp-python --prefer-binary --upgrade --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX/cu122 --force-reinstall
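
For anyone unsure which AVX variants their CPU supports, one way to check on Linux (an aside, assuming /proc/cpuinfo is readable; not from the original comment):

# Print each distinct AVX-family flag the CPU advertises
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u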

Xyem commented Apr 29, 2024

Frustrating... I wanted to update to use multimodal, and instead I've just hit this issue in my fresh installation: #5036

ShaunCassidyPoster commented
I was just about to update the webui when I saw this issue. I have an older i7 930 processor that I don't think has AVX instructions. Does this change prevent me from updating ever again?

Xyem commented May 1, 2024

> I was just about to update the webui when I saw this issue. I have an older i7 930 processor that I don't think has AVX instructions. Does this change prevent me from updating ever again?

Sorry, I meant to respond to you yesterday. I just did a fresh installation using requirements.txt and then force-installed llama-cpp-python with the command in my previous comment to get a working installation. I believe there is a "basic" wheel that doesn't use AVX instructions at all, which might be suitable for you (i.e. https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cu122/llama-cpp-python/).

Hope this helps!
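
Putting the two steps together (a sketch of the workaround described above; the cu122 index comes from the earlier comment and assumes a matching CUDA 12.x runtime):

# Fresh install of the webui requirements, then force the no-AVX cuBLAS wheel
pip install -r requirements.txt
python -m pip install llama-cpp-python --prefer-binary --upgrade --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cu122 --force-reinstall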

ShaunCassidyPoster commented
Thanks, Xyem. I will try the following from the cmd_windows.bat console after running update_wizard_windows.bat:

python -m pip install llama-cpp-python --prefer-binary --upgrade --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cu122 --force-reinstall

The other front ends all seem to require AVX, too. This will help buy some time before building a new computer.
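
To verify the reinstalled wheel actually loads (a quick check, not from the original comment; llama-cpp-python exposes a version string):

python -c "import llama_cpp; print(llama_cpp.__version__)"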

dgdguk commented May 1, 2024

As of #5964, noavx, AMD, etc. wheels have been restored and it's possible to install them now, so I think this issue has been resolved.

Xyem commented May 1, 2024

> As of #5964, noavx, AMD, etc. wheels have been restored and it's possible to install them now, so I think this issue has been resolved.

That's fantastic, thanks for letting me know. Really glad to see that. :)

I'll go ahead and close this then.

Xyem closed this as completed on May 1, 2024