Any plans for adding gfx10+ support? #648
I second this feature request, as I use an 8 GB 5500 XT for machine learning. Starting with version 2.3, PyTorch treats hipBLASLt as a hard build requirement whenever ROCm 5.7+ is present. Unless the PyTorch developers make it optional (there is an ongoing issue about this), I and other users will be forced to downgrade to 2.2.2, the latest release without this prerequisite.
Here's my attempt to force the compilation of hipBLASLt for my 5500 XT, which uses the gfx1012 architecture. It hit a brick wall when creating the ExtOp libraries. Unless someone in the community who knows how AMD GPUs work at the hardware level provides unofficial patches for the gfx101x/gfx103x arches, I doubt AMD will include them for the foreseeable future.
@TheTrustedComputer, if I read the code correctly, hipBLASLt and rocWMMA are tied to either the mfma (gfx9) or the wmma (gfx11) instruction set. You can either build hipBLASLt for one of those supported architectures, or patch it into a dummy library the way Gentoo does.
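For anyone wanting to check which family their card falls in, here is a minimal sketch assuming a ROCm build of PyTorch that exposes gcnArchName; the two architecture sets below are my own rough approximation of the mfma/wmma families, not taken from the hipBLASLt sources:

```python
import torch

# Rough approximation (assumption, not from the hipBLASLt sources) of the two
# instruction-set families mentioned above: MFMA on CDNA (gfx9), WMMA on RDNA3 (gfx11).
MFMA_ARCHES = {"gfx908", "gfx90a", "gfx940", "gfx941", "gfx942"}
WMMA_ARCHES = {"gfx1100", "gfx1101", "gfx1102"}

# gcnArchName is exposed by ROCm builds of PyTorch; it may carry feature suffixes
# such as "gfx90a:sramecc+:xnack-", so keep only the base architecture name.
arch = torch.cuda.get_device_properties(0).gcnArchName.split(":")[0]

if arch in MFMA_ARCHES | WMMA_ARCHES:
    print(f"{arch}: has the matrix instructions that hipBLASLt targets")
else:
    print(f"{arch}: no mfma/wmma (e.g. gfx1012), expect the plain hipBLAS path")
```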
@AngryLoki Do you mean any supported GPU architecture? I built it for mine. I also appreciate your clarification regarding PyTorch's hipBLASLt requirement; PyTorch has an environment variable for this as well. Gentoo's patch that turns hipBLASLt into a dummy library is an interesting workaround; I'll probably give that a try. Thanks!
@TheTrustedComputer, build for a random supported architecture (e.g. gfx940). PyTorch will attempt to load hipBLASLt, discover that it was not compiled for your current GPU (and it is technically impossible to compile it for that GPU), and automatically fall back to the old hipBLAS code path (the one used in pytorch-2.2.2). There is no need to set the environment variable.
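A quick way to confirm the fallback behaves as described is to run a plain half-precision GEMM after installing the gfx940-built library. A minimal sketch, assuming a ROCm build of PyTorch 2.3+ (not an official test):

```python
import torch

# Sanity check: on a GPU that hipBLASLt was *not* built for, this GEMM should
# still succeed via the old hipBLAS path rather than abort, per the comment above.
a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
c = a @ b  # routed through hipBLASLt where available, hipBLAS otherwise
torch.cuda.synchronize()
print("GEMM completed:", c.shape, c.dtype)
```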
I have an RX 5500 XT (gfx1012), so does that mean I can only use PyTorch 2.2.2? Do I need to compile PyTorch myself?
@lalala-233 You can still use the latest PyTorch on the 5500 XT as long as you compile it for the GPU's architecture and disable flash and memory-efficient attention (Triton has no support for the gfx101x and gfx103x arches); see the sketch below. This is the only option for RDNA1 users, unless you want to use the very old PyTorch 2.0.0 nightly wheel against ROCm 5.2 and have the card behave like an RDNA2 card; later ROCm versions use features unique to that instruction set, so this workaround is no longer possible on RDNA1 hardware. For hipBLASLt itself, install it from your distribution's repository (fastest), or build it as a dummy library by applying Gentoo's patch and following @AngryLoki's instructions (slower, but still faster than building for a supported architecture). It has been a linking requirement for PyTorch since version 2.3, and a proposal to relax it to an optional one is in the backlog for a future release.
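As a sketch of the attention-disabling step, the following uses PyTorch's public SDPA backend toggles; the PYTORCH_ROCM_ARCH invocation in the comment is an example for this specific card, so adjust it to your own build workflow:

```python
import torch

# Assumes PyTorch was built from source for this card, e.g. with
#   PYTORCH_ROCM_ARCH=gfx1012 python setup.py develop
# (example invocation, not the only way to build).

# Keep only the math implementation of scaled_dot_product_attention, since the
# Triton-based flash / memory-efficient kernels don't exist for gfx101x/gfx103x.
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_math_sdp(True)

# Smoke test: attention now runs through the math backend.
q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
print(out.shape)
```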
I can no longer use RX 6xxx cards for LLM fine-tuning because of the new hipBLASLt requirement. Are there any plans to add support in the future?