Issues: ROCm/AMDMIGraphX



Issues list

Add SD3 to DLM for both MIGX and TRT
#3459 opened Sep 18, 2024 by causten
Add llama2 with KV-cache example
#3458 opened Sep 18, 2024 by turneram
Add llama2 with KV-cache to DLM
#3457 opened Sep 18, 2024 by turneram
Add compile pass for hipblaslt
#3455 opened Sep 18, 2024 by ahsan-ca
Use GPU intrinsics and HIP types for FP8 for MIGX JIT kernels [FP8: issues related to FP8 implementation]
#3442 opened Sep 13, 2024 by CharlieL7
Update formatter GitHub action to make suggestions [Continuous Integration: updates parts of continuous integration pipeline] [enhancement: New feature or request]
#3431 opened Sep 10, 2024 by CharlieL7
Figure out the encoding for fp8 in numpy (numpy does not have a way currently) [FP8: issues related to FP8 implementation]
#3422 opened Sep 6, 2024 by CharlieL7
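Absent native NumPy support for fp8, one can at least work with the raw bytes directly. As a minimal sketch (assuming the OCP E4M3FN layout: 1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits, no infinities, and NaN only at exponent 15 with mantissa 7), a hand-rolled decoder from a uint8 bit pattern to a Python float might look like:

```python
def decode_e4m3fn(b: int) -> float:
    """Decode one OCP FP8 E4M3FN byte into a Python float.

    Layout assumed: [sign:1][exponent:4][mantissa:3], exponent bias 7,
    no infinities; the only NaN pattern is exponent=15, mantissa=7.
    """
    sign = -1.0 if b & 0x80 else 1.0
    exp = (b >> 3) & 0xF
    man = b & 0x7
    if exp == 0xF and man == 0x7:
        return float("nan")          # E4M3FN reserves only this pattern for NaN
    if exp == 0:
        return sign * man * 2.0**-9  # subnormal: (man/8) * 2**(1-7)
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)
```

Decoding all 256 byte values this way gives a lookup table that can back a uint8-based NumPy view until a proper dtype exists; for example, `decode_e4m3fn(0x38)` yields 1.0 and `decode_e4m3fn(0x7E)` yields the E4M3FN maximum of 448.0.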
Add group convolution support for fp8 (needs MLIR update) [FP8: issues related to FP8 implementation]
#3421 opened Sep 6, 2024 by CharlieL7
Figure out why the test_gemm_add verify test is inaccurate for fp8 [bugfix: fixes a bug found in the code] [FP8: issues related to FP8 implementation]
#3420 opened Sep 6, 2024 by CharlieL7
Update OCP FP8 support with hipblaslt support [FP8: issues related to FP8 implementation] [roadmap: tasks to finish for a release]
#3419 opened Sep 6, 2024 by CharlieL7
Debug end-to-end accuracy of Llama2
#3394 opened Aug 21, 2024 by turneram