Skip FP8 dot/gemm for MLIR and offload them to rocBLAS #2924

Merged: 3 commits into develop on Mar 25, 2024
Conversation

@umangyadav (Member) commented Mar 25, 2024

MIGraphX has verify tests that cover the FP8 * FP8 = FP8 case.
rocBLAS supports this case, but MLIR does not: rocBLAS internally converts the output of the GEMM to fp8.

The FP8 * FP8 = FP8 case is unlikely to occur in practice for gemm/dot operations; it should mostly be fp8 * fp8 = fp32.

FP8 tests only run on MI300 hardware, so Jenkins had not seen any failures, but 1416489 broke the unit tests for the FP8 * FP8 = FP8 case on MI300: with that change, all gemms with k <= 1028 are now offloaded to MLIR.
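To illustrate the kind of guard described above, here is a minimal C++ sketch of the offload decision for fp8 gemms. All names here (shape, instruction, is_fp8, use_mlir_for_dot) are hypothetical and not the actual MIGraphX API; the real change lives in MIGraphX's MLIR-enablement logic.

```cpp
// Hypothetical sketch only: MLIR handles fp8 x fp8 -> fp32, while the
// fp8 x fp8 -> fp8 case is left to rocBLAS, which converts the GEMM
// output to fp8 internally. Names are illustrative, not MIGraphX code.
#include <algorithm>
#include <string>
#include <vector>

namespace sketch {

struct shape
{
    std::string type; // e.g. "fp8e4m3fnuz_type", "float_type"
};

struct instruction
{
    shape result;              // output shape of the dot/gemm
    std::vector<shape> inputs; // input shapes
};

// Treat any type whose name starts with "fp8" as an fp8 type.
bool is_fp8(const shape& s) { return s.type.rfind("fp8", 0) == 0; }

// Decide whether a dot/gemm should be lowered through MLIR.
bool use_mlir_for_dot(const instruction& dot)
{
    bool fp8_inputs = std::all_of(dot.inputs.begin(), dot.inputs.end(), &is_fp8);
    if(fp8_inputs and is_fp8(dot.result))
        return false; // skip MLIR; let rocBLAS run this gemm
    return true;
}

} // namespace sketch
```

In the actual pass, the equivalent predicate would simply return false for fp8 * fp8 = fp8 dots so that they fall through to the rocBLAS lowering instead of the MLIR one.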

@umangyadav umangyadav requested a review from causten as a code owner March 25, 2024 14:55
@umangyadav umangyadav self-assigned this Mar 25, 2024
codecov bot commented Mar 25, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.83%. Comparing base (f0cb545) to head (c0fc875).

Additional details and impacted files
@@           Coverage Diff            @@
##           develop    #2924   +/-   ##
========================================
  Coverage    91.83%   91.83%           
========================================
  Files          479      479           
  Lines        18340    18340           
========================================
  Hits         16842    16842           
  Misses        1498     1498           

☔ View full report in Codecov by Sentry.

@migraphx-bot (Collaborator) commented
Test | Batch | Rate new (c0fc87) | Rate old (f0cb54) | Diff
torchvision-resnet50 64 2,819.41 2,825.79 -0.23%
torchvision-resnet50_fp16 64 6,543.00 6,546.40 -0.05%
torchvision-densenet121 32 2,090.71 2,100.61 -0.47%
torchvision-densenet121_fp16 32 3,686.38 3,687.46 -0.03%
torchvision-inceptionv3 32 1,601.12 1,600.62 0.03%
torchvision-inceptionv3_fp16 32 2,551.69 2,551.41 0.01%
cadene-inceptionv4 16 715.33 714.73 0.08%
cadene-resnext64x4 16 678.81 678.24 0.08%
slim-mobilenet 64 5,843.68 5,844.96 -0.02%
slim-nasnetalarge 64 152.96 152.88 0.05%
slim-resnet50v2 64 2,583.29 2,582.46 0.03%
bert-mrpc-onnx 8 917.29 916.77 0.06%
bert-mrpc-tf 1 437.36 436.75 0.14%
pytorch-examples-wlang-gru 1 422.20 426.30 -0.96%
pytorch-examples-wlang-lstm 1 408.09 394.12 3.54% 🔆
torchvision-resnet50_1 1 609.04 604.22 0.80%
cadene-dpn92_1 1 392.08 391.28 0.20%
cadene-resnext101_1 1 331.12 331.08 0.01%
onnx-taau-downsample 1 306.13 304.34 0.59%
dlrm-criteoterabyte 1 28.72 28.73 -0.03%
dlrm-criteoterabyte_fp16 1 48.37 48.31 0.13%
agentmodel 1 7,524.74 7,604.45 -1.05%
unet_fp16 2 57.59 57.62 -0.04%
resnet50v1_fp16 1 907.17 907.72 -0.06%
resnet50v1_int8 1 824.24 807.49 2.07%
bert_base_cased_fp16 64 1,052.21 1,052.67 -0.04%
bert_large_uncased_fp16 32 300.24 300.18 0.02%
bert_large_fp16 1 158.49 158.43 0.04%
distilgpt2_fp16 16 1,856.20 1,860.28 -0.22%
yolov5s 1 475.36 476.75 -0.29%
tinyllama 1 32.86 32.85 0.02%
vicuna-fastchat 1 158.35 158.09 0.16%
whisper-tiny-encoder 1 346.94 348.19 -0.36%
whisper-tiny-decoder 1 400.45 401.19 -0.18%

Check results before merge 🔆

@migraphx-bot (Collaborator) commented

✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance
✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance
✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance
✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance
✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance
✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance
✅ agentmodel: PASSED: MIGraphX meets tolerance
✅ unet: PASSED: MIGraphX meets tolerance
✅ resnet50v1: PASSED: MIGraphX meets tolerance
✅ bert_base_cased_fp16: PASSED: MIGraphX meets tolerance
🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
✅ bert_large: PASSED: MIGraphX meets tolerance
✅ yolov5s: PASSED: MIGraphX meets tolerance
✅ tinyllama: PASSED: MIGraphX meets tolerance
✅ vicuna-fastchat: PASSED: MIGraphX meets tolerance
✅ whisper-tiny-encoder: PASSED: MIGraphX meets tolerance
✅ whisper-tiny-decoder: PASSED: MIGraphX meets tolerance
✅ distilgpt2_fp16: PASSED: MIGraphX meets tolerance

@TedThemistokleous TedThemistokleous added bugfix Fixes a bug found in the code. simple small or simple changes labels Mar 25, 2024
@umangyadav umangyadav merged commit a4a2202 into develop Mar 25, 2024
48 checks passed
@umangyadav umangyadav deleted the fp8_dot branch March 25, 2024 21:32
umangyadav added a commit that referenced this pull request Mar 25, 2024