
Change driver verify to check for fp16 and --fp16 #2334

Merged: 5 commits into develop on Oct 18, 2023
Conversation

@CharlieL7 (Collaborator) commented Oct 13, 2023

Change the tolerances based on what we find.
Use std::optional.

@CharlieL7 CharlieL7 self-assigned this Oct 13, 2023
@CharlieL7 CharlieL7 changed the title Change to check for fp16 and --fp16 Change driver verify to check for fp16 and --fp16 Oct 13, 2023
codecov bot commented Oct 13, 2023

Codecov Report

Merging #2334 (abdb00e) into develop (94bda24) will not change coverage.
The diff coverage is n/a.

❗ Current head abdb00e differs from pull request most recent head 5a3ba68. Consider uploading reports for the commit 5a3ba68 to get more accurate results

@@           Coverage Diff            @@
##           develop    #2334   +/-   ##
========================================
  Coverage    91.35%   91.35%           
========================================
  Files          438      438           
  Lines        16465    16465           
========================================
  Hits         15041    15041           
  Misses        1424     1424           

src/driver/main.cpp: review comments (outdated, resolved)
migraphx-bot (Collaborator) commented Oct 14, 2023

Test | Batch | Rate new (acd4c5) | Rate old (650ba4) | Diff
torchvision-resnet50 64 2,853.78 2,855.07 -0.05%
torchvision-resnet50_fp16 64 6,481.63 6,485.89 -0.07%
torchvision-densenet121 32 2,105.15 2,099.19 0.28%
torchvision-densenet121_fp16 32 3,693.70 3,685.09 0.23%
torchvision-inceptionv3 32 1,597.46 1,598.81 -0.08%
torchvision-inceptionv3_fp16 32 2,594.94 2,594.10 0.03%
cadene-inceptionv4 16 707.61 708.32 -0.10%
cadene-resnext64x4 16 698.38 698.74 -0.05%
slim-mobilenet 64 8,369.15 8,357.32 0.14%
slim-nasnetalarge 64 227.10 227.33 -0.10%
slim-resnet50v2 64 2,679.07 2,677.94 0.04%
bert-mrpc-onnx 8 825.07 825.54 -0.06%
bert-mrpc-tf 1 388.27 387.74 0.14%
pytorch-examples-wlang-gru 1 288.15 296.73 -2.89%
pytorch-examples-wlang-lstm 1 303.44 307.96 -1.47%
torchvision-resnet50_1 1 603.81 605.56 -0.29%
torchvision-inceptionv3_1 1 339.31 337.94 0.40%
cadene-dpn92_1 1 398.60 396.52 0.53%
cadene-resnext101_1 1 329.34 329.16 0.05%
slim-vgg16_1 1 464.09 464.46 -0.08%
slim-mobilenet_1 1 2,080.10 2,075.27 0.23%
slim-inceptionv4_1 1 217.58 217.23 0.16%
onnx-taau-downsample 1 306.26 306.31 -0.02%
dlrm-criteoterabyte 1 21.70 21.72 -0.07%
dlrm-criteoterabyte_fp16 1 40.76 40.75 0.03%
agentmodel 1 5,712.95 5,825.40 -1.93%
unet_fp16 2 55.99 56.00 -0.03%
resnet50v1_fp16 1 935.89 935.59 0.03%
bert_base_cased_fp16 64 971.24 971.23 0.00%
bert_large_uncased_fp16 32 305.20 305.23 -0.01%
bert_large_fp16 1 166.75 167.09 -0.21%
distilgpt2_fp16 16 1,278.86 1,278.55 0.02%

This build is OK for merge ✅

migraphx-bot (Collaborator) commented Oct 14, 2023


✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance

✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance

✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance

✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance

✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance

🔴 torchvision-inceptionv3_1: FAILED: MIGraphX is not within tolerance - check verbose output

✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance

✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance

✅ slim-vgg16_1: PASSED: MIGraphX meets tolerance

✅ slim-mobilenet_1: PASSED: MIGraphX meets tolerance

🔴 slim-inceptionv4_1: FAILED: MIGraphX is not within tolerance - check verbose output

✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance

✅ agentmodel: PASSED: MIGraphX meets tolerance

✅ unet: PASSED: MIGraphX meets tolerance

✅ resnet50v1: PASSED: MIGraphX meets tolerance

🔴 bert_base_cased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

✅ bert_large: PASSED: MIGraphX meets tolerance

🔴 distilgpt2_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

src/driver/main.cpp: review comments (outdated, resolved)
@CharlieL7 CharlieL7 marked this pull request as ready for review October 16, 2023 19:07
causten (Collaborator) commented Oct 17, 2023

@CharlieL7 can you check whether the tolerances are valid? I see two models failing that were not failing in the past.

@CharlieL7 (Collaborator, Author) replied:

Quoting @causten: "@CharlieL7 can you check whether the tolerances are valid? I see two models failing that were not failing in the past."

@causten Those tests use the accuracy_checker script, so they should be unrelated to the changes in this PR. The issue originates from #2274.

Review comment on src/driver/main.cpp:

template <class T>
struct value_parser<std::optional<T>>
{
    static T apply(const std::string& x) { return value_parser<T>::apply(x); }
};

Collaborator comment: Actually this should return std::optional<T>:
static std::optional<T> apply(const std::string& x) { return value_parser<T>::apply(x); }

@causten causten merged commit 5139b93 into develop Oct 18, 2023
14 of 15 checks passed
@causten causten deleted the fp16_tol_updates branch October 18, 2023 00:07
5 participants