FP8 support for MatMul on CUDA #22698

Draft · wants to merge 39 commits into main
Conversation

@amarin16 (Collaborator) commented Nov 1, 2024:

No description provided.

// Option values:
// - "0": Gemm fp8 mode is not enabled. [DEFAULT]
// - "1": Gemm fp8 mode is enabled.
static const char* const kOrtSessionOptionsGemmCudaFloat8E4M3FN = "enable_gemm_cuda_float8E4M3FN";

Check notice (Code scanning / CodeQL): Unused static variable. Static variable kOrtSessionOptionsGemmCudaFloat8E4M3FN is never read.
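
A minimal sketch of how this option would be set from user code, assuming the key is eventually read by the CUDA EP (the CodeQL note above points out it is not read yet). Ort::SessionOptions::AddConfigEntry is the existing C++ API entry point for session config entries; the model path and the rest of the setup are illustrative only.

#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "fp8_gemm_demo");
  Ort::SessionOptions session_options;

  // Opt in to the proposed FP8 GEMM path on the CUDA EP.
  // "1" enables it; the default "0" keeps it disabled.
  session_options.AddConfigEntry("enable_gemm_cuda_float8E4M3FN", "1");

  // Register the CUDA execution provider (device 0) before creating the session.
  OrtCUDAProviderOptions cuda_options{};
  session_options.AppendExecutionProvider_CUDA(cuda_options);

  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);
  return 0;
}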
Comment on lines +365 to +368
// TODO add a unit test that has more than 256 elements, so that multiple blocks are used
// test.AddInput<MLFloat16>("A", {2, 4}, FloatsToMLFloat16s({1.0f, 2.0f, 3.0f, 4.0f, -1.0f, -2.0f, -3.0f, -4.0f}));
// test.AddInput<MLFloat16>("B", {4, 3}, FloatsToMLFloat16s({1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f}));
// test.AddOutput<MLFloat16>("Y", {2, 3}, FloatsToMLFloat16s({10.0f, 10.0f, 10.0f, -10.0f, -10.0f, -10.0f}));

Check notice (Code scanning / CodeQL): Commented-out code (test). This comment appears to contain commented-out code.
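
As a sketch of the TODO above (and not part of the PR), here is what a test whose A input has more than 256 elements could look like; the shapes also satisfy the 16-byte alignment constraints discussed later in this thread. It assumes the surrounding OpTester object `test` and the `FloatsToMLFloat16s` helper accept a std::vector<float>.

// Hypothetical test case: A has 16*32 = 512 elements (> 256), so the FP8
// quantization kernel runs across multiple blocks. All-ones inputs keep the
// expected output trivial: every element of Y equals K = 32.
std::vector<float> a_vals(16 * 32, 1.0f);
std::vector<float> b_vals(32 * 16, 1.0f);
std::vector<float> y_vals(16 * 16, 32.0f);
test.AddInput<MLFloat16>("A", {16, 32}, FloatsToMLFloat16s(a_vals));
test.AddInput<MLFloat16>("B", {32, 16}, FloatsToMLFloat16s(b_vals));
test.AddOutput<MLFloat16>("Y", {16, 16}, FloatsToMLFloat16s(y_vals));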
Comment on lines +375 to +377
// test.AddInput<MLFloat16>("B", {4, 3}, FloatsToMLFloat16s({10.f, 11.f, 12.f, 13.f, 14.f, 15.f, 16.f, 17.f, 18.f, 19.f, 20.f, 21.f}));
// test.AddInput<MLFloat16>("B", {4, 3}, FloatsToMLFloat16s({17.f, 19.f, 21.f, 13.f, 14.f, 15.f, 16.f, 17.f, 18.f, 19.f, 20.f, 21.f}));
// test.AddOutput<MLFloat16>("Y", {2, 3}, FloatsToMLFloat16s({160.0f, 170.0f, 180.0f, -160.0f, -170.0f, -180.0f}));

Check notice (Code scanning / CodeQL): Commented-out code (test). This comment appears to contain commented-out code.
// test.AddInput<MLFloat16>("B", {4, 3}, FloatsToMLFloat16s({1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f, 1.f}));
// test.AddOutput<MLFloat16>("Y", {2, 3}, FloatsToMLFloat16s({10.0f, 10.0f, 10.0f, -10.0f, -10.0f, -10.0f}));

test.AddInput<MLFloat16>("A", {2, 2}, FloatsToMLFloat16s({1.0f, 1.0f, 1.0f, 1.0f}));
@tianleiwu (Contributor) commented Nov 6, 2024:
For FP8 GEMM, the pointers and the matrix dimensions (including the leading dimensions/strides) must satisfy 16-byte alignment.

Could you test an input like {2, 16} instead of {2, 2}?

@amarin16 (Collaborator, Author) replied:
Tried that as well, but I see similar differences between the actual and expected results:

The difference between f_expected[i] and f_actual[i] is 11.55078125, which exceeds tolerance, where
f_expected[i] evaluates to 16,
f_actual[i] evaluates to 4.44921875, and
tolerance evaluates to 0.018500000238418579.

@tianleiwu (Contributor) commented Nov 6, 2024:

I saw that you changed A to {2, 16}, but B and the output are still not 16-byte aligned.
How about testing M=16, K=32, N=16?

Detailed requirements: https://docs.nvidia.com/cuda/cublas/index.html#tensor-core-usage

((op_A == CUBLAS_OP_N ? m : k) * AtypeSize) % 16 == 0
((op_B == CUBLAS_OP_N ? k : n) * BtypeSize) % 16 == 0
(m * CtypeSize) % 16 == 0
(lda * AtypeSize) % 16 == 0
(ldb * BtypeSize) % 16 == 0
(ldc * CtypeSize) % 16 == 0
intptr_t(A) % 16 == 0
intptr_t(B) % 16 == 0
intptr_t(C) % 16 == 0

We need to add some checks before enabling fp8. If the requirements are not satisfied, we should not use fp8.
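
A sketch of what such a guard could look like, transcribing the cuBLAS conditions quoted above; the helper name, signature, and call site are assumptions rather than the PR's actual implementation (requires <cstdint> and the cuBLAS header for cublasOperation_t).

// Hypothetical helper (not the PR's code): returns true only if every 16-byte
// alignment requirement listed above holds, so the caller can fall back to the
// regular fp16/fp32 GEMM path otherwise.
inline bool CanUseFp8Gemm(cublasOperation_t op_A, cublasOperation_t op_B,
                          int64_t m, int64_t n, int64_t k,
                          int64_t lda, int64_t ldb, int64_t ldc,
                          int64_t a_type_size, int64_t b_type_size, int64_t c_type_size,
                          const void* A, const void* B, const void* C) {
  auto aligned = [](int64_t bytes) { return bytes % 16 == 0; };
  return aligned((op_A == CUBLAS_OP_N ? m : k) * a_type_size) &&
         aligned((op_B == CUBLAS_OP_N ? k : n) * b_type_size) &&
         aligned(m * c_type_size) &&
         aligned(lda * a_type_size) &&
         aligned(ldb * b_type_size) &&
         aligned(ldc * c_type_size) &&
         reinterpret_cast<uintptr_t>(A) % 16 == 0 &&
         reinterpret_cast<uintptr_t>(B) % 16 == 0 &&
         reinterpret_cast<uintptr_t>(C) % 16 == 0;
}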

@amarin16 (Collaborator, Author) replied:

I am seeing a similar behavior for M=16, K=32, N=16 as well.

The difference between f_expected[i] and f_actual[i] is 7.1015625, which exceeds tolerance, where
f_expected[i] evaluates to 16,
f_actual[i] evaluates to 8.8984375, and
tolerance evaluates to 0.018500000238418579.

> We need to add some checks before enabling fp8. If the requirements are not satisfied, we should not use fp8.

Sure, we can add this.

Comment on lines +293 to +298
float* quant_float = (float*)malloc(256 * sizeof(float));
for (int i = 0; i < 256; i++) {
quant_float[i] = i;
}
float std_quant = ComputeStandardDeviation(quant_float, 256);
free(quant_float);
A Contributor commented:
quant_float is a constant vector, which means std_quant can be a constant. Why do we need to compute it at runtime?
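
One possible shape of the fix being suggested (a sketch, assuming ComputeStandardDeviation keeps its current signature): since the input sequence 0..255 never changes, compute the value once instead of allocating and looping on every call.

// Hypothetical replacement for the block above (needs <array> and <numeric>):
// the standard deviation of the fixed sequence 0, 1, ..., 255 is a constant,
// so compute it once and reuse it.
static const float kStdQuant = []() {
  std::array<float, 256> quant_float{};
  std::iota(quant_float.begin(), quant_float.end(), 0.0f);
  return ComputeStandardDeviation(quant_float.data(), 256);
}();

If ComputeStandardDeviation computes the population standard deviation, the value is sqrt(5461.25), roughly 73.9, so it could even be hard-coded with a comment.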

// Option values:
// - "0": Gemm fp8 mode is not enabled. [DEFAULT]
// - "1": Gemm fp8 mode is enabled.
static const char* const kOrtSessionOptionsGemmCudaFloat8E4M3FN = "enable_gemm_cuda_float8E4M3FN";
A Member commented:
Since this is CUDA EP specific, should this be a generic session option or a CUDA EP provider option?
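
For comparison, a sketch of the provider-option route through the C API; the key name "enable_gemm_float8" is purely hypothetical (no such CUDA provider option exists today), while CreateCUDAProviderOptions, UpdateCUDAProviderOptions, and SessionOptionsAppendExecutionProvider_CUDA_V2 are the existing C API calls for passing custom CUDA EP options.

#include <onnxruntime_cxx_api.h>

// Hypothetical alternative: expose the flag as a CUDA EP provider option
// rather than a generic session config entry.
void ConfigureCudaWithFp8(OrtSessionOptions* session_options) {
  const OrtApi& api = Ort::GetApi();

  OrtCUDAProviderOptionsV2* cuda_options = nullptr;
  Ort::ThrowOnError(api.CreateCUDAProviderOptions(&cuda_options));

  // "enable_gemm_float8" is an illustrative key, not an existing option.
  const char* keys[] = {"enable_gemm_float8"};
  const char* values[] = {"1"};
  Ort::ThrowOnError(api.UpdateCUDAProviderOptions(cuda_options, keys, values, 1));

  Ort::ThrowOnError(api.SessionOptionsAppendExecutionProvider_CUDA_V2(
      session_options, cuda_options));
  api.ReleaseCUDAProviderOptions(cuda_options);
}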

Comment on lines +370 to +372
// test.AddInput<MLFloat16>("A", {2, 2}, FloatsToMLFloat16s({1.0f, 1.0f, 1.0f, 1.0f}));
// test.AddInput<MLFloat16>("B", {2, 2}, FloatsToMLFloat16s({1.0f, 1.0f, 1.0f, 1.0f}));
// test.AddOutput<MLFloat16>("Y", {2, 2}, FloatsToMLFloat16s({2.0f, 2.0f, 2.0f, 2.0f}));

Check notice (Code scanning / CodeQL): Commented-out code (test). This comment appears to contain commented-out code.
3 participants