
[AWQ] Cast fns.quantile() result to float32 #3044

Merged — 1 commit, Oct 30, 2024

Conversation

nikita-savelyevv (Collaborator) commented Oct 28, 2024

Changes

Cast the fns.quantile() result to float32 inside the AWQ algorithm.

Reason for changes

fns.quantile() for the NumPy backend returns an np.float64 value. In AWQ this value is used as a clip lower bound, so the clipped tensor becomes float64. Through a chain reaction, downstream weights and activations end up converted to float64 as well.
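
A minimal plain-NumPy sketch of the promotion chain and the fix (variable names are illustrative only, not the actual NNCF `fns` API):

```python
import numpy as np

# Hypothetical stand-in for a float32 weight tensor processed by AWQ.
weights = np.random.rand(4, 8).astype(np.float32)

# np.quantile promotes float32 input to float64 (the behavior described above).
lower_bound = np.quantile(weights, 0.01, axis=-1, keepdims=True)
print(lower_bound.dtype)  # float64

# Clipping against a float64 bound promotes the result to float64,
# and every tensor derived from it downstream inherits that dtype.
clipped = np.clip(weights, lower_bound, None)
print(clipped.dtype)  # float64

# Casting the quantile result back to float32 keeps the pipeline in float32.
lower_bound_f32 = lower_bound.astype(np.float32)
clipped_f32 = np.clip(weights, lower_bound_f32, None)
print(clipped_f32.dtype)  # float32
```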

As I understand it, processing in float64 is not necessary, but it does increase running time. Below are compression time measurements with AWQ enabled before and after the change.

| Model | develop (sec.) | branch (sec.) |
| --- | --- | --- |
| tiny-llama-1.1b | 123 | 109 (-11%) |
| phi3_mini-3.7b | 487 | 419 (-14%) |
| llama3-8b | 1091 | 912 (-16%) |

@github-actions bot added the NNCF PTQ label on Oct 28, 2024
nikita-savelyevv (Collaborator, Author) commented Oct 29, 2024

The post-training weight compression conformance test shows no accuracy degradation (NNCF/job/manual/job/post_training_weight_compression/229). I've compared against develop build 230.

nikita-savelyevv marked this pull request as ready for review on October 29, 2024 05:42
nikita-savelyevv requested a review from a team as a code owner on October 29, 2024 05:42
ljaljushkin merged commit db3a935 into openvinotoolkit:develop on Oct 30, 2024
14 checks passed