# Add support for native asymmetric quantization to AQTv2. #725
**copybara-service bot** pushed a commit that referenced this issue on Sep 19, 2024:

> Integration of native quantization with biases will require computing the cross terms. See [#725](#725)
>
> Itemized changes:
> - Add `IntAsymmetric` to handle asymmetric integer numerics.
>   - This class forgoes some of the more research-y parameters present on `IntSymmetric`.
> - Add `MinMaxCalibration` to calculate the scale and bias for asymmetric quantization.
>
> I additionally tested this change by training MNIST models using `flax_e2e_model`. With symmetric quantization the model fails to converge for `config.config_v4(fwd_bits=2, dlhs_bits=None, drhs_bits=None)` (due to `NaN` losses). With asymmetric quantization the model converges even with `config.config_v4(fwd_bits=2, dlhs_bits=2, drhs_bits=4)`.
>
> PiperOrigin-RevId: 651580879
**copybara-service bot** pushed further commits that referenced this issue on Sep 20, Sep 23, Sep 27, and Oct 4, 2024, each with the same message as above.
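For context on what the `MinMaxCalibration` added in that commit computes, here is a minimal sketch of the standard min-max recipe for deriving an asymmetric scale and zero point. The function names and signatures are illustrative assumptions, not AQT's actual API:

```python
import jax.numpy as jnp

def min_max_calibrate(x, bits=8):
    # Map the observed range [min(x), max(x)] onto the unsigned integer
    # grid [0, 2**bits - 1], keeping 0.0 exactly representable.
    qmin, qmax = 0, 2**bits - 1
    x_min = jnp.minimum(jnp.min(x), 0.0)
    x_max = jnp.maximum(jnp.max(x), 0.0)
    scale = jnp.maximum((x_max - x_min) / (qmax - qmin), 1e-8)  # avoid div-by-zero
    zero_point = jnp.round(qmin - x_min / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, bits=8):
    # Asymmetric quantization: q = round(x / scale) + zero_point, clipped
    # to the grid; dequantization is x ~= scale * (q - zero_point).
    q = jnp.round(x / scale) + zero_point
    return jnp.clip(q, 0, 2**bits - 1)
```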
AQTv2 supports biases and will soon support asymmetric quantization, but only via fake quantization. Supporting native integer asymmetric quantization requires calculating the cross terms in `DotGeneralQuantizer` (AQTv2's `conv` and `dot_general` operation quantizer).
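To see why native (non-fake) asymmetric quantization forces those cross terms, expand the dot product of two asymmetrically quantized tensors. A sketch under simplifying assumptions (1-D inputs, per-tensor scalar scales and zero points; this is not `DotGeneralQuantizer`'s actual code):

```python
import jax.numpy as jnp

def asymmetric_dot(q_x, s_x, z_x, q_w, s_w, z_w):
    # With x ~= s_x * (q_x - z_x) and w ~= s_w * (q_w - z_w), expanding gives:
    # dot(x, w) = s_x*s_w * (q_x.q_w - z_w*sum(q_x) - z_x*sum(q_w) + n*z_x*z_w)
    n = q_x.shape[0]
    main = jnp.dot(q_x, q_w)       # the only term symmetric quantization needs
    cross_x = z_w * jnp.sum(q_x)   # cross term from w's zero point
    cross_w = z_x * jnp.sum(q_w)   # cross term from x's zero point
    const = n * z_x * z_w          # constant correction term
    return s_x * s_w * (main - cross_x - cross_w + const)
```

The two `sum` terms are the cross terms: each couples one operand's integer values to the other operand's zero point, so they must be computed inside the quantized operation rather than folded into the scales, which is why native support touches `dot_general` itself.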