diff --git a/src/brevitas_examples/imagenet_classification/ptq/README.md b/src/brevitas_examples/imagenet_classification/ptq/README.md
index 7312b8c2b..29386659b 100644
--- a/src/brevitas_examples/imagenet_classification/ptq/README.md
+++ b/src/brevitas_examples/imagenet_classification/ptq/README.md
@@ -36,6 +36,7 @@ Furthermore, Brevitas additional PTQ techniques can be enabled:
- If Graph equalization is enabled, the _merge\_bias_ technique can also be applied [2 ] [3 ].
- GPTQ [4 ].
- Learned Round [5 ].
+- GPFQ [6 ].
Internally, when defining a quantized model programmatically, Brevitas leverages `torch.fx` and its `symbolic_trace` functionality; an input model must therefore be symbolically traceable for this flow to work.
@@ -212,3 +213,4 @@ and a `RESULTS_IMGCLSMOB.csv` with the results on manually quantized models star
[3 ]: https://github.com/openppl-public/ppq/blob/master/ppq/quantization/algorithm/equalization.py
[4 ]: https://arxiv.org/abs/2210.17323
[5 ]: https://arxiv.org/abs/2004.10568
+[6 ]: https://arxiv.org/abs/2201.11113
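
The symbolic-tracing requirement mentioned above can be checked up front. The sketch below (an illustration, not part of this patch; `SmallNet` is a hypothetical model) shows how `torch.fx.symbolic_trace` converts a module into a `GraphModule`, which is the precondition for the programmatic quantization flow:

```python
import torch.nn as nn
from torch.fx import symbolic_trace

# Hypothetical toy model used only to illustrate traceability.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.relu = nn.ReLU()

    def forward(self, x):
        # No data-dependent control flow, so symbolic tracing succeeds.
        return self.relu(self.conv(x))

# Trace the model into a torch.fx GraphModule.
traced = symbolic_trace(SmallNet())

# The traced graph exposes the ops Brevitas-style passes can rewrite.
print([node.op for node in traced.graph.nodes])
```

Models with data-dependent control flow in `forward` (e.g. `if x.sum() > 0:`) raise a `TraceError` here, which is why the README calls out the tracing requirement.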