diff --git a/prototype_source/pt2e_quant_ptq_static.rst b/prototype_source/pt2e_quant_ptq_static.rst
index 5f6ae77328..d8307b24ae 100644
--- a/prototype_source/pt2e_quant_ptq_static.rst
+++ b/prototype_source/pt2e_quant_ptq_static.rst
@@ -307,7 +307,7 @@ For post training quantization, we'll need to set model to the eval mode.
       .set_object_type(torch.nn.functional.linear, qconfig_opt) # or torch functional op
       .set_module_name("foo.bar", qconfig_opt)
 
-We have another `tutorial `_ that talks about how to write a new ``Quantizer``.
+We have another `tutorial `_ that describes how to write a new ``Quantizer``.
 
 6. Prepare the Model for Post Training Static Quantization
 ----------------------------------------------------------
diff --git a/prototype_source/pt2e_quantizer.rst b/prototype_source/pt2e_quantizer.rst
index c12237fdb4..c4dcce4116 100644
--- a/prototype_source/pt2e_quantizer.rst
+++ b/prototype_source/pt2e_quantizer.rst
@@ -10,12 +10,16 @@ Prerequisites:
 
 Required:
 
 - `Torchdynamo concepts in PyTorch `__
+- `Quantization concepts in PyTorch `__
+- `(prototype) PyTorch 2.0 Export Post Training Static Quantization `__
 
 Optional:
 
 - `FX Graph Mode post training static quantization `__
+- `BackendConfig in PyTorch Quantization FX Graph Mode `__
+- `QConfig and QConfigMapping in PyTorch Quantization FX Graph Mode `__
 
 
 Introduction
@@ -25,12 +29,12 @@ Introduction
 
 (1). What is supported quantized operator or patterns in the backend
 (2). How can users express the way they want their floating point model to be quantized, for example, quantized the whole model to be int8 symmetric quantization, or quantize only linear layers etc.
 
-Please see `here `__ For motivations for ``Quantizer``.
+Please see `here `__ for motivations behind the new API and ``Quantizer``.
 
 An existing quantizer object defined for ``XNNPACK`` is in `QNNPackQuantizer `__
 
-Annotation API:
+Annotation API
 ^^^^^^^^^^^^^^^^^^^
 
 ``Quantizer`` uses annotation API to convey quantization intent for different operators/patterns.
@@ -269,4 +273,4 @@ Conclusion
 With this tutorial, we introduce the new quantization path in PyTorch 2.0. Users can learn about
 how to define a ``BackendQuantizer`` with the ``QuantizationAnnotation API`` and integrate it into the quantization 2.0 flow.
 Examples of ``QuantizationSpec``, ``SharedQuantizationSpec``, ``FixedQParamsQuantizationSpec``, and ``DerivedQuantizationSpec``
-are given for specific annotation use case. This is a prerequisite to be able to quantize a model in PyTorch 2.0 Export Quantization flow. Please follow `this tutorial `_ to actually quantize a model.
+are given for specific annotation use cases. This is a prerequisite to be able to quantize a model in the PyTorch 2.0 Export Quantization flow. You can use `XNNPACKQuantizer `_ as an example to start implementing your own ``Quantizer``. After that, please follow `this tutorial `_ to actually quantize your model.
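
Note for reviewers: the end-to-end flow that the two patched tutorials walk through is, roughly, the sketch below. This is a minimal, untested illustration assuming the PyTorch 2.1-era prototype module paths (``capture_pre_autograd_graph``, ``prepare_pt2e``, ``convert_pt2e``, ``XNNPACKQuantizer``); these are prototype APIs and may move between releases::

    # Minimal sketch of the PT2 Export quantization flow (prototype API,
    # module paths as of the PyTorch 2.1 era; they may differ in other releases).
    import torch
    from torch._export import capture_pre_autograd_graph
    from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
    from torch.ao.quantization.quantizer.xnnpack_quantizer import (
        XNNPACKQuantizer,
        get_symmetric_quantization_config,
    )

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(5, 10)

        def forward(self, x):
            return self.linear(x)

    example_inputs = (torch.randn(1, 5),)
    model = M().eval()  # post training quantization requires eval mode

    # Export the model to a pre-autograd ATen graph.
    exported = capture_pre_autograd_graph(model, example_inputs)

    # Configure the backend quantizer; XNNPACKQuantizer is the reference
    # implementation that pt2e_quantizer.rst points readers at.
    quantizer = XNNPACKQuantizer()
    quantizer.set_global(get_symmetric_quantization_config())

    # Insert observers, calibrate with representative data, then convert.
    prepared = prepare_pt2e(exported, quantizer)
    prepared(*example_inputs)  # calibration pass
    quantized = convert_pt2e(prepared)

A custom backend ``Quantizer`` written against the annotation API would simply replace ``XNNPACKQuantizer`` here; the export, prepare, and convert steps stay the same.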