
Commit

auto-generating sphinx docs
pytorchbot committed Oct 10, 2024
1 parent afe4a94 commit ec33303
Showing 9 changed files with 119 additions and 119 deletions.
3 binary files changed (not shown).
210 changes: 105 additions & 105 deletions main/_modules/torchao/dtypes/affine_quantized_tensor.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion main/_modules/torchao/quantization/quant_api.html
@@ -1266,7 +1266,7 @@ Source code for torchao.quantization.quant_api
     e.g. fp6_e3_m2, fp6_e2_m3, ...
     The packing format and kernels are from the fp6-llm paper: https://arxiv.org/abs/2401.14112
     github repo: https://github.com/usyd-fsalab/fp6_llm, now renamed to quant-llm
-    For more details for packing please see: :class:`~torchao.dtypes.fpx.FpxTensorCoreAQTLayout`
+    For more details for packing please see: :class:`~torchao.dtypes.fpx.FpxTensorCoreAQTTensorImpl`

     This is experimental, will be merged with `to_affine_quantized_floatx`
     in the future
10 changes: 5 additions & 5 deletions main/_sources/tutorials/template_tutorial.rst.txt
@@ -66,11 +66,11 @@ Example code (the output below is generated automatically):

 .. code-block:: none

-    tensor([[0.8493, 0.0526, 0.5841],
-            [0.6383, 0.5932, 0.8083],
-            [0.3087, 0.3515, 0.4735],
-            [0.8996, 0.7762, 0.1826],
-            [0.2607, 0.2312, 0.7631]])
+    tensor([[0.7804, 0.8663, 0.7150],
+            [0.4530, 0.6350, 0.2086],
+            [0.9097, 0.1238, 0.0825],
+            [0.5196, 0.2840, 0.3932],
+            [0.3891, 0.3960, 0.5983]])
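The churn in these values is expected: the tutorial output is regenerated on every docs build from unseeded random numbers. A sketch of the generating snippet (the tutorial prints a random 5x3 tensor; torch.rand is an assumption, only print(x) appears in this diff):

    import torch

    x = torch.rand(5, 3)   # unseeded, so every docs build prints new values
    print(x)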
4 changes: 2 additions & 2 deletions main/generated/torchao.dtypes.AffineQuantizedTensor.html
@@ -390,7 +390,7 @@

 AffineQuantizedTensor

-class torchao.dtypes.AffineQuantizedTensor(layout_tensor: AQTLayout, block_size: Tuple[int, ...], shape: Size, quant_min: Optional[Union[int, float]] = None, quant_max: Optional[Union[int, float]] = None, zero_point_domain: ZeroPointDomain = ZeroPointDomain.INT, dtype=None, strides=None)[source]
+class torchao.dtypes.AffineQuantizedTensor(tensor_impl: AQTTensorImpl, block_size: Tuple[int, ...], shape: Size, quant_min: Optional[Union[int, float]] = None, quant_max: Optional[Union[int, float]] = None, zero_point_domain: ZeroPointDomain = ZeroPointDomain.INT, dtype=None, strides=None)[source]
 Affine quantized tensor subclass. Affine quantization means we quantize the floating point tensor with an affine transformation:
     quantized_tensor = float_tensor / scale + zero_point

@@ -402,7 +402,7 @@ AffineQuantizedTensor

 regardless of the internal representation’s type or orientation.

 fields:
-    layout_tensor (AQTLayout): tensor that serves as a general layout storage for the quantized data,
+    tensor_impl (AQTTensorImpl): tensor that serves as a general tensor impl storage for the quantized data,
         e.g. storing plain tensors (int_data, scale, zero_point) or packed formats depending on device
         and operator/kernel
     block_size (Tuple[int, …]): granularity of quantization, this means the size of the tensor elements that’s sharing the same qparam
         e.g. when size is the same as the input tensor dimension, we are using per tensor quantization
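To make the affine transform in the docstring concrete, here is a small worked sketch in plain PyTorch (illustrative values, per-tensor qparams, 4-bit range assumed; not part of this commit):

    import torch

    float_tensor = torch.tensor([0.1, -0.2, 0.3])
    scale, zero_point = 0.05, 8  # one qparam pair, i.e. per-tensor block_size

    # quantize: quantized_tensor = float_tensor / scale + zero_point,
    # then round and clamp to the integer range [quant_min, quant_max]
    q = torch.clamp(torch.round(float_tensor / scale + zero_point), 0, 15)

    # dequantize: invert the affine map
    deq = (q - zero_point) * scale
    print(q)    # tensor([10.,  4., 14.])
    print(deq)  # tensor([ 0.1000, -0.2000,  0.3000])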
2 changes: 1 addition & 1 deletion main/searchindex.js

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions main/tutorials/template_tutorial.html
@@ -413,11 +413,11 @@ Steps

 print(x)

-tensor([[0.8493, 0.0526, 0.5841],
-        [0.6383, 0.5932, 0.8083],
-        [0.3087, 0.3515, 0.4735],
-        [0.8996, 0.7762, 0.1826],
-        [0.2607, 0.2312, 0.7631]])
+tensor([[0.7804, 0.8663, 0.7150],
+        [0.4530, 0.6350, 0.2086],
+        [0.9097, 0.1238, 0.0825],
+        [0.5196, 0.2840, 0.3932],
+        [0.3891, 0.3960, 0.5983]])
